From Framework to Live Product
in 40 Hours

Harrison Painter | LaunchReady.ai | March 2026

35+ Questions | 40 hrs Build Time | ~$133 Monthly Cost | 7 Levels

Last updated: March 21, 2026

The Problem

Most professionals know AI matters. Almost none of them can tell you where they actually stand.

The existing options are not helpful. On one end, you get ten-question quizzes that tell you nothing. On the other, you get academic instruments designed for researchers, not business leaders. Nothing in between measures practical AI proficiency in business terms and tells you specifically what to learn next.

I needed an assessment that sorted people into a clear framework, gave them an identity they wanted to share, and pointed them toward the next step. It did not exist. So I built it.

The Decision

I had been developing the 7 Levels of AI Proficiency framework as the foundation for all of LaunchReady's training programs. The framework was solid. But frameworks only matter if people can place themselves in one.

I decided to build a full adaptive assessment -- not a toy quiz, but a real psychometric instrument with scenario-based questions, weighted scoring, and a progression model that stops when it finds your ceiling.

The goal: someone takes the assessment, gets a result that feels accurate, sees a badge worth sharing, and lands in my email system tagged by level. One product that does lead generation, audience segmentation, and brand building simultaneously.

Level 5: Design Thinker -- I was not building a feature. I was designing an experience that would create its own distribution.

About the 7 Levels

This case study references the 7 Levels of AI, a proficiency framework developed by LaunchReady.ai that maps how professionals progress from basic AI usage to full orchestration. Each level is defined by a human skill, not a technical one. The inline callouts throughout this document show which level a specific decision or action represents. A full reference is included at the bottom, or take the free assessment at assess.launchready.ai to find your level.

What I Built

Two production websites in 10 days.

1. assess.launchready.ai -- AI Proficiency Assessment

  • Adaptive 7-level assessment (5 questions per level, stops at your ceiling)
  • 35+ research-based scenario questions with weighted scoring
  • Guttman + partial credit scoring model (server-side only)
  • 7 photorealistic challenge coin badge designs
  • Dynamic OG share card generation (1200x630 PNG per result)
  • LinkedIn-optimized viral share copy
  • Supabase database with migrations and session tracking
  • API routes with rate limiting
  • Turnstile CAPTCHA on email submission
  • Kit integration that auto-tags subscribers by level
  • Email drip sequences for post-assessment nurture
  • localStorage persistence so you can resume mid-assessment
  • Shareable results with personalized strengths and next steps

2. launchready.ai -- Full marketing website rebuild

  • Programs, coaches, testimonials, pricing tiers
  • Assessment showcase integrated across all pages
  • Full SEO (meta tags, OG, JSON-LD, canonical URLs, sitemap, robots.txt)
  • GA4 tracking
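
Of the assessment features above, the mid-assessment resume is worth a closer look. A minimal sketch of how localStorage-backed persistence might work follows; every name here is illustrative rather than taken from the actual codebase, and the storage interface is injected so the same code runs against `window.localStorage` in the browser or a plain object in tests.

```typescript
// Sketch of resumable assessment state. All names are illustrative,
// not the real implementation.
interface AssessmentProgress {
  sessionId: string;
  level: number;     // level currently being tested (1-7)
  answers: number[]; // selected option index per question so far
}

// Minimal storage interface: window.localStorage satisfies it in the
// browser; any key-value object can stand in during tests.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const STORAGE_KEY = "assessment-progress";

function saveProgress(store: KVStore, p: AssessmentProgress): void {
  store.setItem(STORAGE_KEY, JSON.stringify(p));
}

function loadProgress(store: KVStore): AssessmentProgress | null {
  const raw = store.getItem(STORAGE_KEY);
  if (raw === null) return null;
  try {
    return JSON.parse(raw) as AssessmentProgress;
  } catch {
    return null; // corrupted state: start over rather than crash
  }
}
```

The design choice worth noting: resume state lives entirely on the client, so the server-side scoring described later never has to trust it; answers are re-scored server-side regardless of what localStorage claims.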

How It Happened

Day | What Shipped
Mar 7 | LaunchReady.ai v2 -- full marketing site, program cards, coach bios, testimonials, pricing
Mar 8 | Polish -- name fixes, layout tweaks
Mar 10 | Assessment v1 -- Next.js scaffold, Supabase schema, results page, Turnstile, rate limiting
Mar 11 | Email gate + Kit integration, branding, assessment redesign, drip sequences
Mar 12 | OG share images, trajectory map, badge images, v3 questions, scoring fixes, SEO
Mar 13 | localStorage persistence, audit cleanup
Mar 16 | Escalator v2 -- adaptive 7-level system, challenge coins, share card redesign, results page overhaul, LinkedIn viral copy, home page rewrite

V2 was built on top of V1's infrastructure in about 3-4 hours. Same database. Same email pipeline. Same deployment. That is the compound effect of building systems, not just features.

Level 6: Systems Integrator -- V1 was a product. V2 was an upgrade to an existing system. The second version took a fraction of the time because the architecture was already in place.

What 40 Hours Actually Looks Like

People hear "AI built it" and assume I typed a prompt and walked away. Here is what I actually spent 40 hours doing.

Assessment design and psychometric research (~10 hours). Designing the 7-level framework. Writing scenario-based questions that test real business judgment, not trivia. Building the Guttman + partial credit scoring model. Calibrating difficulty across levels. AI drafted questions, but I evaluated every single one against the framework, rewrote weak scenarios, adjusted answer weights, and killed anything that felt like a quiz instead of a real assessment. This is I/O psychology work. The human judgment is the product.
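
The Guttman + partial credit combination described above can be sketched in a few lines: each answer earns graded credit rather than a binary right/wrong, and levels must be passed in order, with placement stopping at the first failed level. The credit scale, pass threshold, and function names below are invented for illustration; the real calibration is not shown.

```typescript
// Illustrative pass rule for one level: answers earn partial credit
// (0..2 here), and the level is passed only if total credit clears a
// threshold. Both constants are made-up examples, not the real values.
const MAX_CREDIT = 2;   // full credit per question
const PASS_RATIO = 0.6; // fraction of available credit needed to pass

function levelPassed(credits: number[]): boolean {
  const earned = credits.reduce((a, b) => a + b, 0);
  const available = credits.length * MAX_CREDIT;
  return earned / available >= PASS_RATIO;
}

// Guttman-style placement: walk levels in order and stop at the first
// fail. The achieved level is the last one passed -- the "ceiling".
function placement(creditsByLevel: number[][]): number {
  let level = 0;
  for (const credits of creditsByLevel) {
    if (!levelPassed(credits)) break;
    level += 1;
  }
  return level;
}
```

This also shows why the model runs server-side only: the thresholds and weights are the assessment's intellectual property, and exposing them client-side would let results be gamed.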

Level 5: Design Thinker -- The assessment framework itself is a designed experience. Every question had to feel like a real business situation, not a textbook exercise.

Product design and UX (~8 hours). How should the assessment flow? When do you show results? Where does sharing go? What creates the emotional high that drives virality? These decisions came from studying what makes assessments like MBTI, CliftonStrengths, and Spotify Wrapped spread. I mapped the emotional arc of the experience before writing a line of code.

Visual design and brand (~6 hours). Seven challenge coin designs that feel earned, not given. OG share cards optimized for LinkedIn's feed layout. Brand consistency across two sites. Every visual was a judgment call about identity and perceived value. Military coins are circular -- LinkedIn-native. Dark navy pops against white feeds. These are not random aesthetic choices.

Content strategy and copywriting (~5 hours). Landing page copy. Results page personalization for each of the 7 levels. LinkedIn share text engineered for the "there are 7 levels" pattern. Email drip sequences. Writing for conversion while sounding like a real person talking to another real person.

Technical architecture and iteration (~6 hours). Database schema decisions. API design. Email integration. Security (rate limiting, Turnstile, server-side scoring). Debugging edge cases. The technical work was real, but it moved fast because I was making architecture decisions and Claude was writing the implementation.
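
Of the security measures listed, rate limiting is the simplest to sketch. The following is a minimal fixed-window limiter of the kind an API route might use; the window size, limit, and names are illustrative, and a deployed version would need shared state (a database or edge KV) because serverless instances do not share memory.

```typescript
// Minimal fixed-window rate limiter (in-memory sketch only).
const WINDOW_MS = 60_000; // 1-minute window (illustrative)
const MAX_HITS = 10;      // max requests per window per key (illustrative)

const windows = new Map<string, { start: number; hits: number }>();

function allowRequest(key: string, now: number): boolean {
  const w = windows.get(key);
  if (!w || now - w.start >= WINDOW_MS) {
    // No record, or the window expired: start a fresh window.
    windows.set(key, { start: now, hits: 1 });
    return true;
  }
  w.hits += 1;
  return w.hits <= MAX_HITS;
}
```

Passing `now` in as a parameter instead of calling `Date.now()` inside keeps the function deterministic and easy to test, which is the kind of architecture decision the paragraph above is describing.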

Quality assurance and competitive analysis (~5 hours). I benchmarked every surface against 16Personalities, CliftonStrengths, and Spotify Wrapped. Scored the landing page, results page, and OG share card on structured rubrics. Found gaps. Closed them. Final scores: Landing 78/100, Results 80/100, OG Card 82/100. The biggest remaining gap is social proof -- no testimonials or taker count yet.

Level 3: Critical Thinker -- Benchmarking against the best in the category and scoring your own work honestly is how you close quality gaps that AI alone will not catch.

What This Would Cost Traditionally

Approach | Cost | Timeline
Freelancers/specialists (5-6 people) | $10,000 - $25,000 | 3-4 months
Agency | $15,000 - $35,000 | 4-6 months
Harrison + Claude | ~$133/month + 40 hours | 10 days

The freelancer estimate covers an I/O psychologist for assessment design, a frontend developer for two sites, a graphic designer for the coins and share cards, a copywriter, and an SEO specialist. The agency estimate adds project management overhead and margin. Both are based on published 2025-2026 rate data from Salary.com, Clutch.co, and industry benchmarks.

The Decisions That Shaped It

Adaptive vs. fixed assessment.

V1 had 27 fixed questions. Everyone answered everything. V2 adapts -- 5 questions per level, stops when you hit your ceiling. It respects the user's time, produces a more accurate result, and makes the conversation more interesting. "What level did you get?" is a better question than "what was your score?"

Challenge coins vs. generic badges.

Military challenge coins feel earned. They are circular, which is native to LinkedIn profile frames and social feeds. They have a collectibility factor. Dark metallic on navy blue pops in a white feed. Generic badges feel like participation trophies.

Share section at position number two.

Most assessment sites bury sharing at the bottom. But the emotional high is right after seeing your result. That is when people share. I put the share section immediately after the hero badge -- before strengths, before next steps, before anything else.

Strengths spotlight vs. radar chart.

Radar charts look analytical, but they do not drive sharing. Research on viral assessments shows identity reinforcement ("here is who you are") drives sharing far more than data visualization. Strengths do that. Charts do not.

Level 5: Design Thinker -- Every one of these decisions is about human psychology, not technology. The viral mechanics are designed into the product, not bolted on after.

"There are 7 levels" opening hook.

Every LinkedIn share opens with the same line. When multiple people share, it creates a recognizable pattern in feeds -- the same mechanic that made "I'm a [type]" go viral for MBTI.

Level 6: Systems Integrator -- The email tagging by level means every person who takes the assessment enters my system pre-segmented. I can send Level 2 people different content than Level 5 people. The assessment is not just a product -- it is a segmentation engine.
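
The segmentation mechanic is simple to illustrate: each result level maps to one tag, and the tag determines which content a subscriber receives. The tag names below are invented for illustration; the actual Kit tag scheme is not shown.

```typescript
// Illustrative level-to-tag mapping for email segmentation.
// Tag names are made up; the real Kit configuration differs.
const LEVEL_TAGS: Record<number, string> = {
  1: "level-1-cadet",
  2: "level-2-ensign",
  3: "level-3-lieutenant",
  4: "level-4-commander",
  5: "level-5-captain",
  6: "level-6-admiral",
  7: "level-7-mission-director",
};

function tagForLevel(level: number): string {
  const tag = LEVEL_TAGS[level];
  if (!tag) throw new Error(`unknown level: ${level}`);
  return tag;
}
```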

What This Means for You

This is not a story about AI replacing professionals. An I/O psychologist with 20 years of experience would build a better assessment than mine. A senior developer would write cleaner code. A brand designer would create more polished visuals.

But I did not need perfect. I needed real -- a working product, live in the world, collecting users and building my email list while I figure out what to improve.

The 40 hours I spent were not spent prompting. They were spent thinking. Designing. Evaluating. Deciding what to build, how it should feel, and what "good enough to ship" actually looks like. AI handled the implementation. I handled the judgment.

That is Level 6. The human is not in the loop.
The human IS the loop.

Tools: Claude Code (Opus 4.6), Claude API, Next.js 15, Vercel, Supabase, Tailwind CSS v4, Kit v3, Cloudflare Turnstile, GA4, GitHub

Frequently Asked Questions

What is an AI proficiency assessment?

An assessment that measures practical AI skills across a structured proficiency framework. The LaunchReady AI Proficiency Assessment uses scenario-based questions, adaptive difficulty, and a 7-level model to determine where you stand and what to learn next.

How much does it cost to build an assessment product?

Traditional cost ranges from $10,000 to $35,000 using freelancers or an agency, requiring 3-6 months. Harrison built the LaunchReady AI Proficiency Assessment for approximately $133 per month in infrastructure costs, completing the full product in 40 hours across 10 days.

Can you build a psychometric assessment with AI?

Yes, with human expertise directing the design. AI can draft questions and generate code, but the framework design, scoring model calibration, scenario quality evaluation, and competitive benchmarking all require human judgment. Harrison evaluated every question against the 7 Levels framework and rewrote weak scenarios.

What are the 7 Levels of AI?

A proficiency framework by LaunchReady.ai that maps how professionals progress from basic AI usage to full orchestration. The seven levels are: Cadet (AI Aware), Ensign (Prompt Engineer), Lieutenant (Critical Thinker), Commander (Context Engineer), Captain (Design Thinker), Admiral (Systems Integrator), and Mission Director (AI Orchestrator). Each level is defined by a human skill, not a technical one.

The 7 Levels of AI

A proficiency framework that maps how professionals progress from basic AI usage to full orchestration. Each level is defined by a human skill, not a technical one.

Level 1: Cadet (AI Aware)
You know AI exists and you have tried it. You type requests the way you would type into a search engine. The outputs feel hit-or-miss because they are.
Human skill: Self-awareness. Knowing what you do not know.
Level 2: Ensign (Prompt Engineer)
You give AI clear instructions with context, constraints, and format. Your results are better than most because your inputs are better. But you are still treating AI like a vending machine.
Human skill: Structured thinking. You organize your thoughts before giving them to AI.
Level 3: Lieutenant (Critical Thinker)
You use AI as a thinking partner. You ask follow-up questions, stress-test ideas, and push back on weak answers. Most people quit when AI gives a bad answer. You iterate.
Human skill: Self-management. Frustration tolerance and persistence when AI underperforms.
Level 4: Commander (Context Engineer)
You manage the conversation itself. You know when to start fresh, how to carry forward what matters, and why a clean session with good context beats a long session with a full memory.
Human skill: Systems awareness. You see the conversation as a system with constraints and limits.
Level 5: Captain (Design Thinker)
You design AI experiences for others. You think about what data AI needs, how workflows should be structured, and how to scope access responsibly. You direct what gets built, even if you are not writing the code.
Human skill: Design thinking. You work backward from the outcome and design the system to produce it.
Level 6: Admiral (Systems Integrator)
You document your best AI processes into reusable workflows. Your results are consistent because the system is consistent. You build infrastructure that compounds.
Human skill: Stakeholder navigation. Building AI systems for organizations requires trust and buy-in.
Level 7: Mission Director (AI Orchestrator)
You chain workflows into pipelines that run with minimal human intervention. You design feedback loops. You change how organizations work. The job of the future is yours because you are the most human, not the most technical.
Human skill: Inspirational leadership. Culture change and psychological safety at scale.
Harrison Painter
AI Business Strategist | Founder, LaunchReady.ai & AI Law Tracker
Harrison helps businesses build AI systems that cut costs and grow revenue. He has built three production AI products as a non-developer using Claude Code.
linkedin.com/in/harrisonpainter

Ready to Build Something Like This?

Find your AI proficiency level, then let's talk about what you can build.