From Framework to Live Product in 40 Hours
Harrison Painter | LaunchReady.ai | March 2026
Last updated: March 21, 2026
The Problem
Most professionals know AI matters. Almost none of them can tell you where they actually stand.
The existing options are not helpful. On one end, you get ten-question quizzes that tell you nothing. On the other, you get academic instruments designed for researchers, not business leaders. Nothing in between measures practical AI proficiency in business terms and tells you specifically what to learn next.
I needed an assessment that sorted people into a clear framework, gave them an identity they wanted to share, and pointed them toward the next step. It did not exist. So I built it.
The Decision
I had been developing the 7 Levels of AI Proficiency framework as the foundation for all of LaunchReady's training programs. The framework was solid. But frameworks only matter if people can place themselves in one.
I decided to build a full adaptive assessment -- not a toy quiz, but a real psychometric instrument with scenario-based questions, weighted scoring, and a progression model that stops when it finds your ceiling.
The goal: someone takes the assessment, gets a result that feels accurate, sees a badge worth sharing, and lands in my email system tagged by level. One product that does lead generation, audience segmentation, and brand building simultaneously.
This case study references the 7 Levels of AI, a proficiency framework developed by LaunchReady.ai that maps how professionals progress from basic AI usage to full orchestration. Each level is defined by a human skill, not a technical one. The inline callouts throughout this document show which level a specific decision or action represents. A full reference is included at the bottom, or take the free assessment at assess.launchready.ai to find your level.
What I Built
Two production websites in 10 days.
1. assess.launchready.ai -- AI Proficiency Assessment
- Adaptive 7-level assessment (5 questions per level, stops at your ceiling)
- 35+ research-based scenario questions with weighted scoring
- Guttman + partial credit scoring model (server-side only)
- 7 photorealistic challenge coin badge designs
- Dynamic OG share card generation (1200x630 PNG per result; see the sketch after this list)
- LinkedIn-optimized viral share copy
- Supabase database with migrations and session tracking
- API routes with rate limiting
- Turnstile CAPTCHA on email submission
- Kit integration that auto-tags subscribers by level
- Email drip sequences for post-assessment nurture
- localStorage persistence so you can resume mid-assessment
- Shareable results with personalized strengths and next steps
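For the technically curious, here is one common way to render those 1200x630 share cards in Next.js, using `ImageResponse` from `next/og`. This is a hedged sketch, not the production code: the route path, level titles, and styling are all placeholders.

```tsx
// app/api/og/[level]/route.tsx -- hypothetical path; illustrative, not the shipped route.
import { ImageResponse } from 'next/og';

export const runtime = 'edge';

// Illustrative level titles; the real app presumably derives these from the framework config.
const LEVEL_TITLES: Record<string, string> = {
  '1': 'Cadet (AI Aware)',
  '7': 'Mission Director (AI Orchestrator)',
  // ...levels 2-6 omitted for brevity
};

export async function GET(_req: Request, { params }: { params: { level: string } }) {
  const title = LEVEL_TITLES[params.level] ?? 'AI Proficiency Assessment';
  return new ImageResponse(
    (
      <div
        style={{
          width: '100%',
          height: '100%',
          display: 'flex',
          flexDirection: 'column',
          alignItems: 'center',
          justifyContent: 'center',
          background: '#0b1f3a', // dark navy pops in a white feed, per the design notes below
          color: '#fff',
        }}
      >
        <div style={{ fontSize: 36 }}>There are 7 levels of AI proficiency.</div>
        <div style={{ fontSize: 56, marginTop: 24 }}>{title}</div>
      </div>
    ),
    { width: 1200, height: 630 } // the dimensions named in the bullet above
  );
}
```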
2. launchready.ai -- Full marketing website rebuild
- Programs, coaches, testimonials, pricing tiers
- Assessment showcase integrated across all pages
- Full SEO (meta tags, OG, JSON-LD, canonical URLs, sitemap, robots.txt -- see the JSON-LD sketch after this list)
- GA4 tracking
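The JSON-LD piece of that SEO work can be as small as a typed object serialized into a script tag. A minimal sketch follows; the schema type and fields are placeholders, not the site's actual markup.

```tsx
// Hypothetical structured-data component; schema fields are illustrative.
const jsonLd = {
  '@context': 'https://schema.org',
  '@type': 'WebSite',
  name: 'LaunchReady.ai',
  url: 'https://launchready.ai',
};

export function StructuredData() {
  return (
    <script
      type="application/ld+json"
      // JSON-LD must appear as raw JSON inside the tag, hence the escape hatch
      dangerouslySetInnerHTML={{ __html: JSON.stringify(jsonLd) }}
    />
  );
}
```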
How It Happened
| Day | What Shipped |
|---|---|
| Mar 7 | LaunchReady.ai v2 -- full marketing site, program cards, coach bios, testimonials, pricing |
| Mar 8 | Polish -- name fixes, layout tweaks |
| Mar 10 | Assessment v1 -- Next.js scaffold, Supabase schema, results page, Turnstile, rate limiting |
| Mar 11 | Email gate + Kit integration, branding, assessment redesign, drip sequences |
| Mar 12 | OG share images, trajectory map, badge images, v3 questions, scoring fixes, SEO |
| Mar 13 | localStorage persistence, audit cleanup |
| Mar 16 | Escalator v2 -- adaptive 7-level system, challenge coins, share card redesign, results page overhaul, LinkedIn viral copy, home page rewrite |
V2 was built on top of V1's infrastructure in about 3-4 hours. Same database. Same email pipeline. Same deployment. That is the compound effect of building systems, not just features.
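That shared email pipeline is worth a sketch. Kit is the rebranded ConvertKit, and its v3 API exposes a tag-subscribe endpoint; the tag IDs and env var name below are placeholders, and the real integration may differ.

```ts
// Hypothetical Kit (formerly ConvertKit) integration: subscribe the assessment
// taker and tag them by level so drip sequences can segment by result.
// Tag IDs and the env var name are placeholders.
const LEVEL_TAG_IDS: Record<number, number> = {
  1: 111111,
  7: 777777,
  // ...tags for levels 2-6, created in the Kit dashboard
};

async function tagSubscriberByLevel(email: string, level: number): Promise<void> {
  const tagId = LEVEL_TAG_IDS[level];
  await fetch(`https://api.convertkit.com/v3/tags/${tagId}/subscribe`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ api_key: process.env.KIT_API_KEY, email }),
  });
}
```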
What 40 Hours Actually Looks Like
People hear "AI built it" and assume I typed a prompt and walked away. Here is what I actually spent 40 hours doing.
Assessment design and psychometric research (~10 hours). Designing the 7-level framework. Writing scenario-based questions that test real business judgment, not trivia. Building the Guttman + partial credit scoring model. Calibrating difficulty across levels. AI drafted questions, but I evaluated every single one against the framework, rewrote weak scenarios, adjusted answer weights, and killed anything that felt like a quiz instead of a real assessment. This is I/O psychology work. The human judgment is the product.
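To make that concrete without claiming to reproduce the shipped model: here is a minimal sketch of how Guttman placement and partial credit can combine. The threshold, weights, and names are assumptions, not LaunchReady's actual calibration.

```ts
// Illustrative only: the threshold and credit weights are assumptions.
type Answer = { level: number; credit: number }; // credit in [0, 1]: full, partial, or none

const PASS_THRESHOLD = 0.6; // assumed minimum mean credit to "hold" a level

// Partial credit: each answer option carries a weight, so a near-miss
// scores higher than a wrong answer instead of collapsing to 0/1.
function scoreLevel(answers: Answer[], level: number): number {
  const atLevel = answers.filter((a) => a.level === level);
  if (atLevel.length === 0) return 0;
  return atLevel.reduce((sum, a) => sum + a.credit, 0) / atLevel.length;
}

// Guttman-style placement: you hold the highest level for which you passed
// that level and every level below it -- no skipped rungs.
function placeLevel(answers: Answer[], maxLevel = 7): number {
  let placed = 0;
  for (let level = 1; level <= maxLevel; level++) {
    if (scoreLevel(answers, level) < PASS_THRESHOLD) break;
    placed = level;
  }
  return placed;
}
```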
Product design and UX (~8 hours). How should the assessment flow? When do you show results? Where does sharing go? What creates the emotional high that drives virality? These decisions came from studying what makes assessments like MBTI, CliftonStrengths, and Spotify Wrapped spread. I mapped the emotional arc of the experience before writing a line of code.
Visual design and brand (~6 hours). Seven challenge coin designs that feel earned, not given. OG share cards optimized for LinkedIn's feed layout. Brand consistency across two sites. Every visual was a judgment call about identity and perceived value. Military coins are circular -- LinkedIn-native. Dark navy pops against white feeds. These are not random aesthetic choices.
Content strategy and copywriting (~5 hours). Landing page copy. Results page personalization for each of the 7 levels. LinkedIn share text engineered for the "there are 7 levels" pattern. Email drip sequences. Writing for conversion while sounding like a real person talking to another real person.
Technical architecture and iteration (~6 hours). Database schema decisions. API design. Email integration. Security (rate limiting, Turnstile, server-side scoring). Debugging edge cases. The technical work was real, but it moved fast because I was making architecture decisions and Claude was writing the implementation.
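A hedged sketch of two of those security layers. The rate limiter is a naive in-memory bucket for illustration (production would want something durable, like a database or edge KV store); the Turnstile check posts the client token to Cloudflare's documented siteverify endpoint, though the env var name here is a placeholder.

```ts
// Naive in-memory rate limiter, per IP: illustration only.
const hits = new Map<string, { count: number; resetAt: number }>();

function rateLimit(ip: string, max = 10, windowMs = 60_000): boolean {
  const now = Date.now();
  const entry = hits.get(ip);
  if (!entry || now > entry.resetAt) {
    hits.set(ip, { count: 1, resetAt: now + windowMs });
    return true;
  }
  entry.count += 1;
  return entry.count <= max;
}

// Server-side Turnstile verification against Cloudflare's siteverify endpoint.
// The env var name is an assumption.
async function verifyTurnstile(token: string): Promise<boolean> {
  const res = await fetch('https://challenges.cloudflare.com/turnstile/v0/siteverify', {
    method: 'POST',
    body: new URLSearchParams({
      secret: process.env.TURNSTILE_SECRET_KEY ?? '',
      response: token,
    }),
  });
  const data = (await res.json()) as { success: boolean };
  return data.success;
}
```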
Quality assurance and competitive analysis (~5 hours). I benchmarked every surface against 16Personalities, CliftonStrengths, and Spotify Wrapped. Scored the landing page, results page, and OG share card on structured rubrics. Found gaps. Closed them. Final scores: Landing 78/100, Results 80/100, OG Card 82/100. The biggest remaining gap is social proof -- no testimonials or taker count yet.
What This Would Cost Traditionally
| Approach | Cost | Timeline |
|---|---|---|
| Freelancers/specialists (5-6 people) | $10,000 - $25,000 | 3-4 months |
| Agency | $15,000 - $35,000 | 4-6 months |
| Harrison + Claude | ~$133/month + 40 hours | 10 days |
The freelancer estimate covers an I/O psychologist for assessment design, a frontend developer for two sites, a graphic designer for the coins and share cards, a copywriter, and an SEO specialist. The agency estimate adds project management overhead and margin. Both are based on published 2025-2026 rate data from Salary.com, Clutch.co, and industry benchmarks.
The Decisions That Shaped It
Adaptive vs. fixed assessment.
V1 had 27 fixed questions. Everyone answered everything. V2 adapts -- 5 questions per level, stops when you hit your ceiling. It respects the user's time, produces a more accurate result, and makes the conversation more interesting. "What level did you get?" is a better question than "what was your score?"
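Mechanically, the adaptive version reduces to a short loop. This sketch reuses `Answer`, `scoreLevel`, and `PASS_THRESHOLD` from the scoring sketch earlier; `serveLevel` is a hypothetical stand-in for the real question-serving UI.

```ts
// Hypothetical adaptive loop: 5 questions per level, stop at the ceiling.
// Depends on Answer, scoreLevel, and PASS_THRESHOLD from the scoring sketch above.
async function runAdaptiveAssessment(
  serveLevel: (level: number) => Promise<Answer[]> // presents 5 questions, returns graded answers
): Promise<number> {
  let ceiling = 0;
  for (let level = 1; level <= 7; level++) {
    const answers = await serveLevel(level);
    if (scoreLevel(answers, level) < PASS_THRESHOLD) break; // ceiling found: stop asking
    ceiling = level;
  }
  return ceiling;
}
```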
Challenge coins vs. generic badges.
Military challenge coins feel earned. They are circular, which is native to LinkedIn profile frames and social feeds. They have a collectibility factor. Dark metallic on navy blue pops in a white feed. Generic badges feel like participation trophies.
Share section second, not last.
Most assessment sites bury sharing at the bottom. But the emotional high is right after seeing your result. That is when people share. I put the share section immediately after the hero badge -- before strengths, before next steps, before anything else.
Strengths spotlight vs. radar chart.
Radar charts look analytical, but they do not drive sharing. Research on viral assessments shows identity reinforcement ("here is who you are") drives sharing far more than data visualization. Strengths do that. Charts do not.
"There are 7 levels" opening hook.
Every LinkedIn share opens with the same line. When multiple people share, it creates a recognizable pattern in feeds -- the same mechanic that made "I'm a [type]" go viral for MBTI.
What This Means for You
This is not a story about AI replacing professionals. An I/O psychologist with 20 years of experience would build a better assessment than mine. A senior developer would write cleaner code. A brand designer would create more polished visuals.
But I did not need perfect. I needed real -- a working product, live in the world, collecting users and building my email list while I figure out what to improve.
The 40 hours I spent were not spent prompting. They were spent thinking. Designing. Evaluating. Deciding what to build, how it should feel, and what "good enough to ship" actually looks like. AI handled the implementation. I handled the judgment.
That is Level 6. The human is not in the loop.
The human IS the loop.
Frequently Asked Questions
What is an AI proficiency assessment?
An assessment that measures practical AI skills across a structured proficiency framework. The LaunchReady AI Proficiency Assessment uses scenario-based questions, adaptive difficulty, and a 7-level model to determine where you stand and what to learn next.
How much does it cost to build an assessment product?
Traditional cost ranges from $10,000 to $35,000 using freelancers or an agency, requiring 3-6 months. Harrison built the LaunchReady AI Proficiency Assessment for approximately $133 per month in infrastructure costs, completing the full product in 40 hours across 10 days.
Can you build a psychometric assessment with AI?
Yes, with human expertise directing the design. AI can draft questions and generate code, but the framework design, scoring model calibration, scenario quality evaluation, and competitive benchmarking all require human judgment. Harrison evaluated every question against the 7 Levels framework and rewrote weak scenarios.
What are the 7 Levels of AI?
A proficiency framework by LaunchReady.ai that maps how professionals progress from basic AI usage to full orchestration. The seven levels are: Cadet (AI Aware), Ensign (Prompt Engineer), Lieutenant (Critical Thinker), Commander (Context Engineer), Captain (Design Thinker), Admiral (Systems Integrator), and Mission Director (AI Orchestrator). Each level is defined by a human skill, not a technical one.
The 7 Levels of AI
A proficiency framework that maps how professionals progress from basic AI usage to full orchestration. Each level is defined by a human skill, not a technical one.
1. Cadet (AI Aware)
2. Ensign (Prompt Engineer)
3. Lieutenant (Critical Thinker)
4. Commander (Context Engineer)
5. Captain (Design Thinker)
6. Admiral (Systems Integrator)
7. Mission Director (AI Orchestrator)
Ready to Build Something Like This?
Find your AI proficiency level, then let's talk about what you can build.