From idea to live product
in 3.5 hours.

6,500+ · Lines of code
3.5 hrs · Build time
~$67 · Cost
16 · Setup steps

The Problem

Non-technical professionals want to use Claude Code. They hear about it, they get excited, they try to set it up. Then they hit a wall.

Download Homebrew. Install Node. Configure a terminal. Set an API key. Every step assumes you already know what you are doing. Most people do not. They get stuck at step two, close the tab, and never come back.

There was no guided path. No one had built an onboarding experience that treated these users like the smart, capable professionals they are, just ones who have never opened a terminal before. I decided to build one myself.

The Decision

I could have written a blog post. A PDF guide. A YouTube walkthrough. All of those would have been easier. I had watched real people try to follow written instructions and fail. The gap was not information. The gap was experience design.

I needed an interactive wizard that held someone's hand through every single step, showed them their progress in real time, and never left them wondering what to do next.

Level 5 · Captain (Design Thinker)

This case study references the 7 Levels of AI Proficiency, the framework developed by LaunchReady.ai that maps how professionals progress from basic AI use to full orchestration. Inline tags throughout show which level a specific decision represents. Take the free assessment at assess.launchready.ai to find your own level.

What I Built

An interactive 16-step setup wizard that walks non-technical professionals through building a personalized AI assistant with Claude Code. Live at launchready.ai/build.

  • 6,500+ lines of production code. Multi-step form with progressive disclosure.
  • Live preview panel that builds a personalized CLAUDE.md file in real time as the user answers questions.
  • Two-phase flow: brain file builder, then tool installation.
  • Platform toggle (Mac or Windows) with conditional instructions.
  • Save and resume via localStorage. Close the tab, come back later, pick up where you left off.
  • Mobile detection with a graceful boundary. Phase 1 works on mobile. Phase 2 requires desktop.
  • Copy-to-clipboard for every terminal command.
  • Name suggestion buttons for the AI assistant, plus encouragement messaging at every step.
  • Responsive design, accessibility (ARIA, keyboard navigation, screen reader support), and SEO including HowTo structured data.
  • A Cloudflare Worker serverless backend with Turnstile bot protection, Kit API integration, rate limiting, and CORS protection.
Level 5 · Captain (Design Thinker)
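The save-and-resume behavior above can be sketched in a few lines. This is an illustrative reconstruction, not the production code: the key name `wizard-progress-v1` and the function names are assumptions. The storage object is passed in (pass `window.localStorage` in the browser), which also keeps the sketch testable outside one.

```javascript
// Hypothetical key; the real wizard's key name is not published.
const STORAGE_KEY = "wizard-progress-v1";

// Persist the current step and the user's answers after every change.
function saveProgress(storage, step, answers) {
  storage.setItem(
    STORAGE_KEY,
    JSON.stringify({ step, answers, savedAt: Date.now() })
  );
}

// On page load, restore whatever the user had; otherwise start at step 1.
function loadProgress(storage) {
  try {
    const raw = storage.getItem(STORAGE_KEY);
    if (!raw) return { step: 1, answers: {} };
    const { step, answers } = JSON.parse(raw);
    return { step, answers };
  } catch {
    // Corrupt or blocked storage: start clean rather than crash.
    return { step: 1, answers: {} };
  }
}
```

Close the tab, come back, call `loadProgress` on load, and the wizard reopens on the same step with the same answers.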

How It Happened

Time · What happened
9:00 PM · Session start. Researched the book spec, fixed the memory system, built a session-close skill.
9:45 PM · Shifted focus. Studied 200+ onboarding flow analyses, configurator UX studies, and conversion optimization research. Wrote a detailed build plan.
10:15 PM · Security audit on the plan itself. Score 65/100. Identified 21 vulnerabilities. Rewrote the plan. New score 87/100.
10:30 PM · Dispatched three parallel AI agents to build HTML, CSS, and JavaScript simultaneously.
10:45 PM · Integration review caught 30+ selector mismatches between files. Dispatched two more agents to reconcile CSS and JS against the HTML source of truth.
11:00 PM · Live testing and iteration for a full hour. Fixed preview panel sizing, scroll behavior, step ordering, the platform toggle, copy text, and encouragement messaging.
12:00 AM · Deployed the Cloudflare Worker. Configured Turnstile, Kit, and environment variables. Rate limiting, Turnstile enforcement, CORS restrictions.
12:15 AM · Kit tag, first drip email, legal disclaimer. Pushed to production.
12:30 AM · Live.
Level 6 · Admiral (Systems Integrator)

What 3.5 Hours Actually Looks Like

AI wrote the code. The code was maybe 30% of what mattered. Here is the actual breakdown.

UX research and planning (~30 min)

I studied 200+ onboarding flow analyses before writing a single line of the plan. I needed to understand why setup wizards fail, what progressive disclosure actually means in practice, and how to design for someone who has never opened a terminal.

Level 5 · Captain (Design Thinker)

Security thinking (~15 min)

I audited my own plan before building anything. Found 21 issues. Rewrote the approach. This is the step most people skip. It is the step that matters most when you are collecting email addresses and running API integrations.

Level 3 · Lieutenant (Critical Thinker)

Design decisions and iteration (~60 min)

Live testing every step. Clicking through the wizard the way a non-technical user would. Finding the places where someone would get confused, stuck, or frustrated. Fixing them in real time. This was the longest phase and the most important one.

Content and copywriting (~20 min)

Sixteen steps of encouragement messaging. Every step needed a headline, instructions, and a reason to keep going. The tone had to be warm without being condescending.

Integration and deployment (~15 min)

Cloudflare Worker, Kit email integration, Turnstile bot protection, environment variables, CORS configuration. This was the fastest part because the architecture decisions were already made.

Quality judgment throughout

Deciding what "good" looks like for a non-technical user is not something AI can do. I was the quality bar. Every decision about what to simplify, what to cut, and what to explain more came from human judgment about a human audience.

What This Would Cost Traditionally

Approach · Estimated cost · Timeline
Mid-level freelancer ($75/hr, 60 to 80 hrs) · $4,500 to $6,000 · 2 to 3 weeks
US agency · $8,000 to $15,000 · 3 to 6 weeks
Harrison + Claude Code · ~$67 (3.5 hrs of subscription time) · 3.5 hours

Freelancer estimate based on 2026 rates from Arc.dev and ZipRecruiter. Agency estimate based on pricing from Apexure, Landingi, and Tapflare for custom interactive landing pages with API integrations. A 16-step interactive wizard with a live preview panel, serverless backend, bot protection, and email integration is not a simple landing page.

The Decisions That Shaped It

Name the assistant first

The very first step asks the user to name their AI assistant. Not configure settings. Not install tools. Name it. This creates emotional ownership before the user invests any effort. It turns "set up a developer tool" into "build your assistant."

Micro-steps for tool installation

In early testing, a real user struggled with a step that said "download all 5 tools." That single observation changed the architecture. Each tool became its own step with its own confirmation button. More steps, less confusion.

Never auto-advance while typing

Users were getting yanked to the next step mid-sentence. The form was advancing automatically after a field was filled. I changed every transition to a manual "Continue" button. Slower, respectful of the user's pace.
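The fix described above separates two responsibilities that the original form had fused: input handlers may validate and save, but only the Continue button advances the step. A minimal sketch, with hypothetical function names:

```javascript
// Step transitions fire only from an explicit Continue click,
// never from input or change events.
function createStepper(totalSteps) {
  let current = 1;
  return {
    get step() { return current; },
    // Typing saves the value but never moves the user forward.
    onInput(saveFn, value) { saveFn(value); },
    // Only the Continue button advances, and never past the last step.
    onContinue() {
      if (current < totalSteps) current += 1;
      return current;
    },
  };
}
```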

VS Code as home base, not Terminal

Terminal is only for initial setup. The test step and all future use happens inside VS Code. For a non-technical user, VS Code is familiar. Terminal is not. Meet people where they are.

Security before launch

During testing, I found that the email capture gate could be bypassed on a Turnstile failure. I fixed it so Turnstile is mandatory, failures show a retry prompt, and the Worker rate-limits by IP. This took 15 minutes. Skipping it would have been a real vulnerability.

Level 3 · Lieutenant (Critical Thinker)
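The server-side gate described above comes down to two checks in the Worker: the Turnstile token must verify, and the client IP must be under its rate limit. The sketch below is a hedged reconstruction under stated assumptions: the window size, the per-IP cap, and the in-memory counter are illustrative stand-ins for whatever store the real Worker uses (the Turnstile `siteverify` endpoint is Cloudflare's real verification URL).

```javascript
const WINDOW_MS = 60_000;   // assumption: 1-minute window
const MAX_PER_WINDOW = 5;   // assumption: 5 requests per IP per window
const hits = new Map();     // ip -> timestamps of recent requests

// Reject a request when the IP has exhausted its window.
function allowRequest(ip, now = Date.now()) {
  const recent = (hits.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_PER_WINDOW) return false;
  recent.push(now);
  hits.set(ip, recent);
  return true;
}

// Server-side Turnstile verification. No valid token, no signup:
// the email capture cannot be bypassed by skipping the widget.
async function verifyTurnstile(secret, token, ip) {
  const res = await fetch(
    "https://challenges.cloudflare.com/turnstile/v0/siteverify",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ secret, response: token, remoteip: ip }),
    }
  );
  const data = await res.json();
  return data.success === true;
}
```

In a Worker, the client IP comes from the `CF-Connecting-IP` request header, and a failed check returns an error the front end turns into a retry prompt.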

What This Means for You

The speed gets the attention. The speed is not the point.

The research mattered. The security audit mattered. The hour of live testing and iteration mattered. The decision to name the assistant first mattered. Every one of those was a human call that AI could not have made on its own.

The AI did not know the audience. It did not feel the friction in the UX. It did not understand why a non-technical professional would quit at step three. I knew those things. And because I knew them, I could direct five AI agents in parallel to build something that would have taken a team of developers weeks.

Empathy, not engineering.
The audience, not the code.

Tools used: Claude Code (Opus 4.6), VS Code, GitHub, Cloudflare Workers, Cloudflare Turnstile, Kit, GA4.

Frequently Asked

Can a non-technical person build a web app with AI?

Yes. Harrison Painter is not a developer. He built a production-grade, security-audited web application with a serverless backend in 3.5 hours using Claude Code. AI handled the code. The human handled the judgment: user empathy, design decisions, security concerns, and business strategy.

How long does it take to build a web app with Claude Code?

This project took 3.5 hours from idea to live product. That included UX research, a security audit, parallel AI agent orchestration, live testing, deployment, and email integration. The speed comes from directing AI rather than writing code manually.

How much does it cost to build with AI versus hiring a developer?

Traditional cost for a comparable interactive web application ranges from $4,500 to $15,000 depending on whether you hire a freelancer or an agency. Harrison built this project for approximately $67 in Claude Code subscription time.

Do you need to know how to code to use Claude Code?

No. Harrison is not a developer. He directed Claude Code using design thinking, UX research, and business strategy. AI wrote the code. The human made every decision about what to build, who to build it for, and what quality bar to hold it to.

The 7 Levels of AI Proficiency

A measurable ladder from where your team is today to where the work needs them to be. Each level is defined by a human skill, not a technical one.

Level 1 · Cadet (AI Aware)
You know AI exists and you have tried it. You type requests the way you would type into a search engine. The outputs feel hit-or-miss because they are.
Human skill: Self-awareness. Knowing what you do not know.
Level 2 · Ensign (Prompt Engineer)
You give AI clear instructions with context, constraints, and format. Your results are better than most because your inputs are better. You are still treating AI like a vending machine.
Human skill: Structured thinking. You organize your thoughts before giving them to AI.
Level 3 · Lieutenant (Critical Thinker)
You use AI as a thinking partner. You ask follow-up questions, stress-test ideas, and push back on weak answers. Most people quit when AI gives a bad answer. You iterate.
Human skill: Self-management. Frustration tolerance and persistence.
Level 4 · Commander (Context Engineer)
You manage the conversation itself. You know when to start fresh, how to carry forward what matters, and why a clean session with good context beats a long session with a full memory.
Human skill: Social awareness. Reading the environment.
Level 5 · Captain (Design Thinker)
You design AI experiences for others. You think about what data AI needs, how workflows should be structured, and how to scope access responsibly.
Human skill: Design thinking. You work backward from the outcome.
Level 6 · Admiral (Systems Integrator)
You document your best AI processes into reusable workflows. Your results are consistent because the system is consistent. You build infrastructure that compounds.
Human skill: Stakeholder navigation. Building AI systems for organizations requires trust and buy-in.
Level 7 · Mission Director (AI Orchestrator)
You chain workflows into pipelines that run with minimal human intervention. You change how organizations work. The job of the future is yours because you are the most human, not the most technical.
Human skill: Inspirational leadership. Culture change and psychological safety at scale.
Harrison Painter
Founder, LaunchReady.ai · Author, You Have Already Been Replaced by AI

Harrison built LaunchReady.ai to define the measurable standard for AI-capable teams. He is the author of You Have Already Been Replaced by AI (June 2, 2026), host of the AI Ready Podcast, and builder of the AI Law Tracker.

linkedin.com/in/harrisonpainter

Ready to build your own?

Build your personalized AI assistant in the free 16-step wizard, or book a discovery call about the 7 Levels Engagement.