The Captain
You are no longer using AI for yourself. You are designing AI experiences for others. The shift from user to designer is the defining leap.
Last updated: March 21, 2026
What Defines a Captain
At Level 5, you stop building AI workflows for yourself. You start building them for other people.
That sounds like a small shift. It is not. It changes everything about how you think, what you prioritize, and what "good" looks like.
At Levels 1 through 4, you were the user. You learned to prompt effectively, think critically, manage context, and engineer the information AI needs to perform. All of that skill development was pointed inward. You were getting better results for yourself.
The Captain flips that entirely. Now you are thinking about what data AI needs to serve someone else. How workflows should be structured so that a person who is not you can follow them. How to scope access so that AI does exactly what it should and nothing it should not. You are directing what gets built, even if you are not writing a single line of code yourself.
The defining leap of Level 5 is the shift from "I use AI" to "I design AI for others." That is a fundamentally different job. It requires a fundamentally different skill set. And the skill it requires most is one that no AI model can replicate on its own: design thinking.
You are no longer optimizing your own productivity. You are taking responsibility for someone else's experience. That is what makes this level harder than everything that came before it. And it is what makes the people who reach it genuinely valuable to their organizations.
The Science of Design Thinking
Design thinking is not a vague concept. It is a structured methodology with decades of research behind it, most notably from the Stanford d.school (Hasso Plattner Institute of Design at Stanford University).
The process has five stages:
- Empathize. Understand the people you are designing for through observation, engagement, and immersion. Do not assume you know what they need. Go find out.
- Define. Synthesize what you learned into a clear problem statement. The quality of your solution depends entirely on the quality of your problem definition.
- Ideate. Generate a wide range of possible solutions. Resist the urge to lock in on your first idea. Breadth matters here.
- Prototype. Build quick, low-cost versions of your ideas. The goal is not perfection. The goal is something tangible enough to test.
- Test. Put prototypes in front of real users. Watch what happens. Listen to what they say. Pay closer attention to what they do.
These five stages rest on three foundational pillars: empathy, experimentation, and iteration. You cannot skip any of them. You cannot replace any of them with technology. They are human skills applied through a disciplined process.
Tim Brown, CEO and president of IDEO, defined design thinking as "a human-centered approach to innovation that integrates the needs of people, the possibilities of technology, and the requirements for business success." That definition was popularized through his 2008 Harvard Business Review article "Design Thinking" and expanded in his 2009 book Change by Design. It remains one of the most cited definitions in the field because it captures exactly what design thinking is: a bridge between people, technology, and business.
Notice what comes first in that definition. Not technology. Not business. People. That ordering is intentional, and it is the core principle that separates a Captain from every level below.
Design Thinking Applied to AI
In 2024, a systematic literature review published on ScienceDirect examined how AI intersects with the design thinking process across all five stages. The findings were specific and practical.
At the empathy stage, AI amplifies the designer's ability to process emotional and behavioral data. Sentiment analysis, user behavior tracking, and pattern recognition across large data sets allow designers to understand user needs at a scale that manual research cannot match. But the researchers were clear: AI processes the data. The human interprets what it means.
At the ideation stage, machine learning algorithms can guide idea selection by predicting which concepts are most likely to succeed based on historical patterns. This does not replace creative thinking. It supplements it by narrowing the field so designers can focus their energy on the most promising directions.
At the prototype and test stages, AI analytics can monitor how users interact with prototypes in real time. Heatmaps, click tracking, session recordings, and A/B testing give designers immediate feedback on what is working and what is not. The iteration cycle accelerates because the data arrives faster.
The fundamental principle holds across every stage: AI makes each step faster and more data-rich, but it does not replace the human judgment that ties the stages together. You still need a person who can look at empathy data and define the right problem. You still need a person who can evaluate competing ideas and choose the one that best serves real users. You still need a person who can watch someone struggle with a prototype and understand why.
That person, at Level 5, is you.
Empathy as a System Design Skill
Most people think of empathy as a soft skill. Something nice to have. A personality trait. In AI system design, empathy is an engineering requirement.
A 2023 research paper published on arXiv, "Toward Artificial Empathy for Human-Centered Design," proposed a framework for integrating empathy into the AI design process. The researchers argued that designers must develop genuine empathy with the people they serve in order to build systems that truly address user needs. Not assumed needs. Not projected needs. Actual needs, discovered through direct engagement.
This matters because the most common failure mode in AI system design is building what the designer thinks is useful rather than what the user actually needs. It is the same mistake that has plagued software development for decades, but AI makes it worse because the systems feel smart. When an AI tool produces fluent, confident output, it is easy to assume that the output is also correct and useful. Empathy is the check against that assumption.
User-centric AI design, as outlined by Stanford HAI (Human-Centered Artificial Intelligence) and the Partnership on AI, means creating products that reflect users' needs, not the builder's technical preferences. That requires talking to real users before you build, during the build, and after launch. It requires accepting that your first design will be wrong in ways you did not predict. It requires the humility to redesign based on what you learn rather than defending what you already built.
The frameworks from Stanford HAI and Partnership on AI also emphasize responsible scoping. When you design AI for others, you are making decisions about what information AI can access, what actions it can take, and what guardrails protect the user. Those decisions require empathy because they require you to imagine how the system could be misused, how it could confuse someone, and how it could produce harm that you, as the builder, would never experience yourself.
Empathy at Level 5 is not a feeling. It is a practice. It is the discipline of consistently asking: what does this person need, what could go wrong for them, and how do I design around both of those realities?
The Shift: Building for Others
Here is why this level matters more than the four that came before it.
When you use AI for yourself, bad output wastes your time. You notice the error, fix it, and move on. The cost is measured in minutes.
When you design AI for others, bad output wastes their trust. They may not notice the error. They may act on it. They may lose confidence in the tool, in the process, or in the person who built it. The cost is measured in credibility.
That asymmetry changes everything about how you work. At Levels 1 through 4, you could iterate in real time because you were both the builder and the user. At Level 5, you have to anticipate problems before someone else encounters them. You have to test with real people, not just with your own assumptions. You have to build in safeguards for situations you personally would never trigger.
The question shifts. You stop asking "what can AI do?" and start asking "what do the people I serve need?" That is not a semantic difference. It is a complete reorientation of your design process. "What can AI do?" leads you to build features. "What do people need?" leads you to solve problems.
That empathy gap, the distance between what you would build for yourself and what others actually need, is what separates a power user from a designer. A power user optimizes their own workflow. A designer optimizes someone else's outcome. Both are valuable. But only one of them scales. Only one of them creates value for an organization beyond the individual. And only one of them earns the title of Captain.
The professionals who make this leap become the people their organizations rely on to answer a critical question: we have AI tools, now what do we actually do with them? The Captain is the person who answers that question, not with a feature list, but with a designed experience that works for the people who will use it every day.
Practical Exercise: Redesign an AI Workflow for Someone Else
This exercise takes 60 to 90 minutes. Do not skip the interview step. The entire point is to discover the gap between what you assumed and what someone else actually needs.
- Pick one AI workflow you have built for yourself. It could be a prompt template, a research process, a content workflow, or an automation. Choose something that works well for you.
- Identify three people who would benefit from it. These should be real people you know, not hypothetical personas. Think about colleagues, clients, or team members who face the same problem your workflow solves.
- Interview one of them. Find out directly: What do they actually need? What would confuse them about your current workflow? What terminology would they not understand? What steps would they skip? What feature would they never use? Listen more than you talk.
- Redesign the workflow based on their answers, not your assumptions. Document every change you make and why. Pay attention to the places where your original design reflected your expertise rather than their needs.
- Test it with them. Watch them use the redesigned workflow. Do not guide them. Do not explain things verbally that are not in the design. Document what changed, what still confused them, and what worked better than expected.
The most important output is not the redesigned workflow. It is the list of things you got wrong in your first design. That list is your empathy gap, and closing it is the work of Level 5.
How This Shows Up at Work
Level 5 is not theoretical. It shows up in specific, practical ways inside organizations every day.
Designing AI assistants for teams. You build a custom AI assistant for your sales team, your support team, or your operations team. You do not just hand them a ChatGPT login. You define the system prompt, the knowledge base, the response format, and the guardrails. You test it with three team members before rolling it out. You iterate based on their feedback, not your instinct.
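To make that concrete, here is a minimal sketch of what writing those decisions down might look like. Everything in it is hypothetical, including the AssistantSpec name and its fields; it is not any vendor's API. The point it illustrates is that the system prompt, knowledge sources, response format, and guardrails are explicit, reviewable choices, not defaults you inherited.

```python
from dataclasses import dataclass, field

@dataclass
class AssistantSpec:
    """Hypothetical design spec for a team-facing AI assistant.

    Every field is a decision the designer owns: who the assistant
    serves, what it may draw on, how it must answer, and what it
    must refuse. None of this is left to model defaults.
    """
    audience: str                      # who this is built for
    system_prompt: str                 # role, tone, and task framing
    knowledge_sources: list[str] = field(default_factory=list)
    response_format: str = "short paragraphs, no jargon"
    guardrails: list[str] = field(default_factory=list)

def render_system_prompt(spec: AssistantSpec) -> str:
    """Assemble the instruction block sent with every request."""
    rules = "\n".join(f"- {rule}" for rule in spec.guardrails)
    return (
        f"{spec.system_prompt}\n\n"
        f"Audience: {spec.audience}\n"
        f"Answer format: {spec.response_format}\n"
        f"Hard rules you must never break:\n{rules}"
    )

support_assistant = AssistantSpec(
    audience="support agents triaging inbound tickets",
    system_prompt="You draft replies to customer support tickets.",
    knowledge_sources=["help-center articles", "refund policy v3"],
    guardrails=[
        "Never promise a refund; escalate refund requests to a human.",
        "Never ask customers for passwords or payment details.",
    ],
)

print(render_system_prompt(support_assistant))
```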
Creating prompt templates others can use. You take a prompt that works brilliantly for you and turn it into something a colleague with no prompt engineering experience can use. That means stripping out jargon, adding clear instructions, building in examples, and testing it with someone who does not know what "temperature" or "system prompt" means.
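One way to do that is sketched below using Python's built-in string.Template: the expert prompt becomes a fill-in-the-blanks template, and every blank gets a plain-language question with an example answer. The template text and field names here are invented for illustration.

```python
from string import Template

# Hypothetical template: an expert's summarization prompt, rewritten
# so a colleague only fills in plain-language blanks. No one using it
# needs to know what a "system prompt" or "temperature" is.
SUMMARY_TEMPLATE = Template(
    "Summarize the meeting notes below for $reader.\n"
    "Keep it under $length.\n"
    "End with a bulleted list of action items, each with an owner.\n\n"
    "Meeting notes:\n$notes"
)

# What the colleague actually sees: one plain question per blank,
# each with an example so they never have to guess what "good" is.
FIELD_GUIDE = {
    "reader": "Who will read this? (example: our VP of Sales)",
    "length": "How long? (example: 150 words)",
    "notes": "Paste the raw meeting notes here.",
}

for blank, question in FIELD_GUIDE.items():
    print(f"{blank}: {question}")

prompt = SUMMARY_TEMPLATE.substitute(
    reader="our VP of Sales",
    length="150 words",
    notes="(pasted notes go here)",
)
print(prompt)
```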
Building intake processes that feed AI the right context. You design a form, a checklist, or a structured input that collects exactly the information AI needs to do its job. The person filling it out does not need to understand why each field matters. They just need to answer clearly and trust that the system will do the rest. That trust is your responsibility.
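Here is a hedged sketch of that pattern, with hypothetical field names: the intake form asks plain questions, and a small function translates the answers into the context block the AI actually receives. The designer owns that translation; the person filling out the form never sees it.

```python
from dataclasses import dataclass

@dataclass
class BriefIntake:
    """Hypothetical intake form for a content-drafting workflow.

    Each field maps to context the AI needs to do its job. The
    comments are the plain questions the form would actually ask.
    """
    product_name: str        # "What are we writing about?"
    audience: str            # "Who is this for?"
    goal: str                # "What should the reader do next?"
    must_include: list[str]  # "Facts the draft cannot leave out"

def build_context(intake: BriefIntake) -> str:
    """Turn intake answers into the context block the AI receives."""
    facts = "\n".join(f"- {fact}" for fact in intake.must_include)
    return (
        f"Product: {intake.product_name}\n"
        f"Audience: {intake.audience}\n"
        f"Goal of the piece: {intake.goal}\n"
        f"Facts that must appear:\n{facts}"
    )

intake = BriefIntake(
    product_name="Acme Scheduling",
    audience="office managers at 10-50 person companies",
    goal="book a product demo",
    must_include=["Free 14-day trial", "No credit card required"],
)
print(build_context(intake))
```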
Scoping what AI should and should not do for specific roles. You define the boundaries. This AI assistant can draft emails but cannot send them. It can summarize documents but cannot access financial data. It can suggest responses but cannot interact with customers directly. Those scoping decisions require understanding both the technology and the people who will use it. Get it wrong and you either over-restrict (making the tool useless) or under-restrict (creating risk).
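One simple way to make those boundaries enforceable rather than aspirational is a deny-by-default policy: every capability is granted explicitly, and anything not listed is refused, so a forgotten rule fails safe. The action names below are hypothetical; the pattern is what matters.

```python
# Hypothetical scoping policy for the email assistant described above.
# Capabilities are granted one by one; everything else is denied.
POLICY = {
    "email.draft": True,      # may draft emails
    "email.send": False,      # may NOT send them
    "docs.summarize": True,   # may summarize documents
    "finance.read": False,    # may NOT touch financial data
}

def is_allowed(action: str) -> bool:
    """Deny by default: an action missing from the policy is forbidden."""
    return POLICY.get(action, False)

for action in ["email.draft", "email.send", "calendar.create"]:
    verdict = "allowed" if is_allowed(action) else "denied"
    print(f"{action}: {verdict}")
```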
Every one of these examples requires the same core skill: the ability to think about someone else's experience before, during, and after you build. That is design thinking applied to AI. That is Level 5.
Sources
- Stanford d.school. "An Introduction to Design Thinking: Process Guide." Hasso Plattner Institute of Design at Stanford. dschool.stanford.edu
- Brown, Tim. "Design Thinking." Harvard Business Review, June 2008. hbr.org
- Brown, Tim. Change by Design: How Design Thinking Transforms Organizations and Inspires Innovation. HarperBusiness, 2009.
- "A Systematic Literature Review on the Integration of AI and Design Thinking." ScienceDirect, 2024. sciencedirect.com
- "Toward Artificial Empathy for Human-Centered Design." arXiv, 2023. arxiv.org
- Stanford HAI (Human-Centered Artificial Intelligence). Stanford University. hai.stanford.edu
- Partnership on AI. partnershiponai.org
Frequently Asked Questions
What is design thinking in AI?
Design thinking in AI means applying human-centered design principles to the way you build and deploy AI systems. Instead of asking what AI can do, you ask what the people you serve actually need. You empathize with users, define their real problems, ideate solutions, prototype quickly, and test with real people. The Stanford d.school framework provides the foundational methodology.
How does empathy improve AI systems?
Empathy forces you to design AI systems around real human needs rather than technical capabilities. Guidance from Stanford HAI and a 2023 arXiv paper on artificial empathy argues that user-centric AI design produces products that reflect users' actual needs, not the builder's technical preferences. When you understand how someone will actually use a system, you build better workflows, clearer interfaces, and more responsible access controls.
What is the difference between using AI and designing AI?
Using AI means you benefit from it personally. Designing AI means you create AI experiences that benefit others. When you use AI for yourself, bad output wastes your time. When you design AI for others, bad output wastes their trust. The shift from user to designer requires empathy, systems thinking, and the ability to work backward from someone else's outcome.
What is the Stanford d.school design thinking process?
The Stanford d.school design thinking process consists of five stages: Empathize (understand users through observation and engagement), Define (frame the right problem to solve), Ideate (generate a wide range of possible solutions), Prototype (build quick, low-cost versions to test), and Test (gather feedback and iterate). It is grounded in three pillars: empathy, experimentation, and iteration.