The Admiral
You build AI infrastructure at organizational scale. The challenge is not the technology. It is navigating the humans.
Last updated: March 21, 2026
What Defines an Admiral
You have stopped starting from scratch. Every time you sit down with AI, you are not inventing a new approach. You are pulling from a library of workflows you have already built, tested, and refined. Your prompts are documented. Your processes have clear steps, defined inputs, and measurable success criteria. Your results are consistent because the system is consistent.
This is the leap from individual excellence to organizational impact. At Level 5, you design AI experiences. At Level 6, you build infrastructure that compounds. You are creating reusable workflows that other people can follow without needing your expertise. You are turning your best thinking into repeatable systems.
But here is the part that surprises most people: the hard part is not the workflows. The hard part is the humans.
You can build the most elegant AI system in the world, and it will die on arrival if the people around you do not trust it, do not understand it, or feel threatened by it. The Admiral's real skill is not systems design. It is stakeholder navigation. It is reading a room, understanding resistance, building coalitions, and earning buy-in from people who have every reason to be skeptical.
Every organization is a political system. The Admiral understands that AI transformation is not a technology project. It is a people project with technology involved.
The Science of Stakeholder Navigation
John Kotter published his 8-Step Change Management Model in 1996, and it remains one of the most cited frameworks in organizational change. The model outlines a sequence: create urgency, build a guiding coalition, form a strategic vision, enlist a volunteer army, enable action by removing barriers, generate short-term wins, sustain acceleration, and institute change. The order matters. Skip a step and the whole initiative stalls.
What makes Kotter's model particularly relevant for AI transformation is the emphasis on coalition-building before action. Most AI projects fail not because the technology does not work, but because leadership was misaligned, staff was untrained, and governance was weak or nonexistent. The technology was fine. The organization was not ready.
Research published through SSRN confirms this pattern. AI transformation failures correlate strongly with three organizational gaps: misaligned leadership expectations, insufficient workforce preparation, and absent governance structures. When these gaps exist, resistance manifests predictably. Decreased morale among teams who feel bypassed. Reduced productivity as people slow-walk adoption. Active opposition from individuals who see AI as a threat to their role or authority.
The Admiral recognizes these patterns early. Not because they have a degree in organizational psychology, but because they have learned to pay attention to how people respond when you say the words "we are implementing AI." The flinch. The forced smile. The question that is really a protest disguised as curiosity. These signals tell you more about whether your project will succeed than any technical specification ever could.
Kotter's first step, creating urgency, is often misunderstood as creating fear. That is the opposite of what works. Urgency means making the case clearly: here is the opportunity, here is the cost of inaction, and here is why this matters now. For AI, that case is increasingly easy to make with data. The harder part is step two: building the coalition of people who will champion the change alongside you.
Why AI Projects Fail
The numbers are stark, and they have not improved much despite billions of dollars in investment.
Over 80% of AI projects fail. That is roughly twice the failure rate of non-AI software projects. Only 48% of AI initiatives make it from pilot to production. In 2025, 42% of companies abandoned most of their AI initiatives entirely. These are not small experiments that quietly fizzled. These are funded projects with executive sponsors that consumed real resources and delivered nothing.
The instinct is to blame the technology. The data says otherwise.
Prosci conducted research across 1,107 professionals and found that 63% of the challenges in AI adoption are human factors, not technical ones. Read that again. Nearly two-thirds of the barriers to AI success have nothing to do with algorithms, data quality, or infrastructure. They are about people. Unclear communication. Insufficient training. Fear of job displacement. Lack of trust in the outputs. Turf wars over who controls the new tools.
Gallup found that only 15% of employees say their company has communicated a clear AI strategy. That means 85% of the workforce is watching AI get implemented around them without understanding why, how, or what it means for their role. That is not a technology problem. That is a leadership failure.
McKinsey's research adds a critical data point: organizations that invest in cultural readiness for AI see 5.3x higher success rates compared to those that focus on technology alone. More than five times. The difference between a successful AI initiative and a failed one is not better models or more data. It is whether the organization prepared its people.
Gartner's 2025 analysis reinforced this pattern. The firms that succeeded with AI treated it as an organizational change initiative, not a technology deployment. They invested in training, communication, governance, and stakeholder alignment before they invested in tools.
This is exactly why stakeholder navigation is the defining human skill at Level 6. You can build the workflow. The question is whether anyone will use it.
Systems Thinking: The Fifth Discipline
Peter Senge published The Fifth Discipline in 1990 while at MIT. Harvard Business Review named it one of the seminal management books of the past 75 years. The book introduced five disciplines that define a "learning organization," and the fifth, systems thinking, is the one that ties everything together.
The five disciplines are: personal mastery (the commitment to lifelong learning and self-improvement), mental models (the deeply held assumptions that shape how we see the world), shared vision (a collective picture of the future that fosters genuine commitment), team learning (the practice of thinking together and surfacing collective intelligence), and systems thinking (seeing the whole rather than just the parts).
Systems thinking is the cornerstone because it changes how you see problems. Instead of looking for linear cause-and-effect chains, you look for interrelationships. Instead of asking "what caused this," you ask "what dynamics produced this pattern." Instead of fixing symptoms, you identify leverage points where a small change creates disproportionate impact.
This maps directly to AI adoption. AI does not exist in isolation inside an organization. When you introduce an AI workflow into one department, it touches data governance, team dynamics, customer experience, compliance, hiring practices, and organizational culture simultaneously. A linear thinker introduces the tool and wonders why everything breaks. A systems thinker maps the connections first and introduces the tool at the point of highest leverage with the lowest resistance.
Senge's concept of mental models is equally important for the Admiral. Every stakeholder in your organization carries assumptions about AI that shape their behavior. "AI will replace me." "AI cannot be trusted with important decisions." "AI is just a fad." "AI is only for technical people." These mental models are invisible until you surface them, and they will sabotage your initiative until you address them directly.
The Admiral's job is to see the organization as a system, understand the mental models at play, build a shared vision for what AI can accomplish, and create the conditions for team learning. That is not a technology skillset. That is a leadership skillset informed by systems thinking.
The Stakeholder Challenge
Research published in ScienceDirect (2022) identifies six distinct stakeholder roles in AI projects. The one most often overlooked is the "passive stakeholder," someone who is affected by the AI system but has no power to influence its design or deployment. Think of the customer service representative whose scripts are now generated by AI, or the warehouse worker whose shifts are now scheduled by an algorithm. They experience the change but had no voice in shaping it.
Ignoring passive stakeholders is how organizations build resentment. These are the people who will quietly undermine adoption by finding workarounds, reverting to old processes, or simply doing the minimum required. They are not being difficult. They are responding rationally to being excluded from decisions that affect their daily work.
McKinsey's 2025 research found that talent skill gaps account for 46% of AI adoption barriers. Nearly half of the obstacles organizations report come down to people lacking the skills to use the AI tools being deployed. And here is the kicker: only 39% of employees who use AI at work received any training on how to use it. They were handed a tool and told to figure it out.
It gets worse. Only 25% of companies plan to offer AI training in the near future. Three-quarters of organizations are deploying AI without a plan to teach their people how to use it effectively. This is the organizational equivalent of buying everyone a piano and expecting music.
The Admiral sees this gap and fills it. Not by becoming a trainer (though that might be part of it), but by designing the system so that training, communication, and support are built into the rollout. The workflow itself should be teachable. The documentation should be clear enough that someone can follow it without you standing over their shoulder. The success criteria should be measurable so people can see their own progress.
Each stakeholder group requires a different approach. Champions need to be empowered and given visibility. Neutral parties need evidence and quick wins. Resistant stakeholders need their real concerns addressed, not dismissed. Passive stakeholders need a voice and a reason to engage. The Admiral designs a strategy for each group, not a single announcement that treats everyone the same.
Building Reusable Workflows
AI-enabled workflows are projected to grow from 3% to 25% of enterprise processes. That is not a gradual shift. That is a fundamental restructuring of how organizations operate, and it is happening now.
The previous generation of automation was rigid. Robotic Process Automation (RPA) followed fixed rules: if this, then that. It worked for predictable, repetitive tasks. But the moment a process required judgment, context, or adaptation, RPA broke. You needed a human to handle the exceptions, which often meant the automation saved less time than it cost to maintain.
AI systems are different. They learn. They adapt. They handle ambiguity. But that flexibility introduces a new challenge: governance. When a rigid system makes a mistake, you can trace it to a specific rule. When an AI system makes a mistake, the cause might be the prompt, the context, the training data, or an interaction between all three. Reusable workflows need guardrails that account for this variability.
The Admiral builds workflows with four components. First, clear inputs: what information does this workflow need, where does it come from, and who is responsible for providing it. Second, defined steps: what happens in what order, and where are the decision points that require human judgment. Third, success criteria: how do you know this workflow produced a good result, and who evaluates it. Fourth, feedback loops: how does the workflow improve over time based on what works and what does not.
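The four components above can be sketched as a lightweight data structure. This is an illustrative sketch only: the class and field names (`Workflow`, `success_criteria`, and so on) are assumptions for the example, not part of any standard library or framework.

```python
from dataclasses import dataclass, field

# Illustrative sketch: field names are assumptions, not a standard schema.
@dataclass
class WorkflowStep:
    description: str
    requires_human_judgment: bool = False  # decision points stay with a person

@dataclass
class Workflow:
    name: str
    inputs: dict            # what information is needed, and who provides it
    steps: list             # ordered WorkflowStep objects
    success_criteria: list  # measurable checks, plus who evaluates them
    feedback_log: list = field(default_factory=list)

    def record_feedback(self, note: str) -> None:
        """Feedback loop: capture what worked so the workflow improves over time."""
        self.feedback_log.append(note)

wf = Workflow(
    name="Weekly report draft",
    inputs={"sales_data": "provided by RevOps every Monday"},
    steps=[
        WorkflowStep("Draft summary with AI"),
        WorkflowStep("Review numbers against source", requires_human_judgment=True),
    ],
    success_criteria=["Numbers match source data", "Reviewed by a human before sending"],
)
wf.record_feedback("Prompt v2 produced fewer hallucinated figures.")
print(len(wf.feedback_log))  # → 1
```

The point of the structure is that someone else can pick up the workflow, see exactly where human judgment is required, and know how success is measured without the author standing over their shoulder.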
This is the difference between using AI and building with AI. A Level 5 Captain designs a great AI experience for a specific situation. A Level 6 Admiral packages that experience into a system that works without them. The Captain is the pilot. The Admiral builds the fleet.
The shift from rigid RPA to adaptive AI systems means the Admiral's job is not just making AI work for individuals. It is making AI work for the organization. That requires thinking about scale, consistency, governance, and most importantly, the people who will use these systems every day.
Practical Exercise: The Stakeholder Map
Step 1. List every person affected by your AI initiative. Not just the people using the tool. Everyone. The team whose data feeds it. The manager who approves the budget. The customer who receives the output. The IT team who supports it. The executive who signed off. Go wide.
Step 2. Categorize each person: champion, neutral, resistant, or passive. Be honest. Wishful thinking here will cost you later.
Step 3. For each resistant stakeholder, identify their real concern. Not the surface objection. The real one. Fear of replacement. Fear of failure. Loss of control. Loss of status. Concern about data privacy. Worry that they will look incompetent. These fears are legitimate. Treat them that way.
Step 4. Design one specific action to address each concern. Not a generic "we will provide training." A specific action: "I will sit with Maria for 30 minutes on Tuesday and walk through the workflow together." Specificity builds trust. Generality signals that you have not thought it through.
Step 5. Identify your quick win. What can you show in 30 days that demonstrates value without requiring full adoption? The quick win is your proof of concept for the humans, not just for the technology. Pick something visible, measurable, and relevant to the stakeholders who matter most.
This exercise takes 30 minutes. It will save you months. Most AI initiatives die not because the technology failed, but because nobody mapped the humans.
Sources
- Kotter, J.P. (1996). Leading Change. Harvard Business School Press.
- Senge, P.M. (1990). The Fifth Discipline: The Art and Practice of the Learning Organization. Doubleday/Currency.
- Prosci. (2023). Best Practices in Change Management. Research across 1,107 professionals on organizational change barriers.
- Gallup. (2024). Employee perspectives on AI strategy communication in the workplace.
- McKinsey & Company. (2025). The State of AI: Global Survey. Cultural investment and AI success rates.
- Gartner. (2025). AI project failure rates and organizational readiness analysis.
- ScienceDirect. (2022). Stakeholder roles and classifications in AI project management.
- SSRN. AI transformation failures: leadership alignment, workforce preparation, and governance structures.
Frequently Asked Questions
Why do most AI projects fail?
Over 80% of AI projects fail, which is roughly twice the failure rate of non-AI software projects. The primary reasons are not technical. According to Prosci research across 1,107 professionals, 63% of the challenges in AI adoption are human factors: misaligned leadership, untrained staff, unclear governance, and resistance to change. McKinsey research shows that organizations investing in cultural readiness see 5.3x higher success rates with AI initiatives.
What is stakeholder navigation in AI adoption?
Stakeholder navigation is the human skill of identifying, understanding, and aligning the people affected by an AI initiative. It involves mapping stakeholder roles, from champions and neutral parties to resistant individuals and passive stakeholders, understanding each group's real concerns, and designing specific actions to build trust and buy-in. Without stakeholder navigation, even technically excellent AI systems get rejected by the organization.
What is systems thinking and why does it matter for AI?
Systems thinking is a discipline developed by Peter Senge at MIT, published in his 1990 book The Fifth Discipline. It focuses on seeing interrelationships rather than linear cause-and-effect chains. For AI adoption, systems thinking matters because AI does not exist in isolation. It touches workflows, team dynamics, data governance, customer experience, and organizational culture simultaneously. An Admiral uses systems thinking to understand how changing one part of the system affects everything else.
How do I build reusable AI workflows for my organization?
Start by documenting your best AI processes with clear steps, defined inputs, and measurable success criteria. Then identify the stakeholders affected by each workflow, map their concerns, and design communication and training plans. Focus on quick wins you can demonstrate within 30 days. Enterprise AI-enabled workflows are projected to grow from 3% to 25% of processes, so the organizations that build reusable systems now will compound their advantage over time.
What's Your AI Level?
Take the assessment to find out exactly where you are in the 7 Levels. Then we'll show you what to work on next.