Level 2

The Ensign

Prompt Engineer

You know how to give AI clear instructions. Your results are better because your inputs are better. Now it is time to understand why structure is the skill that separates useful output from noise.

Last updated: March 21, 2026

Rank: The Ensign
Human Skill: Structured Thinking
Focus: Clear Instructions
Framework: Bloom's Taxonomy

What Defines an Ensign

You have moved past the "just try it" phase. When you open ChatGPT or Claude, you do not type the first thing that comes to mind and hope for the best. You include context. You specify format. You add constraints. Your results are noticeably better than most people's because your inputs are noticeably better than most people's.

That is real progress. Most AI users never get here. They stay at Level 1 forever, typing search-engine-style queries and blaming the tool when the output is vague. You have already separated yourself from that crowd.

But here is the honest assessment: you are still treating AI like a vending machine. Put in a request, take out a result. If the result is good, great. If not, you try a different request. The interaction is transactional. One input, one output, done.

There is nothing wrong with that. It works. But it leaves an enormous amount of capability on the table. The Ensign gets consistently good first answers. What the Ensign does not yet do is push back, iterate, or use AI as a thinking partner. That is Level 3. For now, the work is mastering the input side of the equation, and the human skill that powers it is structured thinking.

Structured thinking is not about being smarter. It is about organizing what you already know before you hand it to the machine. The same way a well-organized brief produces a better legal argument, a well-organized prompt produces better AI output. The quality of your thinking determines the quality of the result.

The Science of Structured Thinking

Structured thinking has been formalized in professional practice for decades. The frameworks that make business communication effective are the same frameworks that make AI prompts effective. This is not a coincidence. Both contexts involve communicating complex intent to an audience that cannot read your mind.

The Minto Pyramid Principle. Barbara Minto developed this framework at McKinsey in the 1960s. The core rule: start with the conclusion, then support it with grouped arguments, then support each argument with data. Top-down communication. The reader (or the AI) gets the most important information first, which frames everything that follows. When you open a prompt with "I need a 500-word blog post for small business owners about cash flow management," you are using Minto whether you know it or not. You led with the conclusion: here is what I need. The alternative, burying your actual request inside three paragraphs of background, forces the AI to guess what matters.

BLUF: Bottom Line Up Front. The U.S. military formalized this principle because lives depend on clear communication. The most important information goes in the first sentence. Everything after that is supporting detail. BLUF works for AI prompts for the same reason it works for military communications: the recipient needs to know what you want before they can process why you want it.

SCQA: Situation, Complication, Question, Answer. This framework, also from Minto's work, structures a problem before presenting a solution. "Our team uses AI for customer emails (situation). Response times have not improved (complication). How should we restructure our prompts for faster drafting (question)?" The answer is the output you are asking the AI to produce. That single framing gives the AI more actionable context than a paragraph of scattered background.

MECE: Mutually Exclusive, Collectively Exhaustive. Another McKinsey staple. When you break a problem into parts, those parts should not overlap and they should cover everything. This matters for prompts because vague, overlapping instructions produce vague, overlapping output. "List the advantages and disadvantages" is MECE. "Tell me the good stuff and also what you think" is not.

The core principle across all of these frameworks is the same: shift from more information to more structure. A 200-word prompt with clear structure will outperform a 500-word prompt that rambles. Every time. The AI does not need your stream of consciousness. It needs your organized thinking.
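For readers who think in code, here is a minimal sketch of that principle. It is illustrative only and not taken from any of the frameworks above: a small template that assembles the four elements this article keeps returning to, with the request stated first, BLUF-style. The class and field names are assumptions chosen for readability.

```python
# Illustrative sketch only: a BLUF-style prompt built from four structured
# elements. Names and field choices are assumptions, not a standard API.
from dataclasses import dataclass

@dataclass
class StructuredPrompt:
    task: str         # bottom line up front: what you need
    audience: str     # who the output is for
    format: str       # how the output should be shaped
    constraints: str  # the boundaries the output must respect

    def render(self) -> str:
        # Conclusion first (Minto/BLUF), supporting detail after.
        return (
            f"{self.task}\n"
            f"Audience: {self.audience}\n"
            f"Format: {self.format}\n"
            f"Constraints: {self.constraints}"
        )

prompt = StructuredPrompt(
    task="Write a 500-word blog post about cash flow management.",
    audience="Small business owners with no finance background.",
    format="Short intro, three subheaded sections, a one-sentence takeaway.",
    constraints="Plain language, no jargon, one concrete example per section.",
).render()
print(prompt)
```

The rendered prompt is under fifty words, yet it carries the task, the audience, the format, and the constraints. That is the tradeoff the next section quantifies.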

Why Structure Beats Length

There is a persistent myth in the AI community that longer prompts produce better results. More detail, more context, more instructions, better output. The research says otherwise.

Research Finding: Structured short prompts reduced API costs by 76% while maintaining the same output quality as longer, unstructured alternatives. (Aakash Gupta, 2025)

That number deserves attention. A 76% cost reduction with no quality loss means that three-quarters of the tokens in unstructured prompts were waste. Not helpful context. Not useful detail. Noise. The model had to process all of it, you paid for all of it, and the model ignored most of it to produce the same result it would have produced with a quarter of the input.

Braintrust's research on prompt formatting reinforced this finding from a different angle. Their testing found up to 76 accuracy points of difference across formatting changes alone. Same information, different structure, dramatically different results. The way you organize a prompt matters more than what you put in it.

Long prompts can actually hurt performance. When a prompt contains conflicting instructions, redundant context, or ambiguous phrasing, the model has to resolve those conflicts. Sometimes it resolves them correctly. Sometimes it picks the wrong instruction to follow. Sometimes it hedges and produces something vague that satisfies none of your constraints fully. The noise in an unstructured prompt does not just waste tokens. It introduces failure modes.

Lakera's analysis of prompt engineering practices found that users with clearer, more specific prompts consistently reported higher productivity and lower rates of misinterpretation. This aligns with a meta-analysis of over 1,500 papers on prompt engineering, which found that specificity consistently outperforms generality across tasks, models, and domains. Being specific is not about being long. It is about being precise.

The practical takeaway: if you find yourself writing longer and longer prompts to get better results, you are solving the wrong problem. The answer is not more words. It is better structure.

Bloom's Taxonomy and Prompt Quality

In 1956, Benjamin Bloom published a taxonomy of educational objectives that categorized cognitive complexity into six levels. Anderson and Krathwohl revised it in 2001, and the revised version remains one of the most cited frameworks in education. The six levels, from simplest to most complex: Remember, Understand, Apply, Analyze, Evaluate, Create.

This taxonomy maps directly to prompt quality, and understanding the mapping will change how you write every prompt going forward.

Remember. "What is machine learning?" This is a retrieval prompt. You are asking the AI to recall a definition. The output is generic because the input is generic. Any AI model can answer this, and every answer will sound the same.

Understand. "Explain machine learning in terms a restaurant owner would understand." Better. You have added an audience constraint, which forces the AI to translate rather than regurgitate. The output is more useful because the input demands interpretation.

Apply. "Show me how a restaurant owner could use machine learning to predict weekly ingredient orders." Now you are asking for a practical application in a specific context. The AI has to connect the concept to a real scenario.

Analyze. "Compare three machine learning approaches for restaurant inventory forecasting. Include the data requirements, accuracy tradeoffs, and implementation complexity of each." You are asking the AI to break down a problem into components and evaluate the relationships between them.

Evaluate. "Assess whether machine learning for inventory forecasting is worth the investment for a single-location restaurant doing $800K in annual revenue, given the implementation costs and the alternatives." Now you are asking for judgment. The AI has to weigh competing factors against specific constraints.

Create. "Design a 90-day pilot program for implementing ML-based inventory forecasting at a single-location restaurant. Include the data collection plan, tool selection criteria, success metrics, and a decision framework for whether to continue after the pilot." This is the highest cognitive level. You are asking the AI to synthesize multiple concepts into something new.

Here is what matters: AI struggles at the higher cognitive levels. The Online Learning Consortium's 2025 research on AI in education found that AI performs reliably at Remember and Understand, acceptably at Apply, and inconsistently at Analyze, Evaluate, and Create. The higher you go on Bloom's taxonomy, the more your own thinking has to carry the interaction.

This is why structured thinking is the human skill for Level 2. The quality of your prompt determines which cognitive level you are operating at. A vague prompt locks you into Level 1 of Bloom's taxonomy regardless of how powerful the model is. A structured, specific prompt with clear constraints pushes the interaction into Level 4, 5, or 6, where the AI's output becomes genuinely valuable and your judgment becomes genuinely necessary.

The Training Gap

Organizations are starting to recognize that prompt engineering is a real skill, not a gimmick. But the investment is not matching the need.

SQ Magazine reported in 2026 that 68% of firms now provide some form of prompt engineering training. That sounds encouraging until you look at the details. Most of this training is surface level: a lunch-and-learn, a shared PDF, a 30-minute webinar. Very few organizations treat prompt engineering as a skill that requires practice, feedback, and iteration, the same way they would treat public speaking or financial modeling.

McKinsey's research on AI adoption found that 75% of knowledge workers now use AI in some capacity. But most are using it without any formal training. They learned by trial and error, picking up habits (good and bad) through repetition rather than instruction. This means the majority of professionals are stuck at Level 1 or early Level 2 without knowing it. They are getting results, but they have no framework for understanding why some prompts work and others do not.

The job market reflects the shift. AI job postings requiring prompt fluency grew 7x in two years, according to Gallup's workforce research. Positions requiring generative AI skills quadrupled in the same period. Employers are not just looking for people who can use AI. They are looking for people who can use AI well. The distinction between "I use ChatGPT" and "I write structured prompts that produce consistent, high-quality output" is becoming a hiring criterion.

Yet only 28% of organizations plan to invest in AI upskilling for their existing workforce. That gap between demand and investment is an opportunity for individuals. If your organization is not training you, you can train yourself. The frameworks in this article are not proprietary. They are professional communication skills applied to a new medium. Every hour you spend improving your prompt structure compounds into better output, faster work, and a measurable edge over colleagues who are still guessing.

The GIGO Principle

Garbage In, Garbage Out. The idea dates back to Charles Babbage, the father of the computer, in the 19th century. It is one of the oldest principles in computing, and it has never been more relevant than it is right now.

EBSCO Research Starters traces the concept through the full history of computing: from Babbage's Analytical Engine through early mainframes to modern software engineering. The principle has survived every technological revolution because it describes something fundamental about information processing. No system, no matter how sophisticated, can compensate for bad input.

This extends beyond computers to human cognition. Your brain builds on input quality. If you consume vague information, you form vague mental models. If you consume structured, specific information, you form structured, specific mental models. The same dynamic applies to AI. A vague prompt is garbage in. The model cannot compensate for missing context. It cannot infer your audience if you do not specify one. It cannot format the output correctly if you do not tell it what format you need. It cannot apply the right constraints if you do not define them.

The model will always produce something. That is what makes GIGO dangerous in the AI context. With traditional software, garbage in often produced an error message. The system failed visibly. With AI, garbage in produces a plausible-sounding response that might be completely wrong for your needs. You get output that looks professional, reads well, and misses the point entirely. There is no error message. There is no red flag. There is just a confident, articulate answer to a question you did not actually ask.

This is why structured thinking matters more than prompt tricks, templates, or "magic prompts" you find on social media. The input quality is not about the right words. It is about the right thinking. If you know what you want, who it is for, what format it should take, and what constraints apply, the prompt almost writes itself. If you do not know those things, no template will save you.

Practical Exercise: The Prompt Upgrade

This exercise takes 15 minutes and will immediately improve your AI results. Use your real prompt history, not made-up examples.

  1. Find your last 5 AI prompts. Open your ChatGPT or Claude history. Scroll back to the last five things you asked AI to do. Copy them somewhere you can review them.
  2. Score each prompt. For each one, answer three questions: Did you specify the audience? Did you specify the format? Did you include at least one constraint? Give yourself one point for each yes. A perfect score is 3 out of 3.
  3. Pick the weakest prompt and rewrite it. Take the prompt that scored lowest and rewrite it with four elements: who this is for, what you need, the format you want, and one constraint. For example, if your original prompt was "Write me a follow-up email," your rewrite might be: "Write a follow-up email to a prospective client who attended our webinar last week. The tone should be professional but warm. Keep it under 150 words. Include one specific reference to the webinar content."
  4. Run both versions and compare. Send the original prompt and the rewritten prompt to the same AI model. Compare the outputs side by side. Notice the difference in specificity, relevance, and usability. (A minimal script for this comparison appears just after this list.)
  5. Save your best prompts. Start building a prompt library. Every time you write a prompt that produces excellent output, save it. Over time, you will build a personal reference that makes every future interaction faster and better.
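If you prefer to run step 4 from the command line, the sketch below does the side-by-side comparison in a few lines of Python. It is a minimal example, assuming the Anthropic Python SDK (pip install anthropic) and an API key in the ANTHROPIC_API_KEY environment variable; the model name is a placeholder, and pasting both prompts into a chat window works just as well.

```python
# Minimal sketch of step 4: run the original and rewritten prompts through
# the same model and print the outputs back to back.
# Assumes the Anthropic Python SDK (`pip install anthropic`) and an
# ANTHROPIC_API_KEY environment variable; the model name is a placeholder.
import anthropic

client = anthropic.Anthropic()

ORIGINAL = "Write me a follow-up email."
REWRITTEN = (
    "Write a follow-up email to a prospective client who attended our "
    "webinar last week. The tone should be professional but warm. Keep it "
    "under 150 words. Include one specific reference to the webinar content."
)

def ask(prompt: str) -> str:
    """Send one prompt and return the text of the model's reply."""
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder: use whichever model you normally use
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.content[0].text

for label, prompt in (("ORIGINAL", ORIGINAL), ("REWRITTEN", REWRITTEN)):
    print(f"--- {label} ---\n{ask(prompt)}\n")
```

Seeing the two responses printed back to back makes the difference concrete: same model, same task, very different usability.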

What Comes Next

You get good first answers now. Your prompts are structured, specific, and consistently better than what most people produce. That is a real and valuable skill.

But you are still accepting the first answer. You are not pushing back when something feels off. You are not asking follow-up questions, stress-testing the reasoning, or challenging the AI to go deeper. The interaction ends when the AI responds.

Level 3 is the Lieutenant: the Critical Thinker. At that level, you treat AI as a thinking partner, not a vending machine. You iterate. You challenge. You use AI to pressure-test your own ideas, not just execute your instructions. The human skill shifts from structured thinking to self-management, specifically the frustration tolerance and persistence required to push through when AI underperforms and get to the answer that actually matters.

Continue to Level 3: The Lieutenant →

Sources

  • Minto, B. (1987). The Minto Pyramid Principle: Logic in Writing, Thinking, & Problem Solving. Minto International. Originally developed at McKinsey & Company in the 1960s.
  • Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. Longman.
  • Gupta, A. (2025). Structured prompting and API cost reduction in production LLM applications.
  • Braintrust. Prompt formatting and accuracy benchmarks across large language models. braintrustdata.com
  • Lakera. Prompt engineering best practices and productivity analysis. lakera.ai
  • Online Learning Consortium (2025). AI performance across cognitive complexity levels in educational contexts.
  • SQ Magazine (2026). State of prompt engineering training in enterprise organizations.
  • McKinsey & Company. AI adoption and knowledge worker usage survey.
  • Gallup. AI skills demand in the workforce: job posting analysis.
  • EBSCO Research Starters. Garbage In, Garbage Out: history and application of the GIGO principle.

Frequently Asked Questions

What is prompt engineering?

Prompt engineering is the practice of writing structured, specific instructions for AI systems to produce consistent, high-quality output. It involves specifying context, audience, format, and constraints rather than typing vague requests. In the 7 Levels of AI framework, prompt engineering is Level 2: The Ensign, and its core human skill is structured thinking.

Does prompt length matter for AI quality?

Prompt length alone does not determine AI output quality. Research shows that structured short prompts reduced API costs by 76% while maintaining the same quality as longer alternatives. Long prompts can actually hurt performance by introducing noise and conflicting information. What matters is specificity and structure, not word count.

What is structured thinking?

Structured thinking is the ability to organize your ideas logically before communicating them. Professional frameworks like the Minto Pyramid Principle (conclusion first), BLUF (Bottom Line Up Front), and MECE (Mutually Exclusive, Collectively Exhaustive) are all forms of structured thinking. These same frameworks that make business communication effective also make AI prompts effective.

How do I write better AI prompts?

Better AI prompts share four characteristics: they specify who the output is for (audience), what you need (task), the format you want (structure), and at least one constraint (boundaries). Start by reviewing your last five AI prompts and scoring each one against these criteria. Rewrite the weakest one, run both versions, and compare the output. The difference is immediate and measurable.

What's Your AI Level?

Take the assessment to find out exactly where you are in the 7 Levels. Then we'll show you what to work on next.