The AI You Cannot Use Is Already Better Than the AI You Can

Anthropic built an AI that found security flaws hidden for 27 years. Then they decided not to release it. What the capability gap means for your AI strategy.

By Harrison Painter · April 9, 2026 · Updated April 9, 2026 · 5 min read

There is a version of AI you do not have access to. Not because you have not paid for the right subscription. Not because your company has not approved the tool. Because the company that built it decided it was too capable to let anyone use.

On April 7, Anthropic announced Claude Mythos Preview. It scored 93.9% on SWE-bench Verified, the standard coding benchmark. The next closest model scored 80.8%. That is not a gap. That is a canyon.

What Mythos Found

Anthropic pointed Mythos at the world's most critical software. Operating systems. Web browsers. Video codecs. Infrastructure that billions of people depend on every day.

It found thousands of zero-day vulnerabilities. Flaws that no human had ever identified. Some had been sitting in production code for decades.

In OpenBSD, a system built by some of the most security-conscious engineers on the planet, Mythos found a flaw that had been hiding for 27 years. A remote crash vulnerability in code written in 1999. Every security audit, every code review, every automated test since then had missed it.

In FFmpeg, the video library used by nearly every streaming service and video application, Mythos found a 16-year-old bug. Automated testing tools had run through that exact line of code 5 million times without catching it.
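How can automated tests run through a line 5 million times and still miss the bug? Because code coverage is not bug detection: a line can execute correctly on every input a fuzzer happens to try and misbehave only on one rare input. Here is a minimal, hypothetical sketch in Python, simulating 32-bit C arithmetic. This illustrates the general pattern, not the actual FFmpeg flaw:

```python
MASK = 0xFFFFFFFF  # simulate 32-bit unsigned C arithmetic

def range_ok(offset: int, size: int) -> bool:
    """Toy bounds check (hypothetical example).

    The intent: accept only if offset + size fits in a 4096-byte
    buffer. But in 32-bit arithmetic the sum can wrap around, so a
    huge `size` slips past the check, and a later copy of `size`
    bytes would corrupt memory.
    """
    return ((offset + size) & MASK) <= 4096

print(range_ok(4, 8192))        # ordinary oversized input: correctly rejected
print(range_ok(4, 0xFFFFFFFF))  # sum wraps to 3: incorrectly accepted
```

Millions of ordinary inputs exercise that check and get the right answer; only the one wrapping pair exposes the flaw. Finding it takes reasoning about the arithmetic rather than brute-force execution, which is the kind of analysis attributed to Mythos.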

In FreeBSD, it found a 17-year-old flaw that grants unauthenticated root access. Full system control. No credentials needed.

Mythos found these without human guidance. It identified the vulnerabilities, chained them together, and in some cases wrote the exploit code autonomously.

What Anthropic Did Next

They decided not to release it.

Instead, they launched Project Glasswing. Twelve organizations now have access: Apple, Google, Microsoft, Amazon, NVIDIA, JPMorganChase, CrowdStrike, Cisco, Broadcom, the Linux Foundation, Palo Alto Networks, and Anthropic itself. Forty additional organizations that maintain critical infrastructure received gated access.

Anthropic committed $100 million in usage credits and $4 million in direct donations to open-source security. The rest of the world does not get to use it.

Why This Matters for Your AI Strategy

Here is what most professionals miss about this story. The AI tools you use today, including ChatGPT, Claude, and Gemini, are not the best AI that exists. They are the best AI that the companies building them believe is safe enough for public consumption.

Mythos is what lives behind the wall.

That gap between public AI and private AI is not theoretical. It is 13 percentage points on the most widely used coding benchmark. It is the difference between finding a bug that 5 million automated tests missed and not finding it. It is a capability advantage that 12 companies now have and your company does not.

For business leaders planning an AI strategy around the tools available today, this is a recalibration moment. The AI you can see is not the AI that exists. The question is no longer "is AI good enough to matter?" The question is how much of AI's actual capability is invisible to you.

What This Tells You About the Next 12 Months

Anthropic did not restrict Mythos because it failed. They restricted it because it succeeded too well. This is a new kind of corporate decision in the AI era: "Our product is too capable to sell."

When the companies building AI start gatekeeping their own products, the trajectory is steeper than most people are pricing in. Claude Opus 4.6 was the best publicly available model last month. Mythos is 13 points ahead. That improvement did not take years. It took months.

The models you will have access to in six months will be more capable than anything you can use today. The models you will not have access to will be further ahead still.

Three Lessons

1. Your AI Benchmark Is Wrong

If you are evaluating AI based on what ChatGPT or Claude can do today, you are benchmarking against the public tier. The private tier is already operating at a level that rewrites assumptions. Plan for where AI is going, not for where the version you can access is today. This is Level 5 (Design Thinker: Systems Integrator) thinking: understanding the architecture of capability tiers and planning around what is coming, not just what is available.

2. The Capability Gap Is a Competitive Variable

Twelve organizations now have access to a model that found what 27 years of human expertise could not. If your competitor is on that list and you are not, the playing field is not level. This will happen again with future models. Access to frontier AI is becoming a strategic advantage, not just a productivity tool.

3. "Too Powerful to Release" Is a Signal, Not a Headline

When the builder says the product works too well to sell, pay attention. That means the next publicly available model will be closer to what Mythos can do. Prepare for that version now. Build the skills, the processes, and the judgment to use more powerful AI when it arrives.

Sources:

Anthropic, "Project Glasswing: Securing critical software for the AI era" (2026)
The Hacker News, "Anthropic's Claude Mythos Finds Thousands of Zero-Day Flaws" (2026)
NxCode, "Claude Mythos Preview: 93.9% SWE-bench" (2026)

Frequently Asked Questions

What is Claude Mythos and why was it not released publicly?

Claude Mythos Preview is Anthropic's most powerful AI model, scoring 93.9% on SWE-bench Verified; the next closest model scored 80.8%. Anthropic decided not to release it publicly because it can autonomously find thousands of zero-day security vulnerabilities in critical software. Instead, they created Project Glasswing, giving 12 major organizations and 40 critical-infrastructure maintainers access to the model for defensive security work.

What is the capability gap between public and private AI models?

The gap between public AI (what consumers and most businesses can access) and private AI (what the companies building these models keep internal or restrict) is currently 13 percentage points on the most widely used coding benchmark. Mythos found vulnerabilities that 27 years of human security audits and 5 million automated test runs had missed, demonstrating capabilities far beyond what publicly available models can do.

What does Project Glasswing mean for business AI strategy?

Project Glasswing signals that the AI tools available to most businesses are not the frontier of what exists. Twelve organizations including Apple, Google, Microsoft, and Amazon now have access to capabilities your organization does not. For business leaders, this means AI strategy should plan for where capability is heading, not where the publicly available tools are today. The models available in six months will be significantly more capable than anything accessible now.

Harrison Painter
AI Business Strategist. Founder, LaunchReady.ai and AI Law Tracker.

Harrison helps teams build AI systems that cut cost and grow revenue. Nearly 20 years of business experience. 2.8M YouTube views. Founder of LaunchReady.ai and the 7 Levels of AI framework. Author of You Have Already Been Replaced by AI.
