AI Readiness · Pillar Read

The 80% Shortfall: What Companies Should Learn From Higher Ed

A study of 1,960 university staff shows that the institutions producing your workforce have the same shortfall your company does. The leading indicator is in front of you now.

By Harrison Painter · May 4, 2026 · 5 min read

Eighty percent of US higher-education institutions tell their staff to develop AI skills on their own. Thirteen percent measure ROI on AI investments, even though ninety-two percent have an AI strategy. Fifty-six percent of staff already use AI tools the institution does not provide. The institutions producing the next generation of workers show the same pattern most companies show. Your company should pay attention to what they are reporting.

The numbers inside the workforce-producing institution

EDUCAUSE published one of the most thorough institutional-AI studies of 2026 in January. The work was led by Jenay Robert, distributed through four higher-ed associations (EDUCAUSE, AIR, NACUBO, and CUPA-HR), and reached 1,960 respondents across 1,800-plus US universities. Respondents were weighted against the IPEDS population (the Integrated Postsecondary Education Data System, the federal data system for US higher education) for representativeness. Methodologically, this is academic-tier institutional research, not vendor research.

The findings are quiet. The educators are not panicking. They are reporting what is happening on their own campuses, and what is happening looks a lot like what is happening inside corporate America.

80% of US higher-education institutions tell staff to develop AI skills on their own; only 71% offer in-house structured training. Strategy without instruction. Source: EDUCAUSE 2026, Robert. N=1,960 across 1,800+ institutions.

The 80% number is the load-bearing one. The institutions whose entire economic output is workforce development have, in most cases, abdicated their own internal AI training to self-study. Read that sentence twice. The institutions producing the workforce are telling the workforce to figure it out alone.

Strategy without measurement

Ninety-two percent of US higher-education institutions report having an AI strategy. Thirteen percent of them measure ROI on their AI investments. The strategy-execution split inside higher ed is 79 percentage points wide. Most strategies are not being measured against any outcome.

This pattern is not limited to universities. Stanford's HAI 2026 AI Index reports that AI agent deployment in production sits in the single digits across nearly all business functions. The strategy-without-measurement pattern is the dominant one across institutions and companies. Higher ed just measured itself first.

If you run a company with an AI strategy and no measurement layer underneath it, you are inside the 92% number. The institutions positioned to feed your hiring pipeline are inside the same number. Neither of you can hire your way out of this, and neither is currently measuring a way out.

The policy-awareness split

Ninety-four percent of higher-ed staff have used AI tools for work in the past six months. Fifty-four percent know their institution's AI policy. Forty percentage points separate use from awareness. Among executive leaders specifically, thirty-eight percent do not know their own institution's AI policy. The leadership tier is not closer to the rule book than the staff tier; in some cases, it is further.

Inside companies the same shape repeats. A documented AI policy without onboarding and training reads, in practice, like no policy. The policy is technically present. The behavior it was supposed to govern is unobserved.

Shadow tools at 56%

Fifty-six percent of higher-ed staff report, in the EDUCAUSE survey, using AI tools their institution does not provide. That is a self-reported, measured-behavior number. It is not a CEO suspicion. It is the staff telling researchers what they are doing.

Vendor surveys have published much higher figures (one frequently cited 94% number is a CEO-suspicion estimate, not a measured behavior). EDUCAUSE's 56% is the cleaner empirical anchor. More than half of the workforce inside institutions that produce the workforce is already operating on tools the institution did not authorize. Your company's number is unlikely to be much different.

The institution producing your next hire is itself in the same shortfall your company is in. The pipeline does not arrive AI-ready, because it cannot.

What operators should do with this

Three reads on the EDUCAUSE data, written for the operator who has to decide something this week.

One. Internal proficiency development is no longer optional. The leading indicator says external hiring will not solve this in the next 24 months. Universities are not going to backfill what 80% of them have already abdicated. If your team's AI proficiency is going to advance, you are the one developing it.

Two. Strategy without measurement is the dominant pattern, which means measurement is the differentiator. Operators who are willing to measure where their team currently sits, and re-measure ninety days later to confirm the change held, will be a small minority of the market. That minority will operate with information their peers do not have.

Three. Shadow AI is your team's leading indicator. The tools your staff are using without sanction are the tools they want and need. Half your knowledge workers are already on tools you did not pick. Treating that as a violation rather than as data is the wrong read.

Level 1 of the seven levels of AI proficiency is recognition. A team at Level 1 understands what AI is and what it is not, and can name the tools their colleagues are using. That is the floor, and it is the floor most companies are still verifying. Tomorrow's piece in this series describes what Level 1 looks like at the desk where the work actually happens. Related reading: Level 1: The Cadet.

Frequently asked questions

What does the EDUCAUSE 2026 study actually measure?

EDUCAUSE surveyed 1,960 staff, faculty, and executive leaders across 1,800-plus US higher-education institutions in fall 2025. Respondents were weighted against the IPEDS population for representativeness across sector, level, and size. The instrument covered AI strategy, training, policy, ROI measurement, perceived risks, and on-the-job AI use. Methodologically, it sits at the academic-research tier rather than the vendor-research tier.
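The weighting step described above can be illustrated with a minimal post-stratification sketch. The strata, counts, and response rates below are invented for illustration; they are not EDUCAUSE's actual weighting cells.

```python
# Minimal post-stratification sketch: align a sample's stratum mix
# with a known population mix (conceptually what IPEDS weighting does).
# All numbers here are invented for illustration.

def poststratify(sample_counts, population_counts):
    """Return a weight per stratum so the weighted sample mix
    matches the population mix."""
    n_sample = sum(sample_counts.values())
    n_pop = sum(population_counts.values())
    weights = {}
    for stratum, count in sample_counts.items():
        sample_share = count / n_sample
        pop_share = population_counts[stratum] / n_pop
        weights[stratum] = pop_share / sample_share
    return weights

# Hypothetical strata: institution size categories.
sample = {"small": 800, "medium": 700, "large": 460}        # respondents
population = {"small": 2400, "medium": 1800, "large": 800}  # institutions

w = poststratify(sample, population)

# A weighted proportion, e.g. share of staff reporting shadow-AI use
# (per-stratum shares below are also invented):
reported = {"small": 0.60, "medium": 0.55, "large": 0.50}
weighted_share = sum(
    w[s] * sample[s] * reported[s] for s in sample
) / sum(w[s] * sample[s] for s in sample)
```

The point of the sketch: an unweighted average would over-count whichever institution sizes answered the survey most, while the weighted average restores each stratum to its population share.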

Why does higher-education AI data apply to my company?

Universities are knowledge-economy institutions whose entire output is workforce development. If they cannot close their own internal AI proficiency shortfall, the assumption that new hires will arrive AI-ready does not hold. The parallel is also empirically visible: the same strategy-without-measurement, policy-without-awareness, and training-by-self-study patterns appear in corporate AI surveys at similar prevalence rates.

What is shadow AI and why does the 56% number matter?

Shadow AI is the use of AI tools that the institution does not provide or sanction. EDUCAUSE measured its prevalence directly at 56% of higher-ed staff, by self-report. That figure is a measured-behavior number, not a CEO-suspected number. It tells leaders that more than half of knowledge workers are already using AI tools outside institutional governance. The same pattern shows up in corporate environments.

What does internal AI proficiency development actually look like?

Three components, in order. One: a baseline measurement of where your team is across the seven levels of AI proficiency, anchored in human EQ skills. Two: a 6-week structured intervention that moves the team up at least one level. Three: a Day-90 re-measurement that confirms the level change held. The 7 Levels Engagement is the LaunchReady program that runs this protocol.

How do I tell if my company is in this same shortfall?

Five questions surface it quickly. Do you have a documented AI strategy? Do you measure ROI on your AI investments? Do staff know your AI policy? Do they use only the tools you provide, or do they bring their own? Do you measure their proficiency or assume it? If the answers track the higher-ed pattern (strategy yes, measurement no, policy ambiguous, shadow tools active, proficiency unmeasured), your company is in the same shortfall.
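The five questions above can be run as a mechanical self-check. The sketch below is illustrative, not a validated instrument; the question keys and the "higher-ed pattern" answers encoded in it are simply the answers named in the paragraph above.

```python
# Illustrative self-check against the higher-ed shortfall pattern.
# Question wording and scoring are a sketch, not a validated instrument.

# (question, answer that tracks the higher-ed pattern)
QUESTIONS = [
    ("documented AI strategy", True),           # pattern: strategy yes
    ("measures ROI on AI investments", False),  # pattern: measurement no
    ("staff know the AI policy", False),        # pattern: policy ambiguous
    ("staff use only provided tools", False),   # pattern: shadow tools active
    ("proficiency is measured", False),         # pattern: unmeasured
]

def matches_pattern(answers):
    """answers: dict mapping question -> bool.
    Returns how many answers track the higher-ed shortfall pattern."""
    return sum(1 for q, pattern in QUESTIONS if answers.get(q) == pattern)

answers = {
    "documented AI strategy": True,
    "measures ROI on AI investments": False,
    "staff know the AI policy": False,
    "staff use only provided tools": False,
    "proficiency is measured": False,
}
score = matches_pattern(answers)  # 5 of 5: fully tracks the pattern
```

A score near five means your company mirrors the higher-ed pattern; a score near zero means strategy, measurement, policy, tooling, and proficiency are all accounted for.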

Harrison Painter
AI Business Strategist. Founder, LaunchReady.ai and AI Law Tracker.

Harrison helps teams build AI systems that cut cost and grow revenue. Nearly twenty years of business experience. 2.8M YouTube views. Founder of LaunchReady.ai and the 7 Levels of AI Proficiency framework. Author of You Have Already Been Replaced by AI and The White-Collar Factory is Closing.


Find your AI Proficiency level

The free 7 Levels of AI Proficiency assessment places you across seven stages of AI capability. Under ten minutes. Research-backed scoring.

Get the weekly briefing

LaunchReady Indiana delivers AI news, compliance updates, and case studies for Indiana leaders. Every Tuesday. Five minutes.
