AI Governance · Spoke Read

Shadow AI Inventory: A 7-Step Discovery Operation

An auditor will ask for a list of every AI system your organization uses. A 2025 IBM study found one in five breached organizations had a shadow-AI-linked incident. Here is how to run the seven-step discovery operation that answers the auditor and closes the exposure behind that number, in 30 days.

By Harrison Painter · May 10, 2026 · Updated May 12, 2026 · 8 min read

TL;DR. Shadow AI is the use of AI tools outside formal IT oversight. IBM's 2025 Cost of a Data Breach Report (Ponemon, 600 breached organizations, 16 countries, 17 industries, March 2024 through February 2025) found one in five breached organizations had a shadow-AI-linked incident, costing $670,000 more on average ($4.63M vs $3.96M baseline), with 65 percent of those breaches exposing customer PII and 97 percent of AI-related breaches lacking AI access controls. The auditor question CEOs cannot yet answer is the same question regulators are starting to ask: list every AI system your organization uses. The seven-step discovery operation below builds that list in 30 days: survey, network audit, browser audit, procurement audit, sanctioned-tool catalog, four-tier risk classification, and a 90-day cadence.

The auditor question your organization probably cannot answer

Before any regulatory deadline, a single tactical question separates organizations that are ready from organizations that are not: list every AI tool your people use, with the contract status, the data sensitivity, and the business owner. Across the leadership teams I work with, the answer is almost always the same: the list does not exist.

The supporting numbers come from IBM's 2025 Cost of a Data Breach Report, prepared by the Ponemon Institute, covering 600 breached organizations across 16 countries and 17 industries with data collected from March 2024 through February 2025. One in five of those organizations had a breach involving shadow AI. Shadow-AI-linked breaches cost an average of $4.63 million versus the $3.96 million baseline for incidents without shadow-AI involvement, a $670,000 premium. Sixty-five percent of shadow-AI breaches exposed customer personally identifiable information. And among organizations breached through AI models or applications specifically, 97 percent reported lacking proper AI access controls.

The behavior is also more common than IT typically assumes. A 2025 Cybernews survey of more than 1,000 US employees found 59 percent use shadow AI tools at work, and 93 percent of executives and senior managers reported the same. The discovery operation has to assume usage at every layer of the org chart, including the office that authorizes the inventory.

The American Medical Association's 2024 Physician Sentiments survey found 66 percent of US physicians used AI tools in 2024, up from 38 percent in 2023, a 28-point year-over-year jump. Set that adoption curve against the control gap in IBM's numbers above, and the pattern holds at every layer of the organization: the surface is large enough that informal awareness does not substitute for a structured discovery process.

What shadow AI is, what it is not, and the AI-BOM connection

Shadow AI is the use of AI tools, especially consumer large language models like ChatGPT, Claude, Gemini, Copilot, and Perplexity, by employees, contractors, executives, or business units outside formal IT and security oversight. Shadow AI usually means personal accounts on consumer AI platforms accessed through web browsers or personal devices, where the organization has no visibility into what data is being shared, no contractual data-protection terms with the AI provider, and no audit trail.

Shadow AI differs from sanctioned AI in five concrete ways. A sanctioned tool is procured by the organization, governed by an enterprise contract or business associate agreement where applicable, accessed through corporate single sign-on, monitored by IT, and covered by an updated acceptable use policy. Shadow AI bypasses every one of those controls.

What does NOT count as shadow AI: an employee using the company's licensed Microsoft 365 Copilot through their corporate account is sanctioned AI. An IT-approved Claude Enterprise deployment with logged usage is sanctioned AI. A vendor-built application that uses an embedded LLM under a signed data-processing agreement is sanctioned AI. The distinction is governance, not the tool itself.

The companion concept is the AI Bill of Materials (AI-BOM): the technical extension of an AI inventory. An AI-BOM is a machine-readable manifest documenting every model, agent, MCP connector, third-party AI dependency, and embedded AI feature in an organization's environment, generated continuously rather than manually. NIST's AI Risk Management Framework places the AI inventory inside the GOVERN function as a first-step organizational control: mechanisms to inventory AI systems are required and should be resourced according to organizational risk priorities.
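
For illustration, one AI-BOM entry might look like the minimal sketch below. The schema is a hypothetical shape drawn for this article, not a published standard; every field name here is an assumption.

```python
# A minimal, illustrative AI-BOM entry. The field names are assumptions
# for this article, not a published AI-BOM standard.
from dataclasses import dataclass, field

@dataclass
class AIBOMEntry:
    name: str                 # tool, model, or feature name
    vendor: str               # AI provider
    kind: str                 # "model" | "agent" | "mcp_connector" | "embedded_feature"
    contract: str             # "enterprise" | "BAA" | "DPA" | "none"
    sso: bool                 # accessed through corporate single sign-on?
    data_types: list[str] = field(default_factory=list)  # data permitted to flow in
    business_owner: str = ""  # accountable person or unit

# Example record for a sanctioned deployment:
entry = AIBOMEntry(
    name="Claude Enterprise",
    vendor="Anthropic",
    kind="model",
    contract="enterprise",
    sso=True,
    data_types=["internal-docs"],
    business_owner="IT Security",
)
```

An automated AI-BOM generator would emit one such record per discovered model, connector, and embedded feature, serialized to a machine-readable format and regenerated on every inventory cycle.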

The seven-step discovery operation

The sequence below is engineered so the surface gets mapped before risk classification. Running classification before discovery produces a clean-looking inventory that misses the tools causing the actual exposure. The CEO has to authorize the no-punishment posture in Step 1 because punishment-led surveys produce false data, and false data wastes the entire effort.

  1. Send the no-punishment survey. Email every employee asking which AI tools they use, how often, and with what data types. Lead with: we are not penalizing anyone; we are mapping the surface so we can support the work safely. Keep it under eight questions. Include the option "I am not sure if what I use counts as AI."
  2. Pull the network and SaaS audit. Have IT export 90 days of DNS query logs and SaaS spend records. Filter for known AI domains: openai.com, anthropic.com, claude.ai, gemini.google.com, perplexity.ai, copilot.microsoft.com, cohere.com, mistral.ai, replicate.com, huggingface.co, midjourney.com, plus AI-feature SaaS (Notion AI, Zoom AI Companion, Adobe Firefly, Grammarly, Otter.ai). The list will run longer than the survey results (see the matching sketch after this list).
  3. Audit browser activity. Many AI tools are browser-only. Endpoint browser-extension visibility, or commercial tooling such as Cyberhaven, LayerX, Wing Security, or Witness AI, reveals what tools employees access and what data is being pasted into them. This is where LayerX's browser-telemetry finding, that roughly 67 percent of enterprise AI usage runs through unmanaged personal accounts, shows up in your own data.
  4. Audit procurement card and expense data. Many shadow AI subscriptions appear on personal expense reports as $20-per-month tools with cryptic names. Pull 90 days of card data and search for ChatGPT, Claude, Gemini, Perplexity, ElevenLabs, Runway, Synthesia, HeyGen (the sketch after this list covers card data too). Each match is either a sanctioned tool that was never centrally tracked, or a shadow tool the company is reimbursing.
  5. Build the sanctioned-tool catalog. List every AI tool the organization HAS approved, with vendor, contract type (BAA where HIPAA-relevant, DPA where GDPR-relevant), SSO integration, permitted data types, and business owner. Many organizations discover the sanctioned catalog is 10 to 20 tools, while Steps 1 through 4 have surfaced over 100.
  6. Risk-classify every discovered tool. Four tiers. Tier 1: sanctioned, contracted, low-risk data. Tier 2: sanctioned, contracted, sensitive data permitted. Tier 3: shadow but low-risk. Tier 4: shadow plus sensitive data, immediate intervention. Tier 4 items get a 30-day remediation plan: migrate the workflow to a Tier 1 or Tier 2 tool, or accept and document the risk with executive sign-off (see the tier sketch after this list).
  7. Lock the quarterly cadence. AI tool adoption moves faster than annual policy reviews. Run the inventory every 90 days at minimum. Each cycle catches new tools added in the prior quarter. Organizations that lock the cadence build durable visibility; organizations that run it once produce a snapshot that ages out in six months.
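
To make Steps 2 and 4 concrete, here is a minimal Python sketch of the matching pass, assuming IT and finance can export DNS queries and card transactions as CSVs with query and merchant columns. The file layout, column names, and keyword list are assumptions; a real pass would also normalize merchant strings and deduplicate by employee and tool.

```python
# Sketch of the Step 2 (DNS) and Step 4 (card data) matching pass.
# Assumes CSV exports with "query" and "merchant" columns; adjust to
# whatever your IT and finance systems actually emit.
import csv

AI_DOMAINS = {
    "openai.com", "anthropic.com", "claude.ai", "gemini.google.com",
    "perplexity.ai", "copilot.microsoft.com", "cohere.com", "mistral.ai",
    "replicate.com", "huggingface.co", "midjourney.com",
}
AI_VENDOR_KEYWORDS = {
    "chatgpt", "openai", "claude", "anthropic", "gemini", "perplexity",
    "elevenlabs", "runway", "synthesia", "heygen",
}

def matches_ai_domain(query: str) -> bool:
    # Match the domain itself and any subdomain (e.g. chat.openai.com).
    q = query.lower().rstrip(".")
    return any(q == d or q.endswith("." + d) for d in AI_DOMAINS)

def dns_hits(path: str) -> list[dict]:
    with open(path, newline="") as f:
        return [row for row in csv.DictReader(f) if matches_ai_domain(row["query"])]

def card_hits(path: str) -> list[dict]:
    with open(path, newline="") as f:
        return [row for row in csv.DictReader(f)
                if any(k in row["merchant"].lower() for k in AI_VENDOR_KEYWORDS)]
```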
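
The Step 6 tier logic reduces to two questions per tool: is it sanctioned, and does sensitive data flow through it? A minimal sketch restating the article's four tiers as code:

```python
# Four-tier classification from Step 6. The two boolean inputs per tool
# come out of Steps 1 through 5.
def risk_tier(sanctioned: bool, sensitive_data: bool) -> int:
    if sanctioned:
        return 2 if sensitive_data else 1   # Tier 1/2: contracted, governed tools
    return 4 if sensitive_data else 3       # Tier 3/4: shadow tools

# Tier 4 (shadow + sensitive data) triggers the 30-day remediation plan.
assert risk_tier(sanctioned=False, sensitive_data=True) == 4
```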

For a mid-market company (100 to 2,000 employees), the first inventory cycle takes about 30 days from CEO sponsorship to risk-classified output. Larger enterprises or regulated industries may need 60 to 90 days. The 7 Levels of AI Proficiency framework points to the second-order benefit: the executive who runs the first cycle personally develops Level 4 Commander capability, which is exactly the context-engineering competency the next governance practice will need.

The regulatory context behind the inventory

An AI inventory is the foundational artifact every active AI governance regime asks for first. The current US and EU landscape:

EU AI Act. The high-risk system provisions take effect August 2, 2026. Article 99 sets a tier of €15 million or 3 percent of worldwide annual turnover, whichever is higher, for many noncompliance categories tied to high-risk AI system obligations (providers, authorized representatives, importers, distributors, deployers, notified bodies, and the Article 50 transparency requirements), separate from the €35 million or 7 percent tier reserved for Article 5 prohibited AI practices. A company with €600 million in worldwide turnover therefore faces exposure up to €18 million under the lower tier. For shadow-AI-inventory exposure tied to high-risk systems used in employment, education, essential services, or law enforcement, the €15M / 3 percent tier is the relevant one.

Colorado AI Act (SB 24-205). Originally scheduled to take effect June 30, 2026. On April 27, 2026 a federal court paused enforcement following litigation (xAI v. Weiser, with DOJ intervention). Colorado lawmakers have since advanced SB 26-189, a replacement bill that would narrow and revise the original framework (eliminating bias audits and impact assessments in favor of transparency measures, with a January 1, 2027 effective date). SB 26-189 passed the Senate on May 7, 2026 and the House on May 9, 2026, and is headed to Governor Polis for signature. The original law has not simply disappeared; companies with Colorado exposure should still complete inventory work.

NIST AI RMF. The GOVERN function of the framework core requires that organizations maintain mechanisms to inventory AI systems (GOVERN 1.6). This is a voluntary framework in the US but is widely referenced in procurement questionnaires, audit findings, and federal contracting.

HIPAA. Any AI vendor that creates, receives, maintains, or transmits protected health information on behalf of a covered entity is a business associate under HIPAA and must sign a Business Associate Agreement. Shadow AI in healthcare environments is a HIPAA exposure surface regardless of any AI-specific regulation.

The common thread: every regime above presumes the existence of an AI inventory. None of them issue a starter template. The discovery operation is the work organizations have to do themselves.

Indiana operators: high-trade-secret and high-PHI exposure

Indiana mid-market companies sit at the sharp end of the shadow AI exposure curve for three reasons.

First, the industries that dominate Indiana's economy carry the highest data sensitivity per shadow AI exposure. Manufacturing carries trade-secret risk: Cummins, Allison Transmission, Subaru of Indiana, and Toyota Material Handling run process specs, supplier pricing, and proprietary defect-detection logic that becomes AI training input the moment a shop-floor manager pastes a draft into ChatGPT. Healthcare and life sciences carry PHI risk: Eli Lilly, Roche Diagnostics, IU Health, Community Health Network, Eskenazi, and Parkview each have HIPAA exposure on every AI interaction with patient data. The AMA survey cited above puts physician AI adoption at 66 percent and climbing, while standard healthcare BAAs frequently are not tailored to AI vendor use (Foley & Lardner, May 2025). Financial services adds its own surface: OneAmerica, Old National, Salin, and Centier handle nonpublic personal information that triggers state and federal privacy law.

Second, Indiana's regulatory exposure is extraterritorial. An Indianapolis manufacturer selling to EU customers is in scope for the EU AI Act compliance deadline of August 2, 2026, with the €15 million / 3 percent of worldwide turnover penalty tier for high-risk-system noncompliance. The Colorado AI Act was paused by federal court order April 27, 2026 and is under legislative reconsideration, but Indiana companies that hire in Colorado should still complete inventory work because the underlying law has not been repealed.

Third, the Indiana state posture ships training but no inventory tooling. Governor Braun's IN AI initiative, announced April 28, 2026 and executed through CICP, is workforce-adoption focused: public materials emphasize identifying AI opportunities, technical support, and connecting employers with training resources and student talent. They do not appear to provide a dedicated shadow AI inventory program, model policies, or compliance toolkit. Innovation Connector and Conexus Indiana ship training too; none ships a shadow-AI inventory program.

Indiana's AI-related legal landscape is still narrow. HB 1133 created political deepfake disclaimer rules in 2024. HB 1271, effective July 1, 2026, limits certain healthcare downcoding and recoupment practices, including limits on automated or AI-only downcoding. HB 1620, a 2025 healthcare AI disclosure bill, was introduced but did not become law. The inventory-tooling shortfall is consistent across most US states; Indiana mid-market CEOs do not have Fortune 500 security budgets to backfill it.

How shadow AI maps to the AI Governance Maturity Model

The AI governance maturity model sequences organizations across five stages, from Ungoverned (Level 0) to Strategic (Level 4). Shadow AI presence is the cleanest leading indicator of which stage an organization actually sits at, regardless of what its policy library says.

Level 0: Ungoverned. No AI inventory exists. Shadow AI is rampant and undocumented. A large share of organizations sit here today.

Level 1: Aware. Leadership knows shadow AI is happening. No formal inventory yet. Policies may exist but are not enforced or measured.

Level 2: Inventoried. A formal AI inventory exists, refreshed at least annually. Sanctioned and shadow tools are both cataloged. Risk classification is in place.

Level 3: Continuous. The inventory is updated continuously (quarterly cadence at minimum), backed by automated discovery tooling. Shadow AI is detected within 30 days of adoption. Tier 4 items get remediated under a documented timeline.

Level 4: Strategic. The AI inventory is one feed inside an organizational governance pipeline. The AI-BOM is generated automatically. Shadow AI is read as a workflow signal that informs sanctioned-tool procurement decisions.

Many organizations are at Level 0 or Level 1 today. The 7-step inventory above moves an organization from Level 0 to Level 2 in 30 days. The quarterly cadence (Step 7) is what locks Level 3.

Shadow AI is information about your organization's actual workflow, surfaced for free. The cost is incurred by ignoring it; the value is captured by reading it.

How The 7 Levels of AI Proficiency integrates

Org-level governance maturity (the 5-level model above) and individual-level proficiency (The 7 Levels of AI Proficiency) are two different axes, and both are required. Moving org maturity from Level 2 to Level 3 needs individual L4-L5 proficiency on the team building the governance; org maturity Level 4 needs individual L5-L7.

Leading a shadow AI inventory effort calls for a Level 4 Commander minimum on The 7 Levels of AI Proficiency. The framework defines Level 4 Commander as the Context Engineer: the operator who manages conversation lifecycles, recognizes when context degrades, and reads environmental cues for what is and is not in scope. The underlying human skill at Level 4 is social awareness.

A shadow AI inventory is exactly a context-engineering problem at organizational scale. It requires the executive sponsor to read the actual operational context (what tools are running, not what the org chart says), recognize when the official narrative ("we use sanctioned tools only") has drifted from reality, manage the conversation with employees who use shadow tools (nondefensive intake without punishment), and sequence the discovery work so the surface gets mapped before risk classification, not after.

Level 1 Cadet and Level 2 Ensign leaders tend to treat shadow AI as enforcement, not visibility. Level 3 Lieutenant leaders use AI well personally but cannot yet structure an organizational discovery process. Level 4 is the floor for leading the inventory itself. Level 5 Captain extends to designing the ongoing intake mechanism. Level 6 Admiral builds the AI-BOM as a reusable workflow. Level 7 Mission Director operates the inventory as one feed inside an organizational AI governance pipeline.

The crossover insight: per Grant Thornton's April 2026 AI Impact Survey of nearly 1,000 senior US business leaders, 78 percent lack full confidence they could pass an independent AI governance audit within 90 days, in part because the context-engineering competency the audit defense requires has not yet been built. The shadow AI inventory is the cheapest first practice to develop it.

Frequently asked questions

What is shadow AI?

Shadow AI is the use of AI tools, especially consumer LLMs like ChatGPT, Claude, Copilot, Gemini, and Perplexity, by employees, contractors, executives, or business units outside formal IT oversight. Shadow AI bypasses corporate single-sign-on, vendor contracts, data-protection terms, and audit trails. A 2025 Cybernews survey of more than 1,000 US employees reported that 59 percent use shadow AI tools at work, including 93 percent of executives and senior managers.

How do I find out what AI tools my employees are actually using?

Run a seven-step discovery operation: a no-punishment survey, a 90-day DNS and SaaS audit, browser visibility tooling, procurement card audit, sanctioned-tool cataloging, four-tier risk classification, and a 90-day re-inventory cadence. The IT team alone cannot complete this; the CEO has to authorize the no-punishment posture for the survey to produce real data.

Is shadow AI illegal?

Shadow AI itself is not illegal in the US. The activities it enables can violate other laws. Pasting protected health information into ChatGPT violates HIPAA if no business associate agreement is in place. Uploading customer personal data may violate GDPR or state privacy laws. Using AI in employment decisions without disclosure may, in some jurisdictions, trigger state AI laws. Note that Colorado SB 24-205, originally scheduled for June 30, 2026 enforcement, was paused by a federal court on April 27, 2026 and is under legislative reconsideration. Indiana HB 1620 (introduced 2025) did not become law.

What is the difference between an AI inventory and an IT asset list?

An IT asset list catalogs hardware and licensed software. An AI inventory catalogs every AI tool in use, including web-only consumer tools the organization never licensed, AI features embedded in non-AI software, and shadow tools accessed through personal accounts. AI inventories often surface tools that traditional IT asset lists miss, especially browser-only tools, embedded AI features, and personal-account usage, because most AI tools are not procured through traditional IT channels.

How often should we run an AI inventory?

Quarterly at minimum. AI tool adoption moves faster than annual policy reviews. With 59 percent of US employees, and 93 percent of executives and senior managers, using shadow AI tools at work (Cybernews, 2025), the average organization is likely picking up new AI tools every month through ordinary employee experimentation. Organizations that run the inventory once produce a snapshot that ages out in six months; organizations that lock the 90-day cadence build durable organizational visibility.

What should we do when we discover unauthorized AI use?

Three responses, in order. Classify the risk (is sensitive data flowing?), find the workflow problem (why did the employee choose this tool over the sanctioned one?), and either migrate the workflow onto a sanctioned tool or sanction the new tool with proper contracts and access controls. Punishment-first responses make the next inventory cycle produce false data, which is worse than the original shadow tool.

Does HR need to be involved in AI inventory?

Yes, for two reasons. HR owns the acceptable use policy that needs updating to address AI specifically. According to ISACA 2025 research, only about 31 percent of organizations have a formal AI policy in place. HR also owns the employment-decision risk surface that triggers state AI laws and similar regulations. AI inventory is a cross-functional governance practice spanning HR, Legal, IT Security, Procurement, and the executive sponsor.

What is a reasonable scope for a first AI inventory?

For a mid-market company with 100 to 2,000 employees, a first inventory should target 30 days of discovery work and surface every AI tool used in the prior 90 days, classified into four risk tiers, with Tier 4 items (shadow plus sensitive data) carrying a 30-day remediation plan. Larger enterprises or regulated industries may need 60 to 90 days for a complete first inventory; smaller organizations under 100 employees can complete a first cycle in two weeks.

Sources

  • IBM Security and Ponemon Institute (2025). Cost of a Data Breach Report 2025. 600 breached organizations globally, data collected March 2024 through February 2025; 3,470 security and C-suite interviews. ibm.com/think/x-force/2025-cost-of-a-data-breach-navigating-ai and newsroom.ibm.com (July 30, 2025 release). Methodology scope (17 industries, 16 countries and regions) confirmed in Baker Donelson's 2025 analysis at jdsupra.com.
  • Cybernews (2025). Survey of 1,000+ US employees on shadow AI use. cybernews.com/ai-news/ai-shadow-use-workplace-survey.
  • LayerX Security (2025). Enterprise AI and SaaS Data Security Report 2025. Browser-telemetry analysis. layerxsecurity.com.
  • American Medical Association (2024). Physician Sentiments on Augmented Intelligence. 66% of physicians used AI in 2024, up from 38% in 2023. ama-assn.org.
  • Foley & Lardner LLP (May 2025). HIPAA Compliance for AI in Digital Health: What Privacy Officers Need to Know. foley.com.
  • NIST AI Risk Management Framework 1.0 (2023) and AI RMF Playbook (active resource). GOVERN 1.6 inventory mechanism guidance. airc.nist.gov.
  • ISACA (2025). AI use is outpacing policy and governance: roughly 31% of organizations have a formal AI policy. isaca.org.
  • European Parliament and Council. Regulation (EU) 2024/1689 (Artificial Intelligence Act), Article 99 Penalties. €15M / 3% tier for noncompliance by providers, deployers, importers, distributors, and with Article 50 transparency obligations. artificialintelligenceact.eu/article/99.
  • Baker McKenzie Connect On Tech (April 28, 2026). Colorado Two-Step: a federal court pauses enforcement of Colorado SB 24-205. connectontech.bakermckenzie.com.
  • Holland & Knight (April 2026). EU AI Act August 2, 2026 deadline analysis. hklaw.com.
  • Indiana Capital Chronicle (April 28, 2026). Governor Braun unveils AI business portal. indianacapitalchronicle.com.
  • AI Law Tracker (real-time governance reference). ailawtracker.org/governance.

This article is informational only. It is not legal advice. Consult counsel before making compliance decisions.

Harrison Painter
AI Business Strategist. Founder, LaunchReady.ai and AI Law Tracker.

Harrison helps teams build AI systems that cut cost and grow revenue. Nearly twenty years of business experience. 2.8M YouTube views. Founder of LaunchReady.ai and the 7 Levels of AI Proficiency framework. Author of You Have Already Been Replaced by AI and The White-Collar Factory is Closing.

Connect on LinkedIn

Find your AI Proficiency level

The free 7 Levels of AI Proficiency assessment places you across seven stages of AI capability. Under ten minutes. Research-backed scoring.

Get the weekly briefing

LaunchReady Indiana delivers AI news, compliance updates, and case studies for Indiana leaders. Every Tuesday. Five minutes.

Subscribe free