AI Governance · Assessment Instrument

AI Governance Maturity Model: 5 Stages, 7 Domains, 10-Minute Self-Assessment

A descriptive and diagnostic instrument for placing your organization on a 5-stage scale across 7 operational domains. Score in 10 minutes. Advance with a per-stage playbook.

By Harrison Painter · May 10, 2026 · 9 min read

The AI Governance Maturity Model is a 5-stage scale (Ungoverned, Aware, Structured, Managed, Strategic) that measures organizational AI governance across 7 operational domains. To self-assess, score each domain from 0-4 based on what evidence you can produce. Your org-level maturity equals your lowest domain score, because a chain breaks at its weakest link. Most organizations finish in 10 minutes and discover they sit at Level 0 or Level 1. Per-stage advancement playbooks estimate the time and cost of moving up: 4-6 weeks and under $5,000 to reach Level 1, 12-18 months and $200,000-plus annually to reach Level 4.

Where most companies sit, and why it matters now

60% of organizations plan to deploy agentic AI within the next 12 months. Only 26% have comprehensive AI governance policies in place. That 34-point readiness shortfall is not a curiosity. It is a structural exposure that shows up in breach cost, insurance premiums, regulatory exposure, and enterprise sales cycles.

Two of the headline figures share a single source. CSA and Google Cloud (State of AI Security and Governance, December 2025, n=300 IT and security professionals) report that 54% of organizations already use public frontier LLMs, 60% plan to deploy agentic AI within 12 months, and 26% have comprehensive AI governance policies. IBM (Cost of a Data Breach Report 2025) reports that 63% of breached organizations have no AI governance policy or are still developing one. Censinet (AI Adoption Survey 2025) reports that 84% of healthcare organizations have established an AI Governance Committee, but operational maturity lags the committee by 12-24 months in most cases. The Level 1 trap is the most common failure pattern: organizations confuse policy existence with policy enforcement.

Regulatory deadlines sharpen the timeline. Colorado AI Act SB 24-205 is scheduled to take effect June 30, 2026 (a federal court paused enforcement on April 27, 2026; the effective date is contingent on the litigation). California AI Transparency Act SB 942 takes effect August 2, 2026, the same day EU AI Act Article 50 enforcement begins. Indiana HB 1620 (healthcare AI disclosure, introduced 2025; available bill trackers show it did not become law) sits as a near-miss that signals where Indiana's regulatory floor is heading. The market for "we will figure governance out later" has been closing for the better part of a year.

The 5 stages of AI governance maturity

The 5-level scale is the canonical AI Law Tracker model (ailawtracker.org/governance). Names and order are verbatim. Operational definitions follow.

Level 0: Ungoverned

No policy exists. No designated owner. Employees use AI tools at their discretion with no oversight. Shadow AI is the default mode of adoption. The IT team cannot answer "which AI tools are being used here right now." No record exists of who approved any AI vendor. Customer-facing chatbots run without human review of outputs. Leadership often assumes "we don't really use AI" while sales, marketing, finance, and engineering have all signed up for ChatGPT, Claude, Copilot, and Gemini accounts independently.

Level 1: Aware

A 1-3 page AI acceptable use policy lives in the employee handbook. A senior leader has been designated as "responsible for AI" but the role is shared, not defined. Some teams have approved-tool lists; others do not. Training is one-time at hire, not recurring. Incident response is reactive: when something goes wrong, leadership decides what to do in the moment. The Censinet finding (84% of healthcare orgs with an AI Governance Committee) is the cleanest example of a Level 1 trap. The committee exists. The org thinks it sits at Level 3.

Level 2: Structured

The AI tool inventory is current within 30 days. Data classification tiers (Public, Internal, Confidential, Restricted) map to AI use rules. A vendor due diligence checklist exists and runs before any new AI tool gets approved. Annual training is mandatory for AI-using roles. Customer-facing AI use carries required disclosures, including chatbot identification and AI-generated content labels. The drift risk at Level 2 is documentation drift: the processes are written down, but the audit cadence stops after the first quarter and the org backslides to Level 1 without noticing.

Level 3: Managed

AI tool usage is monitored through DLP rules, network-level visibility, or enterprise license analytics. Vendors get annual security reviews, contract reviews, indemnification language, and IP ownership audits. The incident response runbook exists, has been tabletop-tested, and the response time is measured. A quarterly governance review surfaces metrics to the executive team or board. Approved-versus-shadow ratio is measured (target: shadow AI under 10% of total AI spend). The trap at Level 3 is treating it as the destination. Without the continuous improvement loop of Level 4, today's Managed posture becomes next year's Aware posture.

Level 4: Strategic

Governance is a competitive advantage, not a cost center. Governance review cadence is monthly at the operations level and quarterly at the executive level. Metrics include leading indicators (training completion, audit findings, time-to-onboard new AI tool), not just lagging indicators (incidents, breaches). AI governance is a named line item in the cyber insurance application and produces a measurable premium reduction. The governance team is proactively scoping new AI capabilities for the business, not just gating them. The continuous improvement cycle is documented: each quarter produces 2-4 specific governance upgrades. The trap is confusing Strategic with Bureaucratic. Strategic governance enables more AI use, faster.

How to self-assess in 10 minutes

The 7-domain by 5-level matrix is the self-assessment instrument. Work through each domain in turn. Pick the highest level you can defend with a document or audit log. If you cannot point to evidence, drop to the level below. The aggregation rule is the chain rule: org-level maturity equals the lowest domain score.

The 7 governance domains follow the canonical AI Law Tracker model. Each entry describes what that domain at that level looks like in operational terms.

AI Inventory and Shadow AI
  0 Ungoverned: No inventory. Leadership cannot list tools in use.
  1 Aware: Inventory attempted; outdated within 30 days.
  2 Structured: Inventory current within 30 days; refreshed quarterly.
  3 Managed: Continuous monitoring; shadow AI under 10% of spend.
  4 Strategic: Real-time inventory; shadow AI under 5%; pre-approval pipeline.

Data Protection and Classification
  0 Ungoverned: No data rules for AI. Anything goes into prompts.
  1 Aware: Verbal rules exist; "don't put customer data in."
  2 Structured: Written classification tiers; DLP rules applied to AI inputs.
  3 Managed: Automated DLP enforcement; quarterly audit of inputs.
  4 Strategic: Predictive controls; classification informed by use-case risk scoring.

Vendor Management and Deployer Liability
  0 Ungoverned: No vendor review for AI tools.
  1 Aware: Procurement aware; reviews are inconsistent.
  2 Structured: Standard checklist applied before any AI vendor contract.
  3 Managed: Annual vendor reviews; contract clauses cover IP, indemnification, training-data rights.
  4 Strategic: Continuous vendor performance monitoring; contract templates updated quarterly to current regulation.

Human Oversight and Decision Authority
  0 Ungoverned: AI outputs used directly with no review.
  1 Aware: Human review encouraged but not required.
  2 Structured: Decision-tier matrix defines what requires human review.
  3 Managed: Human-in-the-loop logged and audited; escalation paths tested.
  4 Strategic: Risk-tiered oversight scaled to decision impact; oversight cost tracked.

Transparency and Disclosure
  0 Ungoverned: No customer-facing AI disclosures.
  1 Aware: Some chatbots labeled; AI-generated content unmarked.
  2 Structured: Standard disclosures on customer-facing AI; internal use-case register.
  3 Managed: Disclosures audited; customer disclosures match actual practice.
  4 Strategic: Disclosures are a competitive trust asset; transparency report published annually.

Incident Response and Monitoring
  0 Ungoverned: No process. Response is improvised when an incident surfaces.
  1 Aware: Incident reporting requested; no defined runbook.
  2 Structured: Runbook exists; tabletop-tested annually.
  3 Managed: Runbook tabletop-tested quarterly; mean time to contain measured.
  4 Strategic: Incident learnings feed back into policy quarterly; AI-specific scenarios refreshed.

Training and AI Literacy
  0 Ungoverned: No training. Onboarding mentions AI in passing if at all.
  1 Aware: One-time training at hire.
  2 Structured: Annual mandatory training for AI-using roles.
  3 Managed: Quarterly micro-training; role-based curricula; competency tested.
  4 Strategic: Training tied to The 7 Levels of AI Proficiency assessment; individual development plans live.

The chain rule matters because of how risk concentrates. An organization with 5 domains at Level 3 and 2 domains at Level 1 is a Level 1 organization for breach-cost, insurance, and regulatory-risk purposes. The IBM 2025 finding that 97% of organizations experiencing AI-related breaches lacked AI access controls (a control failure that cuts across Vendor Management and Human Oversight) makes the chain-strength model load-bearing in practice, not just on paper.
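The chain-rule aggregation is mechanical enough to sketch in a few lines. The stage names are from the model; the domain scores below are illustrative, not taken from any real assessment.

```python
# Chain-rule aggregation: org-level maturity is the MINIMUM domain score.
STAGES = ["Ungoverned", "Aware", "Structured", "Managed", "Strategic"]

def org_maturity(domain_scores: dict[str, int]) -> tuple[int, str, str]:
    """Return (level, stage name, weakest domain) for a 0-4 scored assessment."""
    if any(not 0 <= s <= 4 for s in domain_scores.values()):
        raise ValueError("each domain scores 0-4")
    weakest = min(domain_scores, key=domain_scores.get)  # first lowest-scoring domain
    level = domain_scores[weakest]
    return level, STAGES[level], weakest

# Illustrative scores: 5 domains at Level 3, 2 at Level 1 -> a Level 1 org.
scores = {
    "AI Inventory": 3, "Data Protection": 3, "Vendor Management": 1,
    "Human Oversight": 3, "Transparency": 3, "Incident Response": 1,
    "Training": 3,
}
level, stage, weakest = org_maturity(scores)
print(level, stage, weakest)  # -> 1 Aware Vendor Management
```

The minimum, not the average, is the whole point: averaging those scores would report 2.4 and hide the two exposed domains.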

How to advance from Level N to Level N+1

The advancement playbook is calibrated against industry-consensus implementation guides (Liminal 2025, Promethium 2025) and IBM 2025 breach-cost data. Time and cost ranges assume a mid-market organization (100-5,000 employees) with executive support.

0 to 1: Establish Awareness (4-6 weeks, under $5,000)

  1. Designate an owner. A single named person (typically COO, CIO, or General Counsel) accountable for AI governance. Calendar invite for a recurring monthly review.
  2. Draft a 2-page acceptable use policy. Approved tools, prohibited data inputs, disclosure rules for customer-facing AI. Template-driven; legal review optional at this stage.
  3. Quick AI tool census. Email survey to all department heads: "List every AI tool your team uses, paid or free." Compile into a single spreadsheet.
  4. Communicate the policy in a single all-hands. Not a training module yet. Just announcement and Q&A.
  5. Cost: under $5,000 if drafted in-house; up to $15,000 with light external counsel review.

1 to 2: Build Structure (2-3 months build, 4-6 months adoption, $15,000-$50,000)

  1. Stand up a tool-approval pipeline. Procurement and IT security review any new AI tool before it gets a credit card or SSO entry. Document the review checklist.
  2. Implement data classification rules. Public, Internal, Confidential, Restricted. Map to AI input rules: which tier can go into which tools.
  3. Deploy DLP controls. Modern endpoint DLP (Microsoft Purview, Google Workspace DLP, Netskope, Zscaler) applied to AI-tool inputs.
  4. Annual mandatory training rollout. A 30-60 minute curriculum tied to the policy. Completion tracked.
  5. First vendor reviews. Top 5 AI vendors get a security review and contract review covering IP ownership, training-data rights, indemnification.
  6. Cost: $15,000-$50,000 depending on existing DLP infrastructure and size of AI footprint.
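Step 2's tier-to-tool mapping boils down to an allowlist check. The four tiers are from the playbook above; the tool names and the specific mapping are hypothetical, a sketch of the rule shape rather than a recommended policy.

```python
# Map data classification tiers to the AI tools allowed to receive them.
# Tiers are from the playbook; tools and the mapping itself are hypothetical.
ALLOWED_TOOLS = {
    "Public":       {"ChatGPT", "Claude", "Copilot"},
    "Internal":     {"Claude", "Copilot"},  # enterprise-licensed tools only
    "Confidential": {"Copilot"},            # tenant-isolated deployment only
    "Restricted":   set(),                  # never enters an AI prompt
}

def input_allowed(tier: str, tool: str) -> bool:
    """True if data at this classification tier may go into this tool."""
    return tool in ALLOWED_TOOLS.get(tier, set())

print(input_allowed("Internal", "Copilot"))    # -> True
print(input_allowed("Restricted", "Copilot"))  # -> False
```

A table like this is also what the DLP rules in step 3 end up encoding, so writing it down first keeps policy and enforcement in sync.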

2 to 3: Active Management (6-12 months, $50,000-$200,000)

  1. Monitoring infrastructure. Centralized log aggregation for AI-tool usage. Shadow AI detection running.
  2. Vendor oversight cadence. Annual full review with quarterly check-ins for top-tier vendors.
  3. Incident response runbook plus tabletop. Build the runbook. Run a tabletop exercise. Measure mean time to contain.
  4. Quarterly governance review at executive level. Board-level metrics surfaced.
  5. Quarterly micro-training. Role-based curricula, competency tested.
  6. Cost: $50,000-$200,000 depending on monitoring tooling and headcount allocation.
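The Level 3 target of shadow AI under 10% of total AI spend is a single ratio, worth checking from the inventory each quarter. The tool records and spend figures below are hypothetical.

```python
# Shadow-AI spend ratio against the Level 3 target (< 10% of total AI spend).
# Tool names and spend figures are hypothetical.
tools = [
    {"name": "Copilot", "annual_spend": 48_000, "approved": True},
    {"name": "ChatGPT", "annual_spend": 30_000, "approved": True},
    {"name": "Gemini",  "annual_spend": 12_000, "approved": False},  # shadow
]

total = sum(t["annual_spend"] for t in tools)
shadow = sum(t["annual_spend"] for t in tools if not t["approved"])
shadow_ratio = shadow / total

meets_level_3 = shadow_ratio < 0.10
print(f"shadow AI share: {shadow_ratio:.1%}")   # -> shadow AI share: 13.3%
print("meets Level 3 target:", meets_level_3)   # -> meets Level 3 target: False
```

Spend is only a proxy for exposure, since free-tier tools carry risk at zero spend, so pair the ratio with the tool count from the inventory.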

3 to 4: Strategic Posture (12-18 months, $200,000+ per year ongoing)

  1. Continuous improvement cycle. Each quarter produces 2-4 named governance upgrades, tracked against outcomes.
  2. Insurance and regulatory upside. Document maturity in cyber insurance applications and regulatory filings. Cyber insurance underwriters are increasingly tying coverage and premium decisions to AI governance documentation; lack of documentation leads to coverage denials, premium increases, and affirmative AI exclusions (Aon AI Risk 2026; ISACA Cyber Insurance and AI Blind Spots, 2025).
  3. Governance as a sales asset. Customer-facing transparency report; governance posture used in enterprise RFP responses.
  4. Tie individual proficiency to org maturity. Roll out The 7 Levels of AI Proficiency assessment so individual development plans align with org governance posture.
  5. Predictive risk scoring. Use-case risk scoring at design time, not at deployment time.
  6. Cost: $200,000+ per year ongoing, offset by insurance reductions and faster regulatory clearance.

Indiana baseline: where the mid-market actually sits

The Indiana mid-market AI governance baseline sits at Level 0 to Level 1. An on-record interview with Bryce Carpenter (COO, Conexus Indiana) on the AI Ready Podcast (April 2026) quantified the disconnect: 80% of Indiana CEOs feel behind on AI, and the majority of Indiana mid-market manufacturers and distributors have no formal AI policy, no AI inventory, and no designated owner. This is the Ungoverned-to-Aware band. Cummins, Allison Transmission, Subaru of Indiana, and Toyota Material Handling all carry meaningful AI exposure with manufacturing IP at stake; few have published a Level 2 governance posture.

Indiana healthcare runs hotter, pulled to Level 2 by HIPAA. The 84% of healthcare organizations with an AI Governance Committee (Censinet 2025) maps to Indiana health systems including IU Health, Community Health Network, Eskenazi Health, and Parkview Health, all with named AI governance bodies. Operational maturity (active monitoring, tested incident response, audited vendor oversight) lags the committee by 12-24 months in most Indiana health systems. The Eli Lilly and Indiana University (IU) $40M partnership announced December 3, 2025, which includes building AI-enabled clinical trial infrastructure, raises the operational bar, because clinical-trial AI ties directly into IRB, FDA, and HIPAA oversight surfaces that already require Level 2 discipline at minimum.

Indiana state government formalized its position with Governor Braun's IN AI Initiative (announced April 28, 2026, executed under the CICP umbrella). The initiative is a Level 1 Aware-tier signal at the state level: policy exists, owner designated, enforcement informal. The Indiana posture also includes the in.gov/mph/AI three-tier (High, Moderate, Low Risk), NIST-anchored risk model overseen by the Office of the Chief Data Officer and the Chief Privacy Officer.

Indiana mid-market companies doing it well cluster around BioCrossroads-affiliated life-sciences firms and a few advanced manufacturers in the Conexus network. These are the Level 2-3 companies, and the count is in single digits.

How this model relates to The 7 Levels of AI Proficiency

The AI Governance Maturity Model is org-level. The 7 Levels of AI Proficiency is individual-level. Both are required, and they compose; one without the other produces predictable failure modes.

The pairing rules are observable. An organization at Governance Level 0 or Level 1 needs Level 3-4 individuals (Lieutenant or Commander) on the team building governance. Level 2 governance needs Level 4 individuals leading the build and Level 3 operating it. Level 3 governance needs Level 5 individuals (Strategist) designing it and Level 4 maintaining it. Level 4 governance needs Level 5-6 individuals driving the continuous improvement loop.
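The pairing rules reduce to a minimum-proficiency lookup per governance level. The mapping below encodes the rules as stated in the paragraph above; the team roster in the example is illustrative.

```python
# Minimum individual proficiency (7 Levels of AI Proficiency) needed on staff
# to build and sustain each governance level, per the pairing rules above.
MIN_PROFICIENCY = {0: 3, 1: 3, 2: 4, 3: 5, 4: 5}

def pairing_gap(target_gov_level: int, team_levels: list[int]) -> int:
    """How far the team's strongest individual falls short (0 = no gap)."""
    required = MIN_PROFICIENCY[target_gov_level]
    return max(0, required - max(team_levels))

# Illustrative team topping out at Level 3, aiming for Level 3 governance:
print(pairing_gap(3, [1, 2, 3]))  # -> 2 (two proficiency levels short)
```

A nonzero gap is the leading indicator named in the failure modes below it in the text: the governance build stalls before the individual shortfall is visible anywhere else.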

The failure modes the pairing catches are recurrent. An organization tries to advance to Level 2 governance without a Level 3+ individual on staff: the policy gets written but never operationalized. An organization reaches Level 3 governance but team median sits at Level 2: monitoring runs, but findings never get acted on. An organization claims Level 4 governance with no Level 5+ individuals: governance is performance theater, not adaptive capability.

The clean read for an executive is two assessments stacked. The org-level maturity score (this page) and the individual proficiency score (the assessment at assess.launchready.ai) together produce a complete view. Close whichever shortfall is biggest first. Related reading: How to measure AI readiness in a team.

Where to start your assessment today

Five practical steps to convert the 10-minute self-assessment into action this week.

  1. Score your organization across the 7 domains. Use the matrix above. Drop a level for any domain you cannot defend with documentation. Total = lowest score.
  2. Open the canonical reference. The AI Law Tracker governance page (ailawtracker.org/governance) carries the 5-level diagram and the 7-domain definitions in their canonical form.
  3. Identify the lowest-scoring domain. That is the weak link in the chain. Read the corresponding cluster spoke (shadow AI inventory, data classification, vendor liability, human oversight, transparency, incident response, AI literacy) for domain-specific operational guidance; all 7 spokes publish as part of this cluster.
  4. Take the individual proficiency assessment. The free 7 Levels of AI Proficiency assessment (assess.launchready.ai) places team members across 7 stages of individual capability. Compare the org maturity score to the team proficiency median. Mismatches are the leading indicators of governance build failure.
  5. Pick the smallest next move. If you are at Level 0, the smallest next move is naming an owner. If you are at Level 1, it is starting an inventory. Velocity beats scope at every stage.

Frequently asked questions

What is an AI Governance Maturity Model?

An AI Governance Maturity Model is a descriptive and diagnostic framework that places organizations on a 5-level scale (Ungoverned, Aware, Structured, Managed, Strategic) based on observable governance practices across 7 domains. The model surfaces where an organization sits today and what specific actions advance it to the next level. It is org-level, not individual-level.

How do I assess my company's AI governance maturity?

Use the 7-domain by 5-level matrix as a self-assessment instrument. For each of the 7 domains (AI Inventory, Data Protection, Vendor Management, Human Oversight, Transparency, Incident Response, Training), score from 0-4 based on what evidence you can produce. Org-level maturity equals your lowest domain score (chain-strength rule). Most organizations finish in 10 minutes.

How do I move from Level 1 to Level 2?

Stand up a formal tool-approval pipeline, implement data classification rules, deploy DLP controls on AI inputs, run annual mandatory training, and complete vendor reviews on the top 5 AI vendors. Industry consensus puts this advancement at 2-3 months for the build and 4-6 months for operational adoption, at $15,000-$50,000 for mid-market organizations.

What is the difference between AI maturity and AI governance maturity?

AI maturity measures how broadly an organization has deployed AI capability. AI governance maturity measures how well the organization controls and oversees that AI use. An organization can be high on AI maturity (deployed across functions) and low on governance maturity (no policy, no oversight). The 2025 CSA / Google Cloud finding that 60% of organizations plan to deploy agentic AI within 12 months while only 26% have comprehensive AI governance policies shows the readiness disconnect at scale.

How long does it take to reach Level 3 (Managed)?

Industry consensus puts the advancement from Level 1 to Level 3 at 12-24 months of sustained investment. Level 0 to Level 3 typically takes 18-36 months for mid-market organizations with executive support. The pace is governed less by implementation difficulty than by leadership attention and organizational change capacity.

Is Level 4 (Strategic) realistic for mid-market companies?

Yes, but rare today. For mid-market companies (100-5,000 employees), Level 4 typically requires 2-3 years of sustained investment, an embedded governance function (1-3 FTE), and tight integration with cyber insurance, regulatory operations, and sales operations. The ROI shows up in faster enterprise sales cycles and improved cyber insurance posture; underwriters are increasingly tying coverage and premium decisions to documented AI governance, and lack of documentation leads to coverage denials, premium increases, and affirmative AI exclusions (Aon AI Risk 2026; ISACA 2025).

What is a typical AI governance maturity score in 2026?

Industry data clusters most organizations at Level 0-1. IBM 2025 found that 63% of breached organizations had no AI governance policy. CSA / Google Cloud 2025 found that only 26% of organizations have comprehensive AI governance policies. The median sits between Ungoverned and Aware. Healthcare runs higher (Level 1-2, driven by HIPAA). Financial services runs higher still (Level 2, driven by existing model risk management discipline).

How does the AI Governance Maturity Model relate to The 7 Levels of AI Proficiency?

The maturity model is org-level. The 7 Levels of AI Proficiency is individual-level. Both are required. An organization advancing to Level 2 governance needs Level 3-4 individuals on staff to build it. An organization at Level 3 governance needs Level 5+ individuals to sustain it. Take both assessments to see where the organization sits and where individual capability sits, then close whichever shortfall is biggest first.

This article is informational only. It is not legal advice. Consult counsel before making compliance decisions.

Updated May 10, 2026.

Harrison Painter
AI Business Strategist. Founder, LaunchReady.ai and AI Law Tracker.

Harrison helps teams build AI systems that cut cost and grow revenue. Nearly twenty years of business experience. 2.8M YouTube views. Founder of LaunchReady.ai and the 7 Levels of AI Proficiency framework. Author of You Have Already Been Replaced by AI and The White-Collar Factory is Closing.

Connect on LinkedIn

Find your AI Proficiency level

The free 7 Levels of AI Proficiency assessment places you across seven stages of AI capability. Under ten minutes. Research-backed scoring. Pair the result with this org-level maturity score for a complete view.

Get the weekly briefing

LaunchReady Indiana delivers AI news, compliance updates, and case studies for Indiana leaders. Every Tuesday. Five minutes.

Subscribe free