AI Governance · Vendor Liability

AI vendor management and deployer liability: a 2026 CEO guide

The AI vendor file is becoming a legal control. Colorado's original AI law is paused, a replacement bill is on the governor's desk, and the EU AI Act high-risk timeline is shifting. The Indiana mid-market CEO still needs the vendor file fixed before enforcement arrives.

By Harrison Painter · Published May 12, 2026 · Updated May 12, 2026 · 9 min read
  • Multi-Jurisdiction
  • CO: SB 26-189 to Polis

TL;DR

  • The buyer can no longer hide behind the vendor. Providers still carry major design and compliance duties under the EU AI Act, but deployers now carry direct operational obligations that cannot be outsourced by contract.
  • EU AI Act Article 26 places twelve operational duties on deployers of high-risk AI systems (human oversight, input data relevance, log retention, monitoring, worker notification, and more), backed by Article 99 penalties of up to 15 million euros or 3 percent of worldwide annual turnover for many noncompliance categories, separate from the 35 million euros or 7 percent tier reserved for Article 5 prohibited AI practices.
  • Colorado's original SB 24-205 followed a similar shape, but enforcement was paused after xAI v. Weiser, and Colorado lawmakers have since sent SB 26-189 to Governor Polis as a repeal-and-replace framework focused on automated decision-making technology, with key obligations starting January 1, 2027 if enacted.
  • The Massachusetts Attorney General collected $2.5 million from Earnest Operations in July 2025 for AI-underwriting design choices the AG alleged produced disparate impact.
  • The Indiana CEO playbook: a 12-item due diligence checklist, 5 contract clauses written for counsel review, and the recognition that vendor selection is now a Level 5 plus Level 6 cross-functional decision in The 7 Levels of AI Proficiency.

The legal architecture flipped: buyers now carry more exposure than builders

In July 2025, the Massachusetts Attorney General announced a $2.5 million settlement with Earnest Operations LLC, a Delaware-based student-loan lender. The AG alleged that Earnest used AI underwriting models with two specific design choices that produced disparate impact: a "Cohort Default Rate" variable that penalized Black, Hispanic, and non-citizen applicants more often than white applicants, and a "Knockout Rule" that auto-denied applications based on immigration status. Earnest paid $2.5 million to the Commonwealth and agreed to stop using both the Cohort Default Rate variable and the immigration-status Knockout Rule.

That outcome is the working example of the architecture every Indiana mid-market CEO needs to read this quarter. Under EU AI Act Article 26, deployers of high-risk AI systems carry twelve operational obligations directly: human oversight, input data governance, six-month log retention, worker notification, monitoring with provider notification, and cooperation with market surveillance authorities; Article 27 layers a fundamental rights impact assessment on top for certain deployers of high-risk systems. Article 99 sets a 15 million euros or 3 percent of worldwide annual turnover tier for many noncompliance categories tied to high-risk AI system obligations, separate from the 35 million euros or 7 percent tier reserved for Article 5 prohibited AI practices. The high-risk compliance timeline is now in transition: EU lawmakers reached a provisional agreement in May 2026 that would defer many standalone Annex III high-risk AI obligations to December 2, 2027 and certain embedded high-risk obligations in regulated products to August 2, 2028. Companies should still design their vendor files toward Article 26 now, because procurement and implementation cycles will outlast the delay.

Colorado followed a similar shape at the state level with SB 24-205, but enforcement was paused on April 27, 2026 following xAI v. Weiser, and SB 26-189 has since been sent to Governor Polis as a repeal-and-replace framework focused on automated decision-making technology, with key obligations starting January 1, 2027 if enacted. Indiana HB 1271 (signed by Governor Braun on March 4, 2026; effective July 1, 2026) restricts AI-only adverse determinations in healthcare claims. California SB 942 (operative August 2, 2026 after AB 853) is a different kind of regime: it focuses more on generative-AI transparency, detection, and watermarking obligations than on deployer liability for consequential decisions. Across the deployer-liability frame, each statute distributes most of the operational obligations to the company using the AI in regulated decisions, not the company that built it.

The architecture changed structurally. The 2024 reading, "we are the user, the vendor is responsible," has been replaced by a regulatory architecture in which the deployer cannot transfer compliance obligations by contract. The deployer can negotiate downstream cost recovery through indemnification. The deployer cannot make the vendor responsible for the deployer's regulatory duty.

What is a "deployer" under AI law

EU AI Act Article 3(4) defines a deployer as "a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity." Plain English: the company or entity that puts the AI to work in its own operations, regardless of who built it. Colorado SB 24-205 mirrors this definition. The distinction among provider, developer, deployer, and user is operative because each category carries different obligations.

  • Provider (EU Article 3(3)) and developer (commonly synonymous): the entity that builds, trains, or substantially modifies the AI system. Faces design, testing, transparency, and conformity-assessment obligations.
  • Deployer (EU Article 3(4)): the entity that uses the AI system under its own authority for professional purposes. Colorado uses the same term. Faces operational obligations including human oversight, monitoring, log retention, and consumer disclosure.
  • User: in EU AI Act usage, often refers to the individual end-user of the system. Many older US frameworks conflate "user" with "deployer," which creates legal ambiguity. The 2026 vocabulary is provider/deployer.

Most US mid-market companies are deployers, not developers, even when they fine-tune or customize an existing model. A company crosses into provider territory when its modifications are substantial enough to materially change the AI system's intended purpose under EU Article 25.

The 12-item AI vendor due diligence checklist

Run this checklist before signing any AI vendor contract. Tier 1 items are non-negotiable. Tier 2 items are negotiable based on risk and vendor posture.

Tier 1: non-negotiable

  1. Training data sources and opt-out. Vendor discloses in writing the categories of data used to train the model. Vendor commits in writing not to train on deployer's data without explicit, separately captured opt-in. The product's default configuration must have training on deployer data disabled.
  2. Data residency, retention, and deletion. Vendor specifies where deployer data is stored, processed, and backed up by region. Vendor commits to deletion within a defined window (30 to 90 days) on contract termination, with written certification.
  3. Tenant isolation guarantees. For multi-tenant SaaS AI: vendor describes the technical mechanism preventing one customer's data from surfacing in another customer's outputs. Logical separation alone is insufficient for high-risk uses.
  4. Audit log access. Deployer receives query-level audit logs sufficient to satisfy EU AI Act Article 26 six-month log retention and to reconstruct any contested AI output for at least 12 months.
  5. DPA and BAA availability. Vendor offers an AI-specific Data Processing Agreement. Healthcare deployers require a Business Associate Agreement (HIPAA). Vanilla DPAs are structurally inadequate for AI-specific risks.
  6. Notification timeline for breaches and security incidents. Vendor commits to notify deployer within 72 hours of becoming aware of a security incident affecting deployer data, regardless of the vendor's own forensic confirmation timeline.

Tier 2: negotiable

  1. Indemnification for IP infringement in outputs. Vendor indemnifies deployer for third-party IP claims arising from AI outputs. Market standard in 2026 is uncapped or carved out of the liability cap. The cap treatment is negotiable; the existence of the indemnity is not: walk away if the vendor refuses any IP indemnity.
  2. Indemnification for output-caused liability. Vendor indemnifies deployer for harm caused by vendor's failure to meet documented model-quality standards. This is vendor responsibility for vendor's product failure, not vendor responsibility for deployer's misuse.
  3. SOC 2 Type II + ISO 27001 + ISO 42001 certifications. SOC 2 Type II report current within 12 months. ISO 27001 in place for security management. ISO 42001 in place or actively in implementation for AI-specific governance.
  4. Right to terminate on regulatory change. Deployer can terminate without penalty if a new law materially changes the vendor's obligations such that the vendor cannot continue to deliver compliant service within a defined cure window (typically 90 days).
  5. AI-specific liability cap. Negotiate up from the standard "12 months of fees" cap. For high-risk use, target 2 to 3 times annual fees with carve-outs above the cap for IP infringement, data breach, and gross negligence.
  6. Sub-processor list and change notification. Vendor publishes its current sub-processor list (the third parties the vendor uses for hosting, model APIs, embedding services). Vendor provides 30-day advance notice of any sub-processor change with deployer right to object.
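The tiered checklist above can be encoded as a simple procurement gate so no vendor reaches signature with an unmet Tier 1 item. A minimal Python sketch, illustrative only: the item keys and the block/negotiate/proceed labels are this article's framing, not any statutory or vendor-questionnaire schema.

```python
# Encode the 12-item vendor checklist as data so a review can be gated
# programmatically. Tier 1 failures block signature; Tier 2 failures are
# flagged for counsel and negotiation.

TIER_1 = [  # non-negotiable
    "training_data_disclosure_and_opt_out",
    "data_residency_retention_deletion",
    "tenant_isolation",
    "audit_log_access",
    "dpa_baa_available",
    "breach_notice_72h",
]

TIER_2 = [  # negotiable based on risk and vendor posture
    "ip_indemnity_outputs",
    "output_liability_indemnity",
    "soc2_iso27001_iso42001",
    "regulatory_change_termination",
    "ai_specific_liability_cap",
    "subprocessor_list_and_notice",
]

def review_vendor(answers: dict[str, bool]) -> tuple[str, list[str]]:
    """Return ('block' | 'negotiate' | 'proceed', list of failed items).

    Missing answers are treated as failures: an undocumented control
    is an unmet control for due-diligence purposes.
    """
    tier1_failures = [item for item in TIER_1 if not answers.get(item, False)]
    tier2_failures = [item for item in TIER_2 if not answers.get(item, False)]
    if tier1_failures:
        return "block", tier1_failures
    if tier2_failures:
        return "negotiate", tier2_failures
    return "proceed", []
```

Wired into a procurement intake form, a gate like this makes the Tier 1 / Tier 2 distinction operational: the checklist stops being a document someone reads once and becomes a condition the signature workflow enforces.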

5 contract clauses every AI vendor agreement needs

Five clauses to insist on. Template language only. Indiana counsel review required before any contract execution. These are starting positions for negotiation, not final language.

Clause 1: AI Output Liability

Vendor warrants that the AI Service will perform substantially in accordance
with the technical documentation provided to Deployer. Vendor shall defend,
indemnify, and hold Deployer harmless from and against any third-party claims,
losses, damages, fines, and reasonable attorneys' fees arising from
(a) defective AI Outputs that result from Vendor's failure to meet documented
model performance specifications, or
(b) AI Outputs that constitute discrimination on the basis of any protected
characteristic under applicable federal or state law where Vendor controlled
the relevant model parameters.
This indemnification is subject to the liability cap in Section [X] except
for claims arising from Vendor's gross negligence or willful misconduct,
which are uncapped.

Clause 2: Training Data Restriction

Vendor shall not use Deployer Data, including any inputs, prompts, queries,
system instructions, or AI Outputs generated for Deployer, to train, fine-tune,
evaluate, or otherwise improve any AI model offered to any party other than
Deployer, without Deployer's prior, separately captured, written opt-in consent.
Vendor's default product configuration shall not enable such training.
Vendor shall represent in writing the categories of data used to pretrain the
AI Service as of the Effective Date, and shall notify Deployer of any material
change to the pretraining data sources at least sixty (60) days in advance.

Clause 3: Audit and Inspection Rights

Upon thirty (30) days' written notice and no more than once per twelve-month
period (except in response to a security incident or regulatory inquiry),
Deployer or its designated third-party auditor may inspect Vendor's records,
systems, controls, and certifications relevant to the AI Service to confirm
Vendor's compliance with this Agreement, applicable data protection law, and
AI-specific regulatory obligations.
Vendor shall provide query-level audit logs of Deployer's use of the AI Service
for a rolling twelve (12) month period.
Vendor shall maintain SOC 2 Type II reporting current within twelve (12) months
and shall provide the most recent report under standard non-disclosure terms.

Clause 4: Regulatory Change Termination

If a change in applicable law, regulation, or binding regulatory guidance
materially alters Vendor's or Deployer's obligations under this Agreement
such that Vendor cannot continue to deliver the AI Service in a manner that
is compliant for Deployer's intended use, Vendor shall notify Deployer
promptly and use commercially reasonable efforts to remediate within
ninety (90) days.
If remediation is not achieved within that period, Deployer may terminate
the affected portion of the Agreement without further liability and shall be
entitled to a pro-rata refund of any pre-paid fees attributable to the
remaining term.

Clause 5: IP Indemnification (Outputs)

Vendor shall defend, indemnify, and hold Deployer harmless from and against
any third-party claims that Deployer's authorized use of the AI Service or
AI Outputs infringes or misappropriates a third party's patent, copyright,
trademark, or trade secret rights, including reasonable attorneys' fees and
any final award of damages or settlement amount.
This obligation is uncapped and survives termination.
Vendor's obligation does not apply to claims arising from
(a) Deployer's modification of AI Outputs in a manner that introduces the
alleged infringement,
(b) Deployer's combination of AI Outputs with materials not provided or
approved by Vendor where the combination is the basis of the claim, or
(c) Deployer's use of the AI Service after Vendor has provided a non-infringing
replacement and Deployer has elected not to deploy it.

Counsel review required. Specific deal economics, risk tolerance, vendor posture, and applicable law will change the negotiation. Consult Indiana counsel and counsel licensed in any other relevant jurisdiction before relying on any of this language.

Indiana operators: where the exposure concentrates

Indiana's AI vendor exposure picture has four practical dimensions.

Cross-border employment AI. Indiana mid-market companies recruiting from Colorado, Illinois, or California using third-party AI screening tools (Eightfold, Workday's AI features, HireVue, Paradox, others) inherit those states' AI employment regimes. Colorado's SB 24-205 (now in transition under SB 26-189) expressly covered employment AI. Illinois HB 3773 amended the Illinois Human Rights Act effective January 1, 2026, and implementing rules require notice when AI is used to influence or facilitate covered employment decisions. An Indiana company posting a job that a Coloradan or Illinoisan applies to is a deployer under those states' frameworks for that decision.

Healthcare AI under Indiana HB 1271. Signed by Governor Braun on March 4, 2026; effective July 1, 2026. Indiana insurers, HMOs, third-party contractors, and providers using automated tools in claim submission, claim review, downcoding, or adverse determinations need to check whether their workflow satisfies HB 1271's human-review requirements. The bill requires human review of medical records before downcoding and prohibits providers from using automated systems to submit claims without provider review. Affected operators include health plans, TPAs, and provider organizations across Indiana, including Anthem Indiana plus the roughly 190 Indiana-domiciled health plans and TPAs operating in claims workflows. The vendor selection question becomes: does this AI vendor support a documented human-review workflow that satisfies HB 1271's mandate? Many do not.

Extraterritorial EU AI Act exposure. Indiana's tech and SaaS sector (Salesforce's Indianapolis presence, Genesys, Seismic Learning (formerly Lessonly), OneCause, Springbuk, Roundtable Learning, others) sells into EU customers. Any AI feature whose output reaches an EU end-user puts the Indiana company in scope under EU AI Act Article 2. The question is not "do you have an EU office," it is "does your AI output appear in the EU." A non-EU provider of a high-risk AI system must designate an EU-established authorized representative under Article 22; Indiana-based deployers carry the operational obligations under Article 26 once the AI system is in use.

Indiana AG enforcement posture. Indiana Attorney General Todd Rokita joined a 36-state bipartisan coalition in November 2025 opposing federal preemption of state AI laws, signaling that Indiana is not ceding AI enforcement authority. Indiana has not yet brought an AI-specific enforcement action under existing consumer protection or unfair trade practice statutes. Massachusetts ($2.5 million Earnest Operations settlement, July 2025), Texas (multiple AG actions on deceptive AI marketing claims), New York (active investigations into AI-driven hiring), and California (Mobley v. Workday, a federal ADEA collective action in the Northern District of California that won preliminary collective certification on May 16, 2025) have all moved. My read of the pattern: Indiana is positioned to follow rather than lead, but follow nonetheless. The planning input: assume an Indiana AG AI enforcement action arrives in the next 18 to 24 months on a high-visibility consumer harm case (likely healthcare AI under HB 1271 or employment AI). Operate accordingly.

How vendor liability fits the 7-domain governance framework

AI vendor management is Domain 3 of the 7 domains of AI governance. It depends on three other domains being in place first.

  • Domain 1 (Inventory). A company that does not know which AI vendors it uses cannot run vendor due diligence. The shadow AI inventory work is the precondition.
  • Domain 2 (Data Classification). A vendor due diligence question (where does deployer data go?) only has an answer if the deployer knows what data it is sending. The data classification work is the precondition for the residency, retention, and tenant-isolation clauses.
  • Domain 4 (Human Oversight). EU AI Act Article 26 obligation #2 (human oversight) and Indiana HB 1271 (no AI-only adverse determinations) both require operational human-review workflows. Vendor selection has to confirm the vendor's product supports that human-oversight discipline.

Run the vendor liability work without the inventory work and the company is negotiating contracts for systems it does not know it has. Run the inventory work without the vendor liability work and the company has a complete list of regulatory exposures with no remediation plan. Both need to move at the same time.

How The 7 Levels of AI Proficiency integrates

Vendor selection and contract negotiation sit at the intersection of organizational compute strategy and cross-functional governance. Two levels of The 7 Levels of AI Proficiency carry the work.

Level 5: Captain. Designs the organization's AI architecture. Decides which categories of AI vendors the company will buy (foundation model APIs, AI-augmented SaaS, vertical-specific agents, voice and document processing). Sets the policy on training-data opt-in, data residency, certification floor, and contract template terms that every vendor selection must clear. The 12-item due diligence checklist above is a Level 5 artifact.

Level 6: Admiral. Runs the cross-functional vendor-management discipline. Coordinates legal review with security review with procurement review with the operating-team's intended-use review. An AI vendor decision in 2026 sits inside a four-function review (legal, security, IT, operating sponsor) before any procurement signature. The Admiral runs that meeting and owns the contract clause negotiations against the vendor's standard paper.

The vendor management discipline is one of the practical surfaces where The 7 Levels of AI Proficiency shows up as observable behavior, not as a self-reported skill. A company at Level 4 or below will sign vendor paper as written. A company at Level 5 or 6 will redline, negotiate, and walk away from inadequate paper. Buyer-side advantage is what the deployer earns by doing this work; it is not a default position.

Sources

  1. European Union. "Article 26: Obligations of Deployers of High-Risk AI Systems." EU Artificial Intelligence Act. artificialintelligenceact.eu/article/26
  2. European Union. "Article 99: Penalties." EU Artificial Intelligence Act. artificialintelligenceact.eu/article/99
  3. European Commission. "AI Act Service Desk: Article 26." ai-act-service-desk.ec.europa.eu
  4. Colorado General Assembly. "SB24-205: Consumer Protections for Artificial Intelligence." leg.colorado.gov/bills/sb24-205
  5. National Association of Attorneys General. "A Deep Dive into Colorado's Artificial Intelligence Act." naag.org Colorado AI Act
  6. California Legislative Information. "SB 942: California AI Transparency Act." leginfo.legislature.ca.gov SB 942
  7. Troutman Pepper Locke. "California AI Transparency Act Amendments Signed Into Law." October 2025. troutmanprivacy.com SB 942 amendments
  8. Mass.gov. "AG Campbell Announces $2.5 Million Settlement With Student Loan Lender." July 2025. mass.gov Earnest settlement
  9. BankInfoSecurity. "Court: UnitedHealth Must Answer for AI-Based Claim Denials." March 2026. bankinfosecurity.com UHC discovery order
  10. ArentFox Schiff. "Federal Court Orders Broad Discovery Against UHC in AI Coverage Denial Lawsuit." March 2026. afslaw.com UHC AI discovery
  11. NCSL. "Summary of Artificial Intelligence 2025 Legislation." ncsl.org AI 2025 legislation
  12. Jimerson Birr. "AI Litigation Trends 2025: How to Protect Your Business." December 2025. jimersonfirm.com AI litigation 2025
  13. European Union. "Article 2: Scope." EU Artificial Intelligence Act. artificialintelligenceact.eu/article/2
  14. European Union. "Article 22: Authorised Representatives of Providers of High-Risk AI Systems." EU Artificial Intelligence Act. artificialintelligenceact.eu/article/22
  15. European Union. "Article 25: Responsibilities Along the AI Value Chain." EU Artificial Intelligence Act. artificialintelligenceact.eu/article/25
  16. White & Case. "The EU AI Act's Extraterritorial Scope." whitecase.com EU AI Act scope
  17. Mobley v. Workday, Inc. Case 3:23-cv-00770 (N.D. Cal.). "Order Granting Preliminary Collective Certification" (May 16, 2025). clearinghouse.net Mobley v. Workday
  18. Indiana Governor Mike Braun. "2026 Bill Watch" (HB 1271 signed March 4, 2026). in.gov 2026 Bill Watch
  19. Indiana General Assembly. "HB 1271 (2026): Health Benefit AI Downcoding." iga.in.gov HB 1271
  20. ISO. "ISO/IEC 42001:2023: AI Management Systems." iso.org/standard/42001
  21. NIST. "AI Risk Management Framework (AI 100-1)." nvlpubs.nist.gov NIST AI RMF

Frequently asked questions

Who is liable when an AI vendor's model fails?

The deployer carries the primary legal exposure under the EU AI Act, Colorado SB 24-205, and most state consumer protection statutes. The Massachusetts Attorney General's $2.5 million settlement with Earnest Operations in July 2025 is the working example: Earnest used AI underwriting models the AG alleged produced disparate impact against Black, Hispanic, and non-citizen applicants (via the Cohort Default Rate variable) and auto-denied applicants based on immigration status (via the Knockout Rule). Earnest paid the settlement. Vendor contracts can transfer some downstream costs through indemnification, but they cannot transfer the regulatory obligation to comply.

What is a deployer under the EU AI Act?

EU AI Act Article 3(4) defines a deployer as any natural or legal person, public authority, agency, or other body using an AI system under its authority for professional purposes. In plain terms: the company that puts the AI to work in its operations, regardless of who built or sold it. Article 26 places twelve concrete operational obligations on deployers of high-risk AI systems, including human oversight, input data governance, log retention, monitoring, and worker notification. The high-risk compliance timeline is in transition: EU lawmakers reached a provisional agreement in May 2026 that would defer many standalone high-risk AI obligations to December 2, 2027 and certain embedded high-risk obligations to August 2, 2028. Companies should still design vendor files toward Article 26 now, because procurement and implementation cycles will outlast the delay.

Does the Colorado AI Act apply to my Indiana company?

Possibly yes. Colorado SB 24-205 applies to deployers and developers of high-risk AI systems that affect Colorado residents. An Indiana company recruiting from Colorado, providing financial services to Colorado consumers, or selling healthcare or insurance services in Colorado is in scope when AI is used in those decisions. The original SB 24-205 was scheduled to take effect June 30, 2026 (delayed from February 1, 2026), but enforcement was paused following xAI v. Weiser, and Colorado lawmakers have advanced SB 26-189 as a repeal-and-replace framework focused on automated decision-making technology. The replacement bill moves key obligations to January 1, 2027 if enacted and preserves the right to request meaningful human review and reconsideration after an adverse covered decision.

What should be in an AI vendor contract?

At minimum: a no-train-on-our-data clause, an AI-specific Data Processing Agreement, audit log access for at least 12 months, IP infringement indemnification (uncapped if possible), output liability indemnification for vendor model failures, SOC 2 Type II + ISO 27001 + ISO 42001 certification commitments, regulatory change termination right, sub-processor disclosure with 30-day change notice, and breach notification within 72 hours. Standard SaaS contracts typically lack most of these. Indiana counsel review is required before signature.

Do I need a Data Processing Agreement for ChatGPT Enterprise or similar?

Yes. ChatGPT Enterprise, Claude for Work, Microsoft Copilot, and Google Workspace AI features all process the deployer's data. Each offers an AI-specific or AI-aware DPA. Verify that the DPA addresses training-data restrictions (the consumer-tier defaults often allow training; enterprise tiers typically do not), data residency, retention, and sub-processor disclosure. A DPA written for non-AI SaaS use is structurally inadequate for AI-specific risks.

What AI vendor certifications carry weight?

Three certifications. SOC 2 Type II (operational security controls, audited over a defined period), ISO 27001 (Information Security Management System), and ISO 42001 (the AI Management System standard, published December 2023). The three are complementary, not substitutes. ISO 42001 is becoming a procurement standard reference for AI-management systems certification. Adoption is early; a CEO treating it as a pass/fail vendor filter today is measuring vendors against where the market is heading rather than where it stands. A vendor with SOC 2 Type II + ISO 27001 has security; add ISO 42001 for AI-specific governance.

Who enforces AI laws in Indiana?

The Indiana Attorney General has primary enforcement authority for state consumer protection statutes that reach AI-driven harms. Indiana AG Todd Rokita joined a 36-state bipartisan coalition in November 2025 opposing federal preemption of state AI laws, signaling Indiana intends to keep that authority. Indiana has not yet brought a high-profile AI-specific enforcement action, but the precedent across Massachusetts, Texas, New York, and California suggests Indiana action is likely within the next 18 to 24 months on a healthcare AI or employment AI case.

What is the difference between a developer and a deployer?

A developer (sometimes called the provider in EU AI Act terminology) builds, trains, or substantially modifies the AI system. A deployer uses the AI system under its own authority for its own professional purposes. The distinction controls which obligations apply. Providers face design, testing, transparency, and conformity-assessment obligations. Deployers face operational obligations including human oversight, monitoring, log retention, and consumer disclosure. Most US mid-market companies are deployers, not developers, even when they fine-tune or customize an existing model.

Harrison Painter
AI Business Strategist. Founder, LaunchReady.ai and AI Law Tracker.

Harrison helps Indiana leaders build AI systems that cut cost and grow revenue. Founder of LaunchReady.ai and the 7 Levels of AI Proficiency framework. Author of You Have Already Been Replaced by AI and The White-Collar Factory is Closing.

Connect on LinkedIn

Track AI legislation as it moves

AI Law Tracker covers every active federal and state AI bill in plain English. Daily updates. Indiana-flagged.

Get the weekly briefing

LaunchReady Indiana delivers AI news, compliance updates, and case studies for Indiana leaders. Every Tuesday. Five minutes.

Subscribe free