Blog

State AI Laws Are No Longer Background Noise: What Washington and Colorado Mean for Your Compliance Program

If you run a compliance program in 2026, you have a new operational reality: state legislatures are no longer waiting on federal agencies to define the rules of the road for artificial intelligence. They are writing the rules themselves, and they are doing so in ways that address the day-to-day mechanics of product design, customer communications, safety operations, and third-party governance. Two developments illustrate the direction of travel.

First, the state of Washington has been advancing legislation aimed at “companion” style conversational AI, meaning systems designed to sustain ongoing dialogue with users in a way that resembles a relationship rather than a single transaction. These proposals generally focus on transparency, user protection, and special safeguards for minors, including restrictions around sexual content and stronger expectations for detecting and responding to self-harm signals.

Second, Colorado has enacted a broad AI governance framework focused on preventing algorithmic discrimination in high-impact use cases. The details matter, but the theme matters more: organizations that develop or deploy certain AI systems will be expected to show their work through risk management, impact assessments, notices, and documentation that can withstand regulatory scrutiny.

For compliance professionals, the key point is this: these are not “AI policy” conversations. These are operational controls conversations. They will change what your teams build, how they monitor, and how they document decisions.

1. Washington

Companion chatbots move from UX decision to regulated interaction.

Washington’s companion-chatbot approach targets the behavioral reality of these systems. A chatbot that answers a question is one thing; a chatbot designed to keep a user engaged, build intimacy, and act as a persistent presence is another. When a system is positioned as a “partner” in any form, the risk profile shifts from information quality to user safety, manipulation, dependency, and minors’ exposure. From a compliance standpoint, this is where you should focus:

1. Identity and disclosure are now control requirements, not marketing choices.

If your product presents as conversational, personable, or relationship-like, you should treat “clear disclosure that the user is interacting with AI” as a baseline control. Do not bury it in terms and conditions. Put it in the flow where the user forms expectations.

2. Minor protections move into engineering and content governance.

If you have minor users, or you cannot reliably exclude them, you need controls designed for minors by default. That means age gating where appropriate, content filters tuned for sexual content and grooming patterns, and escalation playbooks for self-harm indicators. It also means you should think about what “engagement optimization” looks like in a relationship-shaped interface. Features that are acceptable in a shopping cart can be unacceptable in a companion dynamic.
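
To make “minor-safe by default” concrete, one pattern is to resolve each session’s content policy from verification status rather than from self-reported age, failing closed whenever verification is absent. The sketch below is illustrative only; the field names, the 18-year threshold, and the resolve_policy helper are assumptions, not anything the Washington proposals prescribe.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContentPolicy:
    """Hypothetical per-session content policy, resolved at session start."""
    allow_sexual_content: bool
    grooming_filter_enabled: bool
    engagement_nudges_enabled: bool  # streaks, re-engagement pings, etc.

def resolve_policy(age_verified: bool, verified_age: int | None) -> ContentPolicy:
    # Default-deny: any session we cannot confidently age-verify is
    # treated as a minor session, never as an adult one.
    is_adult = age_verified and verified_age is not None and verified_age >= 18
    return ContentPolicy(
        allow_sexual_content=is_adult,
        grooming_filter_enabled=not is_adult,  # always on for minors/unknown
        engagement_nudges_enabled=is_adult,    # no dependency loops for minors
    )
```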

3. Self-harm response is an operational readiness question.

If your system can detect self-harm language, you must decide what you will do when you detect it. You need a triage policy, documentation of thresholds, and a human-in-the-loop escalation route when risk is elevated. The compliance failure here is not a false positive. The failure is having no plan, no logging, and no accountable owner when the system raises a signal.
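
Here is a minimal sketch of what that documented triage route can look like in code. The thresholds, action labels, and owner string are hypothetical placeholders; the point is that thresholds, logging, and an accountable owner exist before the first signal ever fires.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("selfharm_triage")

# Hypothetical thresholds; your clinical advisors and risk committee
# should set and document the real values.
REVIEW_THRESHOLD = 0.40    # queue for human review
ESCALATE_THRESHOLD = 0.75  # page the on-call safety responder

def triage(session_id: str, risk_score: float) -> str:
    """Map a model-produced self-harm risk score to a documented action."""
    if risk_score >= ESCALATE_THRESHOLD:
        action = "escalate_to_human"   # human-in-the-loop route
    elif risk_score >= REVIEW_THRESHOLD:
        action = "queue_for_review"
    else:
        action = "log_only"
    # Every signal is logged with a timestamp and an accountable owner,
    # so "no plan, no logging, no owner" cannot happen by default.
    logger.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "session": session_id,
        "score": risk_score,
        "action": action,
        "owner": "safety-ops-oncall",
    }))
    return action
```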

What to do now: create a “companion AI” product classification and require enhanced safeguards if the product meets that definition. That classification step is a compliance control because it forces consistent governance. It prevents the slow drift from “helpful assistant” to “companion” without any risk re-assessment.
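
A classification control can be as simple as a fixed mapping from product class to required safeguards, set once by governance rather than negotiated per launch. The class names and control labels below are illustrative assumptions, not a regulatory taxonomy.

```python
from enum import Enum

class ProductClass(Enum):
    INFORMATIONAL = "informational_assistant"
    TRANSACTIONAL = "transactional_assistant"
    COMPANION = "companion_conversational"
    HIGH_IMPACT = "high_impact_decision_support"

# Hypothetical control catalog keyed by classification. Reclassifying a
# product automatically changes its required safeguards, which is what
# prevents the quiet drift from "assistant" to "companion."
REQUIRED_CONTROLS = {
    ProductClass.INFORMATIONAL: {"ai_disclosure"},
    ProductClass.TRANSACTIONAL: {"ai_disclosure", "logging"},
    ProductClass.COMPANION: {
        "ai_disclosure", "logging", "age_gating",
        "self_harm_escalation", "quarterly_attestation",
    },
    ProductClass.HIGH_IMPACT: {
        "ai_disclosure", "logging", "impact_assessment",
        "bias_testing", "quarterly_attestation",
    },
}

def controls_for(product: ProductClass) -> set[str]:
    return REQUIRED_CONTROLS[product]
```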

2. Colorado

Anti-discrimination AI controls that read like a compliance program.

Colorado’s AI governance approach is a preview of what many states may do next: treat AI as a source of civil rights risk and require organizations to demonstrate reasonable care. The thrust is simple: if you use AI in a high-impact context, you should be able to explain how you prevent discriminatory outcomes and monitor for them. Even if you do not operate in Colorado, this framework is a gift to compliance professionals because it translates AI risk into familiar compliance artifacts. Here is how to map it into your program:

1. Define “high-impact” use cases the way you define “high-risk” third parties.

High-impact areas usually include employment, housing, credit, insurance, education, and other contexts where decisions materially affect individuals. Build an inventory. You cannot govern what you do not list. Make the business identify which systems are used for screening, ranking, eligibility, pricing, or access.

2. Require an impact assessment that reads like a control memo.

Your impact assessment should not be a philosophical essay. It should answer concrete questions:

  • What decision does the system influence?
  • What data does it use, and what data does it not use?
  • What bias testing was performed and how often?
  • What performance drift indicators are monitored?
  • What human review exists, and when does it trigger?
  • What is the consumer notice process and the appeal or correction route?

Treat this like any other compliance documentation: consistent format, accountable owner, version control, and retention.
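
One way to enforce that discipline is to make the assessment a structured record whose fields mirror the questions above, so an incomplete answer mechanically blocks sign-off. This is a sketch with hypothetical field names, not a prescribed Colorado format.

```python
from dataclasses import dataclass, fields

@dataclass
class ImpactAssessment:
    """One record per high-impact system; fields mirror the questions above."""
    system_name: str
    decision_influenced: str         # what decision does the system influence?
    data_used: str
    data_excluded: str               # explicitly, what data is NOT used
    bias_testing_summary: str        # what was tested, and how often
    drift_indicators: str            # monitored performance-drift signals
    human_review_trigger: str        # when a person reviews the output
    consumer_notice_and_appeal: str  # notice, appeal, and correction route
    accountable_owner: str           # a named person, not a team
    version: str

def ready_for_signoff(a: ImpactAssessment) -> list[str]:
    """Return the questions still unanswered; an empty list means sign-off ready."""
    return [f.name for f in fields(a) if not getattr(a, f.name).strip()]
```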

3. Put vendors inside your governance perimeter.

If a vendor supplies the model, you still own the outcome when you deploy it. Require contractual commitments around testing, documentation, model changes, incident notification, and audit rights. If the vendor refuses basic transparency, your risk posture should treat that as a red flag, not a procurement inconvenience.

4. Align to enforcement reality.

In many regulatory regimes, enforcement is driven by documentation and reasonableness. Your program should be able to show a regulator what you did before an incident, not only what you did after a complaint.

3. The Shared Lesson: AI Governance Is Becoming User-Safety Governance

Washington and Colorado might look different, but the compliance lesson is the same: regulators are moving toward protecting individuals from AI-enabled harm, whether that harm is discrimination in consequential decisions or manipulation and exposure risks in relationship-shaped systems. This means your program needs three capabilities:

Capability 1: Inventory with purpose.

Create a single inventory that captures system type, purpose, user population, training and input data sources, and whether the system affects rights, access, or safety. Assign an owner for each system. An owner is not a team. It is a named person.
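
A minimal sketch of such an inventory record follows; the field names are assumptions, but note the validation step, which enforces “a named person, not a team” mechanically rather than by convention.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    system_id: str
    system_type: str           # e.g. "companion_conversational"
    purpose: str
    user_population: str       # including whether minors are reachable
    data_sources: str          # training and input data sources
    affects_rights_or_safety: bool
    owner: str                 # a named person

def validate_owner(record: AISystemRecord, known_people: set[str]) -> None:
    # The owner must resolve to a person in the HR directory,
    # not a team alias or a distribution list.
    if record.owner not in known_people:
        raise ValueError(
            f"{record.system_id}: owner {record.owner!r} is not a named person"
        )
```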

Capability 2: Controls embedded in product and operations.

Disclosure is a product control. Age gating is a product control. Self-harm escalation is an operations control. Bias testing is a model governance control. Logging is a forensic control. Compliance must stop treating these as “engineering decisions” and start treating them as “regulatory controls.”

Capability 3: Incident readiness built for AI.

You need a playbook for AI incidents: model drift, unsafe exposure to content, discriminatory outcomes, vendor model changes, prompt injection leading to harmful outputs, and data leakage through conversational interfaces. The playbook should include detection, triage, communications, remediation, and documentation.

A practical checklist you can implement next week

  1. Classify systems into: informational assistant, transactional assistant, companion-style conversational system, and high-impact decision support.
  2. Assign owners and require quarterly attestations for high-impact and companion categories.
  3. Standardize disclosures with a template approved by legal, compliance, and product.
  4. Implement minor safeguards as a default where age cannot be verified with confidence.
  5. Create a self-harm escalation protocol with thresholds, human review steps, and logging requirements.
  6. Run bias testing on high-impact systems, document the results, and set drift triggers.
  7. Update vendor contracts to require transparency, change-control notifications, and audit support.
  8. Build an AI incident response runbook and conduct a tabletop exercise with product, legal, and customer support teams.

Closing thought

Compliance professionals have been waiting for the “AI rulebook.” The states are writing it in real time. The most effective response is not to wait for perfect clarity. It is to install governance that scales: inventory your systems, document assessments, embed controls, and ensure incident readiness. If you do those four things well, Washington and Colorado will not feel like surprise mandates. They will feel like confirmation that you built the right program early.

AI Today in 5

AI Today in 5: March 3, 2026, The First AI Agent Payment Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five AI stories from the business world, compliance, ethics, risk management, leadership, or general interest to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. Rethinking the financial crime stack through AI. (FinTechGlobal)
  2. CA AG opposes Trump Administration attempts to gut AI law. (CA DOJ)
  3. First AI agent payment. (FinTechMagazine)
  4. Automating reg compliance with AI. (Bits&Chips)
  5. FCA review of AI in the UK financial sector. (GlobalComplianceNews)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

AI Today in 5

AI Today in 5: March 2, 2026, The Silent Failure at Scale Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five AI stories from the business world, compliance, ethics, risk management, leadership, or general interest to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. AI rewriting compliance governance. (FinTechGlobal)
  2. Where AI, Security, and Compliance Meet. (CyberMagazine)
  3. Limits of voluntary AI Bill of Rights. (SLS)
  4. The biggest risk for businesses and AI. (CNBC)
  5. New Spanish DPA. (GlobalComplianceNews)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

From the Editor's Desk

From The Editor’s Desk: Episode 37: Season 2 – Reflections from February and Insights into March for Compliance Week

In this episode of ‘From the Editor’s Desk,’ Tom Fox visits with Aaron Nicodemus to discuss highlights from Compliance Week in January and February and take a look at what is coming down the pike in March, including the upcoming “Inside the Mind of the CCO” survey. They also begin to preview the 2026 National Conference in May.

Key highlights:

  • February Story Roundup
  • March AI Coverage Plans
  • CCO Survey Early Findings
  • Long Form Investigations Ahead
  • AI Governance Reality Check
  • TPRM Conference Teaser

Resources:

Aaron Nicodemus on LinkedIn

Compliance Week

AI Today in 5

AI Today in 5: February 27, 2026, The Have It Your (AI) Way at BK Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five AI stories from the business world, compliance, ethics, risk management, leadership, or general interest to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. Monitoring AI comms for forensic compliance. (FinTechGlobal)
  2. Pairing AI Voice Compliance with other types of Compliance. (UCToday)
  3. Banks are using AI to flag suspicious trades. (Bloomberg)
  4. A faster Nano Banana. (Bloomberg)
  5. BK uses AI to monitor employees’ friendliness. (Yahoo!)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

2 Gurus Talk Compliance

2 Gurus Talk Compliance – Episode 71 – The Dog Bite Edition

What happens when two top compliance commentators get together? They talk compliance, of course. Join Tom Fox and Kristy Grant-Hart in 2 Gurus Talk Compliance as they discuss the latest compliance issues in this week’s episode!

Stories this week include:

  • The Sony Hack and the consequences of a bad decision. (WSJ)
  • What CEOs are most worried about. (NYT)
  • The dog bite defense fails as a former coal executive is convicted of FCPA violations. (Law360)
  • A KPMG partner was fired for using AI to cheat on a test about AI. (FT)
  • What is compliance reconciliation? (FinTechGlobal)
  • Terrorists: What Is the Risk Landscape for Multinationals Operating in Mexico? – (Corporate Compliance Insights)
  • Messy Retaliation Allegations at Binance – (Radical Compliance)
  • The Many Risks of Mandating Employee AI Usage – (Radical Compliance)
  • Workers Are Afraid AI Will Take Their Jobs. They’re Missing the Bigger Danger – (WSJ)
  • BODYCAM: Florida man arrested after bizarre forklift and ATM joyride through streets – (CBS 12)

Resources:

Kristy Grant-Hart on LinkedIn

Prove Your Worth

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

Daily Compliance News

Daily Compliance News: February 26, 2026, The Why So Few Women CEOs Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you four compliance-related stories from the business world, compliance, ethics, risk management, leadership, or general interest to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network.

Top stories include:

  • What happens when companies demand that employees use AI? (WSJ)
  • Why so few women CEOs? (FT)
  • eBay finally settles Steiner harassment suit. (Reuters)
  • Alfred Sloan and objective organizations. (Bloomberg)
AI Today in 5

AI Today in 5: February 26, 2026, The Use AI or Lose Your Job Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five AI stories from the business world, compliance, ethics, risk management, leadership, or general interest to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. Treasury issues AI risks and compliance tools for financial services. (WVNS)
  2. EU AI Act enforcement begins. (DigWatch)
  3. Human in the Loop is needed for AI in healthcare. (HealthcareITNews)
  4. What happens when companies demand that employees use AI? (WSJ)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

AI Today in 5

AI Today in 5: February 25, 2026, The Spotting AI Fakes Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five AI stories from the business world, compliance, ethics, risk management, leadership, or general interest to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. No code AML. (FinTechGlobal)
  2. Applying AI in sanctions compliance. (FTI)
  3. AI agents for investment banking and HR. (Bloomberg)
  4. 4 AI strategies for healthcare. (Forbes)
  5. Tools to spot AI fakes. (NYT)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Blog

When AI Incidents Collide with Disclosure Law: A Unified Playbook for Compliance Leaders

There was a time when the risk of artificial intelligence could be discussed as a forward-looking innovation issue. That time has passed. AI governance now sits squarely at the intersection of operational risk, regulatory enforcement, and securities disclosure. For compliance professionals, the question is no longer whether AI risk will mature into a board-level issue. It already has.

If your organization deploys high-risk AI systems in the European Union, you face post-market monitoring and serious-incident reporting obligations under the EU AI Act. If you are a U.S. issuer, you face potential Form 8-K disclosure obligations under Item 1.05 when a cybersecurity incident becomes material. Add the NIST AI Risk Management Framework for severity evaluation and ISO 42001 governance expectations for evidence and documentation, and the compliance function stands at the crossroads of law, technology, and investor transparency.

The challenge is not understanding each framework individually. The challenge is integrating them into one operational escalation model. Today, we consider what that means for the Chief Compliance Officer.

The EU AI Act: Post-Market Monitoring Is Not Optional

The EU AI Act requires providers of high-risk AI systems to implement post-market monitoring systems. This is not a paper exercise. It requires structured, ongoing collection and analysis of performance data, including risks to health, safety, and fundamental rights. Where a “serious incident” occurs, providers must notify the relevant national market surveillance authority without undue delay. A serious incident includes events that result in death, serious harm to health, or a significant infringement of fundamental rights. The obligation is proactive and regulator-facing. Silence is not an option.

This means that if your AI-enabled hiring tool systematically discriminates, or your AI-driven medical device produces dangerous outputs, you may face mandatory reporting obligations in Europe even before your legal team finishes debating causation. The compliance implication is straightforward: you need an operational definition of “serious incident” embedded inside your incident response process. Waiting to interpret the statute after the event is not governance. It is risk exposure.
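
In code, that operational definition can start as a simple screening predicate that routes potentially serious incidents to counsel the moment they are identified. The sketch below loosely paraphrases the Act’s definition of a serious incident; it is a triage aid under stated assumptions, not legal advice, and the exact statutory elements should be confirmed with counsel.

```python
from dataclasses import dataclass

@dataclass
class IncidentFacts:
    death_or_serious_health_harm: bool
    fundamental_rights_infringement: bool
    critical_infrastructure_disruption: bool
    serious_property_or_environment_harm: bool

def is_serious_incident(f: IncidentFacts) -> bool:
    """Operational screen: True routes the event to counsel for the
    actual EU AI Act notification determination."""
    return (
        f.death_or_serious_health_harm
        or f.fundamental_rights_infringement
        or f.critical_infrastructure_disruption
        or f.serious_property_or_environment_harm
    )
```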

SEC Item 1.05: The Four-Business-Day Clock

Across the Atlantic, the Securities and Exchange Commission (SEC) has made its expectations equally clear. Item 1.05 of Form 8-K requires disclosure of material cybersecurity incidents within four business days after the registrant determines the incident is material. Here is where compliance professionals must lean forward: AI incidents can carry cybersecurity implications. Data exfiltration through model vulnerabilities, adversarial manipulation of training data, or unauthorized access to AI infrastructure may constitute cybersecurity incidents.

The clock does not start when the breach occurs. It starts when the company determines materiality. That determination must be documented, defensible, and timestamped. If your AI governance framework does not feed into your materiality assessment process, you have a structural weakness. Compliance must ensure that AI incident severity assessments are directly connected to the legal determination of materiality. The board will ask one question: When did you know, and what did you do? You must have an answer supported by contemporaneous documentation.
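
The four-business-day arithmetic is simple enough to automate, and automating it removes one source of argument during a crisis. A minimal sketch, assuming only weekends pause the clock and leaving real holiday calendars to production tooling:

```python
from datetime import date, timedelta

def item_105_deadline(materiality_determined: date) -> date:
    """Four business days after the materiality determination. Weekends are
    skipped; U.S. federal holidays would also pause the clock and require a
    real holiday calendar in production."""
    d, remaining = materiality_determined, 4
    while remaining:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday through Friday
            remaining -= 1
    return d

# A determination on Thursday 2026-03-05 points to Wednesday 2026-03-11.
print(item_105_deadline(date(2026, 3, 5)))
```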

NIST AI RMF: Speaking the Language of Severity

The NIST AI Risk Management Framework provides the operational vocabulary compliance teams need. Govern, Map, Measure, and Manage are not theoretical constructs. They form the backbone of defensible severity assessment. When an AI incident arises, you must evaluate:

  • Scope of affected stakeholders
  • Magnitude of operational disruption
  • Likelihood of recurrence
  • Financial exposure
  • Reputational harm

This impact-likelihood matrix is what transforms noise into signal. It allows the organization to distinguish between model drift requiring retraining and systemic failure requiring regulatory notification. Importantly, severity classification must not be left solely to engineering teams. Compliance, legal, and risk must participate in the evaluation. A purely technical assessment may underestimate regulatory or investor impact.

If the NIST severity rating is high-impact and high-likelihood, escalation must be automatic. There should be no debate about whether the issue reaches executive leadership. Governance means predetermined thresholds, not ad hoc discussions.
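
Predetermined thresholds can be literal: a small matrix function that returns the escalation route, with the high-impact, high-likelihood cell hard-wired to executive escalation. The levels and route names below are illustrative assumptions, not NIST-mandated values.

```python
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def severity_route(impact: Level, likelihood: Level) -> str:
    """Impact-likelihood matrix with a predetermined escalation threshold."""
    if impact == Level.HIGH and likelihood == Level.HIGH:
        return "auto_escalate_executive"  # no debate, per governance policy
    score = impact * likelihood           # 1 through 9
    if score >= 6:
        return "notify_compliance_legal_risk"
    if score >= 3:
        return "track_and_monitor"
    return "log_only"
```

Every rating below the automatic threshold still lands somewhere deliberate, which is the difference between a matrix and a meeting.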

ISO 42001: If It Is Not Logged, It Did Not Happen

ISO 42001, the emerging AI management system standard, adds another layer of discipline: documentation. It requires structured governance, defined roles, documented controls, and demonstrable evidence of monitoring and incident handling. For compliance professionals, this is where audit readiness becomes real. When regulators ask for logs, you must produce:

  • Model version identifiers
  • Training data provenance
  • Decision traces and outputs
  • Operator interventions
  • Access logs and export records
  • Timestamps and system configurations

In other words, you need a chain of custody for AI decision-making. Without logging discipline, you will not survive regulatory scrutiny. Worse, you will not survive shareholder litigation. ISO 42001 forces organizations to treat AI systems with the same governance rigor as financial controls under SOX. That alignment should not surprise anyone. Both concern trust in automated decision systems.
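
A chain of custody implies tamper evidence. One common technique, shown here as a sketch rather than anything ISO 42001 prescribes, is an append-only log in which each entry commits to the hash of its predecessor, so after-the-fact edits are detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_evidence(log: list[dict], record: dict) -> None:
    """Append a hash-chained entry; 'record' carries the model version,
    data provenance, decision trace, operator action, and so on."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "record": record,
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

audit_log: list[dict] = []
append_evidence(audit_log, {"model_version": "scoring-v4.2",
                            "operator": "j.doe", "action": "override"})
```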

One Incident, Multiple Obligations

Consider a practical scenario. A vulnerability in a third-party model component has compromised your AI-driven customer analytics platform. Sensitive customer data is exposed. The compromised system also produced biased credit scores during the attack window. You now face:

  • Potential serious incident reporting under the EU AI Act
  • Cybersecurity disclosure analysis under SEC Item 1.05
  • Data protection obligations under GDPR
  • Internal audit review of governance controls
  • Reputational fallout

If your organization handles each of these as separate tracks, you will lose time and coherence. Instead, you need a unified incident command structure with embedded regulatory triggers. As soon as the issue is identified, you preserve logs. Within 24 hours, severity scoring occurs under NIST criteria. Within 48 hours, the legal team evaluates materiality. By 72 hours, the evidence packet is assembled for board review. The board should receive:

  • Incident timeline
  • Severity classification
  • Regulatory reporting analysis
  • Financial exposure estimate
  • Remediation plan

This is not overkill. This is operational discipline.
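
That timeline is also easy to encode as predetermined milestones checked against the clock, so “where are we?” has a mechanical answer during an incident. The offsets below simply mirror the timeline sketched above; your runbook sets the real SLAs.

```python
from datetime import datetime, timedelta

# Milestone offsets from detection (T0), mirroring the timeline above.
MILESTONES = {
    "preserve_logs": timedelta(hours=0),
    "nist_severity_scoring": timedelta(hours=24),
    "legal_materiality_review": timedelta(hours=48),
    "board_evidence_packet": timedelta(hours=72),
}

def overdue(detected_at: datetime, completed: set[str], now: datetime) -> list[str]:
    """List milestones past their deadline and not yet marked complete."""
    return [
        step for step, offset in MILESTONES.items()
        if step not in completed and now > detected_at + offset
    ]
```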

The Board’s Oversight Obligation

Boards are increasingly being asked about AI governance. Institutional investors want transparency. Regulators want accountability. Plaintiffs’ lawyers want leverage. Directors should demand:

  1. Clear definitions of serious AI incidents.
  2. Pre-established escalation thresholds.
  3. Integrated disclosure decision protocols.
  4. Evidence preservation policies aligned with ISO standards.
  5. Regular tabletop exercises involving AI scenarios.

If your board has not run an AI incident simulation that includes SEC disclosure timing and EU reporting triggers, it is time to schedule one. Calm leadership during a crisis does not happen spontaneously. It is built through preparation.

The CCO’s Moment

This convergence of AI regulation and securities disclosure creates an opportunity for compliance professionals. The CCO can position the compliance function as the integrator between engineering, legal, cybersecurity, and investor relations. That requires proactive steps:

  • Embed AI into enterprise risk assessments.
  • Update incident response playbooks to include AI-specific triggers.
  • Align AI logging architecture with evidentiary standards.
  • Train leadership on materiality determination for AI incidents.
  • Report AI governance metrics to the board quarterly.

The compliance function should not be reacting to AI innovation. It should be shaping the governance architecture around it.

Governance Is Strategy

Too many organizations treat AI governance as defensive compliance. That mindset is outdated. Effective governance builds trust. Trust drives adoption. Adoption drives competitive advantage.

A well-documented post-market monitoring system demonstrates operational maturity. A disciplined severity assessment process demonstrates strong internal control. Transparent disclosure builds investor confidence. Conversely, fragmented incident handling erodes credibility. The market will reward companies that demonstrate responsible AI oversight. Regulators will scrutinize those who do not.

Conclusion: Integration Is the Answer

The EU AI Act, SEC Item 1.05, NIST AI RMF, and ISO 42001 are not competing frameworks. They are complementary lenses on the same reality: AI systems create risk that must be monitored, measured, disclosed, and documented.

Compliance leaders who integrate these frameworks into a single escalation and reporting architecture will protect their organizations. Those who treat them as separate checklists will struggle. AI risk is no longer hypothetical. It is operational, regulatory, and financial. The compliance function must be ready before the next incident occurs. Because when it does, the clock will already be ticking.