State AI Laws Are No Longer Background Noise: What Washington and Colorado Mean for Your Compliance Program

If you run a compliance program in 2026, you have a new operational reality: state legislatures are no longer waiting on federal agencies to define the rules of the road for artificial intelligence. They are writing the rules themselves, and they are doing so in ways that address the day-to-day mechanics of product design, customer communications, safety operations, and third-party governance. Two developments illustrate the direction of travel.

First, the state of Washington has been advancing legislation aimed at “companion”-style conversational AI, meaning systems designed to sustain ongoing dialogue with users in a way that resembles a relationship rather than a single transaction. These proposals generally focus on transparency, user protection, and special safeguards for minors, including restrictions around sexual content and stronger expectations for detecting and responding to self-harm signals.

Second, Colorado has enacted a broad AI governance framework focused on preventing algorithmic discrimination in high-impact use cases. The details matter, but the theme matters more: organizations that develop or deploy certain AI systems will be expected to show their work through risk management, impact assessments, notices, and documentation that can withstand regulatory scrutiny.

For compliance professionals, the key point is this: these are not “AI policy” conversations. These are operational controls conversations. They will change what your teams build, how they monitor, and how they document decisions.

1. Washington

Companion chatbots move from UX decision to regulated interaction.

Washington’s companion-chatbot approach targets the behavioral reality of these systems. A chatbot that answers a question is one thing; a chatbot designed to keep a user engaged, build intimacy, and act as a persistent presence is another. When a system is positioned as a “partner” in any form, the risk profile shifts from information quality to user safety, manipulation, dependency, and minors’ exposure. From a compliance standpoint, this is where you should focus:

1. Identity and disclosure are now control requirements, not marketing choices.

If your product presents as conversational, personable, or relationship-like, you should treat “clear disclosure that the user is interacting with AI” as a baseline control. Do not bury it in terms and conditions. Put it in the flow where the user forms expectations.

2. Minor protections move into engineering and content governance.

If you have minor users, or you cannot reliably exclude them, you need controls designed for minors by default. That means age gating where appropriate, content filters tuned for sexual content and grooming patterns, and escalation playbooks for self-harm indicators. It also means you should think about what “engagement optimization” looks like in a relationship-shaped interface. Features that are acceptable in a shopping cart can be unacceptable in a companion dynamic.

3. Self-harm response is an operational readiness question.

If your system can detect self-harm language, you must decide what you will do when you detect it. You need a triage policy, documentation of thresholds, and a human-in-the-loop escalation route when risk is elevated. The compliance failure here is not a false positive. The failure is having no plan, no logging, and no accountable owner when the system raises a signal.
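To make that concrete, here is a minimal Python sketch of a triage route. The threshold values and action names are invented for illustration; your real thresholds belong in your documented policy, and the logging call stands in for whatever audit trail your platform uses.

```python
import logging

logger = logging.getLogger("self_harm_triage")

# Illustrative thresholds only; real values belong in your documented policy.
REVIEW_THRESHOLD = 0.4
ESCALATE_THRESHOLD = 0.8

def triage_self_harm_signal(session_id: str, risk_score: float) -> str:
    """Route a detected self-harm signal and log the decision for audit."""
    if risk_score >= ESCALATE_THRESHOLD:
        action = "escalate_to_human_reviewer"  # human-in-the-loop route
    elif risk_score >= REVIEW_THRESHOLD:
        action = "queue_for_review"
    else:
        action = "monitor"
    # Log every signal, even low-risk ones: the failure mode is having no
    # record and no accountable owner, not a false positive.
    logger.info("session=%s score=%.2f action=%s", session_id, risk_score, action)
    return action
```

The point of the sketch is the shape, not the numbers: a documented threshold, a named escalation route, and a log entry for every signal.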

What to do now: create a “companion AI” product classification and require enhanced safeguards if the product meets that definition. That classification step is a compliance control because it forces consistent governance. It prevents the slow drift from “helpful assistant” to “companion” without any risk re-assessment.
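A classification step like this can be as simple as a shared enum and a gate check. The class names below mirror the four categories in the checklist at the end of this post; the safeguard labels are placeholders you would map to your own control catalog. This is a sketch, not a reference implementation.

```python
from dataclasses import dataclass, field
from enum import Enum

class SystemClass(Enum):
    INFORMATIONAL_ASSISTANT = "informational_assistant"
    TRANSACTIONAL_ASSISTANT = "transactional_assistant"
    COMPANION_CONVERSATIONAL = "companion_conversational"
    HIGH_IMPACT_DECISION_SUPPORT = "high_impact_decision_support"

# Placeholder safeguard labels; map these to your own control catalog.
COMPANION_SAFEGUARDS = {
    "ai_disclosure_in_flow",
    "age_gating",
    "sexual_content_filter",
    "self_harm_escalation_playbook",
    "interaction_logging",
}

@dataclass
class AISystem:
    name: str
    owner: str                       # a named person, not a team
    system_class: SystemClass
    safeguards: set[str] = field(default_factory=set)

def missing_safeguards(system: AISystem) -> set[str]:
    """Return required safeguards a companion-class system has not implemented."""
    if system.system_class is SystemClass.COMPANION_CONVERSATIONAL:
        return COMPANION_SAFEGUARDS - system.safeguards
    return set()
```

A gate check like `missing_safeguards` is what stops the slow drift: any re-classification to the companion class immediately surfaces the safeguards the product has not yet implemented.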

2. Colorado

Anti-discrimination AI controls that read like a compliance program.

Colorado’s AI governance approach is a preview of what many states may do next: treat AI as a source of civil rights risk and require organizations to demonstrate reasonable care. The thrust is simple: if you use AI in a high-impact context, you should be able to explain how you prevent discriminatory outcomes and monitor for them. Even if you do not operate in Colorado, this framework is a gift to compliance professionals because it translates AI risk into familiar compliance artifacts. Here is how to map it into your program:

1. Define “high-impact” use cases the way you define “high-risk” third parties.

High-impact areas usually include employment, housing, credit, insurance, education, and other contexts where decisions materially affect individuals. Build an inventory. You cannot govern what you do not list. Make the business identify which systems are used for screening, ranking, eligibility, pricing, or access.
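One way to operationalize that, sketched in Python with illustrative labels: tag each system in the inventory with the decision functions it performs and flag anything that touches a consequential one.

```python
# Decision functions that mark a system as high-impact; extend to fit your inventory.
HIGH_IMPACT_FUNCTIONS = {"screening", "ranking", "eligibility", "pricing", "access"}

def is_high_impact(decision_functions: set[str]) -> bool:
    """Flag any system that performs at least one consequential decision function."""
    return bool(decision_functions & HIGH_IMPACT_FUNCTIONS)

# Example: a resume-ranking tool is high-impact; a FAQ bot is not.
assert is_high_impact({"ranking", "summarization"}) is True
assert is_high_impact({"faq_answering"}) is False
```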

2. Require an impact assessment that reads like a control memo.

Your impact assessment should not be a philosophical essay. It should answer concrete questions:

  • What decision does the system influence?
  • What data does it use, and what data does it not use?
  • What bias testing was performed and how often?
  • What performance drift indicators are monitored?
  • What human review exists, and when does it trigger?
  • What is the consumer notice process and the appeal or correction route?

Treat this like any other compliance documentation: consistent format, accountable owner, version control, and retention.
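One way to enforce that consistency is to express the memo as a typed record whose fields mirror the questions above, so an assessment cannot reach sign-off with gaps. The field names here are illustrative, not drawn from the Colorado statute.

```python
from dataclasses import dataclass, fields

@dataclass
class ImpactAssessment:
    system_name: str
    decision_influenced: str         # what decision the system influences
    data_used: str
    data_excluded: str               # data the system deliberately does not use
    bias_testing: str                # what was performed, and how often
    drift_indicators: str            # performance drift metrics monitored
    human_review_trigger: str        # when human review kicks in
    consumer_notice_process: str
    appeal_or_correction_route: str
    accountable_owner: str
    version: str

def incomplete_fields(assessment: ImpactAssessment) -> list[str]:
    """Fields still empty; an assessment should not reach sign-off with gaps."""
    return [f.name for f in fields(assessment) if not getattr(assessment, f.name)]
```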

3. Put vendors inside your governance perimeter.

If a vendor supplies the model, you still own the outcome when you deploy it. Require contractual commitments around testing, documentation, model changes, incident notification, and audit rights. If the vendor refuses basic transparency, your risk posture should treat that as a red flag, not a procurement inconvenience.

4. Align to enforcement reality.

In many regulatory regimes, enforcement is driven by documentation and reasonableness. Your program should be able to show a regulator what you did before an incident, not only what you did after a complaint.

3. The Shared Lesson: AI Governance Is Becoming User-Safety Governance

Washington and Colorado might look different, but the compliance lesson is the same: regulators are moving toward protecting individuals from AI-enabled harm, whether that harm is discrimination in consequential decisions or manipulation and exposure risks in relationship-shaped systems. This means your program needs three capabilities:

Capability 1: Inventory with purpose.

Create a single inventory that captures system type, purpose, user population, training and input data sources, and whether the system affects rights, access, or safety. Assign an owner for each system. An owner is not a team. It is a named person.
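As a sketch, an inventory record might look like the following. Every field name is illustrative, and the owner value is a hypothetical named person.

```python
from dataclasses import dataclass

@dataclass
class InventoryRecord:
    system_name: str
    system_type: str                 # e.g., "companion_conversational"
    purpose: str
    user_population: str             # e.g., "general_public_incl_minors"
    data_sources: list[str]          # training and input data sources
    affects_rights_access_safety: bool
    owner: str                       # a named person, not a team

record = InventoryRecord(
    system_name="support_chat",
    system_type="informational_assistant",
    purpose="answer product questions",
    user_population="adult_customers",
    data_sources=["public_docs", "support_tickets"],
    affects_rights_access_safety=False,
    owner="jane.doe@example.com",    # hypothetical named owner
)
```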

Capability 2: Controls embedded in product and operations.

Disclosure is a product control. Age gating is a product control. Self-harm escalation is an operations control. Bias testing is a model governance control. Logging is a forensic control. Compliance must stop treating these as “engineering decisions” and start treating them as “regulatory controls.”

Capability 3: Incident readiness built for AI.

You need a playbook for AI incidents: model drift, exposure to unsafe content, discriminatory outcomes, vendor model changes, prompt injection leading to harmful outputs, and data leakage through conversational interfaces. The playbook should include detection, triage, communications, remediation, and documentation.
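A simple way to keep the playbook honest is to represent it as data and check it for gaps. The incident types and phases below come straight from the list above; the structure itself is just one plausible shape.

```python
INCIDENT_TYPES = [
    "model_drift",
    "unsafe_content_exposure",
    "discriminatory_outcome",
    "vendor_model_change",
    "prompt_injection",
    "conversational_data_leak",
]
PHASES = ["detection", "triage", "communications", "remediation", "documentation"]

def runbook_gaps(runbook: dict[str, dict[str, str]]) -> list[tuple[str, str]]:
    """List every (incident, phase) pair that still lacks a documented procedure."""
    return [
        (incident, phase)
        for incident in INCIDENT_TYPES
        for phase in PHASES
        if not runbook.get(incident, {}).get(phase)
    ]

# An empty runbook has 30 gaps; a tabletop exercise should drive that to zero.
assert len(runbook_gaps({})) == len(INCIDENT_TYPES) * len(PHASES)
```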

A practical checklist you can implement next week

  1. Classify systems into: informational assistant, transactional assistant, companion-style conversational system, and high-impact decision support.
  2. Assign owners and require quarterly attestations for high-impact and companion categories.
  3. Standardize disclosures with a template approved by legal, compliance, and product.
  4. Implement safeguards for minors by default where age cannot be verified with confidence.
  5. Create a self-harm escalation protocol with thresholds, human review steps, and logging requirements.
  6. Run bias testing on high-impact systems, document the results, and set drift triggers (a minimal sketch follows this list).
  7. Update vendor contracts to require transparency, change-control notifications, and audit support.
  8. Build an AI incident response runbook and conduct a tabletop exercise with product, legal, and customer support teams.
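For item 6, here is one minimal Python sketch of a disparity screen. The 0.8 trigger is modeled on the four-fifths screen long used in employment analytics; it is an illustration, not a threshold any statute prescribes, and your documented policy sets the real value.

```python
def selection_rate(decisions: list[int]) -> float:
    """Share of positive outcomes (1 = selected/approved, 0 = not)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected: list[int], reference: list[int]) -> float:
    """Ratio of the protected group's selection rate to the reference group's."""
    return selection_rate(protected) / selection_rate(reference)

# Illustrative trigger modeled on the four-fifths screen; set your own in policy.
DRIFT_TRIGGER = 0.8

protected_outcomes = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 30% selected
reference_outcomes = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]   # 60% selected

ratio = adverse_impact_ratio(protected_outcomes, reference_outcomes)
if ratio < DRIFT_TRIGGER:
    print(f"ratio={ratio:.2f}: below trigger, open a review ticket")
```

Whatever metric you choose, the control is the same: a documented threshold, a recurring test, and a logged action when the test trips.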

Closing thought

Compliance professionals have been waiting for the “AI rulebook.” The states are writing it in real time. The most effective response is not to wait for perfect clarity. It is to install governance that scales: inventory your systems, document your assessments, embed controls, and build incident readiness. If you do those four things well, Washington and Colorado will not feel like surprise mandates. They will feel like confirmation that you built the right program early.