Texas Steps Into the AI Ring: What a “Responsible AI Governance Act” Means for Companies

Contrary to popular belief, and even to Governor Abbott’s pronouncements, there is some regulation in the great state of Texas. With the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), Texas made a clear statement: artificial intelligence is no longer just a product feature or a data science experiment. It is a regulated business risk. If your organization builds, buys, deploys, or relies on AI to make decisions about people, Texas is signaling that you should be able to explain what the system does, prove you are not using it in harmful ways, and demonstrate governance over it.

The Texas Responsible Artificial Intelligence Governance Act creates a statewide framework with five pillars: (1) prohibitions on certain harmful or discriminatory uses, (2) limits on biometric surveillance, (3) disclosure requirements in defined contexts, (4) oversight infrastructure, including a regulatory sandbox, and (5) enforcement with notable safe harbors. That is not “innovation-killing.” It is Texas doing what Texas does: setting boundaries on unacceptable conduct while leaving room for businesses to move fast within guardrails.

Today, we begin a two-part look at state regulation of AI. In Part 1, we consider the Texas approach. Tomorrow, in Part 2, we review the federal attempt to eviscerate all state AI regulation, claiming federal preemption through the Trump Administration’s sweeping Executive Order titled “Ensuring a National Policy Framework for Artificial Intelligence.”

1. Prohibited Uses: Drawing Hard Lines Around Harm and Discrimination

The most important practical takeaway for a corporate audience is this: Texas is moving toward outcome-focused restrictions, not just paperwork. When a law prohibits “harmful or discriminatory uses,” the question becomes: harmful to whom, and in what context? For most companies, the risk zones are predictable:

  • Employment: recruiting, resume screening, interview scoring, promotion, performance evaluation, and workforce reduction.
  • Credit and financial decisions: underwriting, pricing, and fraud flags that drive adverse decisions.
  • Housing and insurance: eligibility, pricing, and claims triage.
  • Customer access: KYC onboarding, account shutdowns, and refund decisions.
  • Public-facing services: education, health-related triage, and benefits navigation.

From a compliance program perspective, this pushes you toward two controls you should already want:

  • A documented AI use-case inventory, categorized by impact level.
  • A discrimination and fairness control, meaning pre-deployment testing plus monitoring, and a mechanism to remediate.

If you are thinking, “We do not use AI for those decisions,” the next question is whether the vendor tool uses AI under the hood. Texas-style statutes tend to treat “deployment” broadly, and regulators are rarely impressed by “the vendor did it” as a defense.
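
As a minimal sketch of what those two controls can look like in practice (the field names, impact tiers, and escalation rule below are illustrative assumptions, not statutory categories; assumes Python 3.10+), an inventory entry plus a review check might be as simple as:

    from dataclasses import dataclass
    from enum import Enum

    class Impact(Enum):
        LOW = "low"        # internal tooling, no decisions about people
        MEDIUM = "medium"  # informs but does not drive decisions about people
        HIGH = "high"      # drives employment, credit, housing, or access decisions

    @dataclass
    class AIUseCase:
        name: str
        owner: str                     # accountable business owner
        vendor: str | None             # None if built in-house
        decision_context: str          # e.g., "resume screening"
        impact: Impact
        fairness_tested: bool = False  # pre-deployment testing completed?
        monitored: bool = False        # ongoing monitoring in place?

    def needs_review(uc: AIUseCase) -> bool:
        """Escalate high-impact systems lacking testing or monitoring."""
        return uc.impact is Impact.HIGH and not (uc.fairness_tested and uc.monitored)

    inventory = [
        AIUseCase("resume-screener", "HR Ops", "VendorX", "resume screening", Impact.HIGH),
    ]
    print([uc.name for uc in inventory if needs_review(uc)])  # ['resume-screener']

Note the vendor field: a third-party tool with AI under the hood belongs in the inventory just as much as an in-house model.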

2. Biometric Surveillance: The Texas Red Line

TRAIGA restricts “unauthorized biometric surveillance.” In plain English, that means the law is concerned with face recognition, voiceprints, gait recognition, and other identifiers used to track or identify people.

Corporate implications typically fall into three areas:

  • Physical security: access control systems, visitor management, and camera analytics.
  • Retail and venues: loss prevention, “known offender” lists, and customer behavior analytics.
  • Workplace monitoring: time clocks using facial recognition and productivity monitoring that drifts into biometrics.

If you use biometric tools, your governance should address:

  • Lawful basis and authorization—consent, notice, contractual, and policy controls.
  • Purpose limitation—what it is used for and what it is not used for.
  • Retention and deletion—biometric data cannot be a forever asset.
  • Vendor constraints—no secondary use, no model training on your biometric data unless explicitly approved.
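
On retention and deletion in particular, the control works best when it is mechanical rather than aspirational. A minimal sketch, assuming a hypothetical 90-day internal policy window (not a figure from the statute):

    from datetime import datetime, timedelta, timezone

    RETENTION = timedelta(days=90)  # assumed internal policy window, not a statutory number

    def is_expired(captured_at: datetime, now: datetime | None = None) -> bool:
        """A biometric record past its retention window must be purged."""
        now = now or datetime.now(timezone.utc)
        return now - captured_at > RETENTION

    def purge_expired(records: list[dict]) -> list[dict]:
        """Keep only records still inside the retention window."""
        return [r for r in records if not is_expired(r["captured_at"])]

    records = [
        {"subject": "visitor-123", "captured_at": datetime.now(timezone.utc) - timedelta(days=200)},
        {"subject": "visitor-456", "captured_at": datetime.now(timezone.utc) - timedelta(days=5)},
    ]
    print([r["subject"] for r in purge_expired(records)])  # ['visitor-456']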

Even if Texas is not your primary market, this is the type of requirement that quickly becomes “lowest common denominator” compliance across a multi-state footprint.

3. Disclosure: The Practical “Tell the Truth” Requirement

TRAIGA also requires “clear AI disclosures” in some contexts. For corporate teams, disclosure obligations usually arise when AI materially interacts with a person or influences a decision that affects them.

Think of disclosure as a three-part discipline:

  • When you disclose: at the point of interaction or decision.
  • What you disclose: that AI is used, what it is used for, and how a person can seek assistance or appeal.
  • How you disclose: clear, conspicuous, and not buried in terms and conditions.
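
A minimal sketch of centralizing that discipline in one reusable object (the wording and contact details are hypothetical placeholders, not statutory language):

    from dataclasses import dataclass

    @dataclass
    class AIDisclosure:
        used_for: str        # what the AI is used for in this interaction
        appeal_channel: str  # how a person can seek assistance or appeal

        def render(self) -> str:
            # Clear and conspicuous, shown at the point of interaction,
            # not buried in terms and conditions.
            return (
                f"This interaction uses an automated (AI) system for {self.used_for}. "
                f"To reach a human or appeal a decision, contact {self.appeal_channel}."
            )

    notice = AIDisclosure(used_for="routing your support request",
                          appeal_channel="support@example.com")
    print(notice.render())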

The compliance opportunity here is that disclosure forces operational clarity. If you cannot describe the system in plain language, you almost certainly do not have adequate control over it.

4. Oversight and a Regulatory Sandbox: “Governance With a Business On-Ramp”

A state oversight body, along with a “sandbox” approach, signals that Texas wants responsible experimentation. Done right, a sandbox creates a controlled pathway to test higher-risk systems with agreed guardrails, transparency, and reporting.

For companies, the sandbox concept maps to an internal capability you should build anyway:

  • Pilot governance: criteria for what can be tested, where, with whom, and with what monitoring.
  • Kill switches: the ability to stop or roll back quickly.
  • Post-pilot review: documented lessons learned before scaling.
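
The kill switch is the piece most often hand-waved. A minimal sketch, assuming a hypothetical feature flag and a deterministic fallback path (neither is prescribed by the Act):

    AI_PILOT_ENABLED = True  # hypothetical feature flag; flip to False to halt the pilot

    def rule_based_decision(application: dict) -> str:
        """Known-safe fallback used whenever the AI pilot is switched off."""
        return "manual_review"

    def ai_decision(application: dict) -> str:
        """Stand-in for the piloted model; replace with the real scoring call."""
        return "approve" if application.get("score", 0) > 700 else "manual_review"

    def decide(application: dict) -> str:
        # Kill switch: one flag routes every decision back to the known-safe path.
        if not AI_PILOT_ENABLED:
            return rule_based_decision(application)
        return ai_decision(application)

    print(decide({"score": 720}))  # 'approve' while the pilot is live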

This is compliance that enables innovation rather than blocking it.

5. Enforcement: Centralized, Cure-Oriented, and Compliance-Friendly

Enforcement authority under the Texas Responsible Artificial Intelligence Governance Act is deliberately centralized in the Texas Attorney General’s office. That decision matters. By excluding a private right of action, the statute avoids the litigation-driven compliance chaos that has plagued other regulatory regimes. Instead of trial lawyers driving outcomes, Texas has opted for a single, accountable enforcement authority with discretion, consistency, and an institutional understanding of regulatory tradeoffs.

Equally important is the statute’s 60-day cure period. This provision reflects a mature regulatory philosophy: most compliance failures in emerging technologies are not rooted in bad intent but in complexity, novelty, and rapid innovation cycles. The law gives companies the opportunity to remediate, document corrective action, and improve governance before penalties attach. That is precisely how effective compliance programs are built.

The explicit safe harbor for organizations aligned with recognized frameworks such as the NIST AI Risk Management Framework or ISO/IEC 42001 further reinforces this approach. Texas is not inventing bespoke standards in isolation. It is rewarding companies that invest in globally recognized, risk-based governance systems.

This is not a punitive regulation designed to extract fines or score political points. It is a governance regime intended to incentivize foresight, structure, and accountability. For compliance professionals, that is the right signal at exactly the right moment.

Join us tomorrow as we consider what the attempted federal preemption via Executive Order might mean for Texas and other states.