Categories
Blog

When Your AI Chat Becomes Exhibit A: What United States v. Heppner Means for Compliance Professionals

There are court rulings that quietly shape doctrine, and others that detonate assumptions. The recent decision by Judge Jed Rakoff of the Southern District of New York in United States v. Heppner falls into the latter category. In a February 10, 2026, ruling, the Court made clear that neither the attorney-client privilege nor the work-product doctrine protected materials generated through a third-party generative AI platform. In plain English, what a defendant typed into a public AI system was discoverable.

For compliance professionals, this is not a narrow litigation footnote. It is a flashing red warning light. The era of casual AI experimentation inside corporations is over. Governance must now catch up with adoption. Today, we will consider the Court’s ruling and why it matters to a Chief Compliance Officer.

The Court’s Core Holding

The defendant in Heppner had used a third-party generative AI tool to draft and refine materials that were later shared with counsel. When prosecutors sought production, the defense argued that these materials were protected by privilege and work-product protections. The court disagreed.

The reasoning was straightforward and, frankly, predictable:

  • The AI tool was not an attorney.
  • The terms of service did not guarantee confidentiality and allowed retention or potential disclosure of inputs.
  • The materials were not prepared at the direction of counsel for the purpose of obtaining legal advice.
  • Simply sending AI-generated drafts to counsel after the fact did not, by itself, retroactively cloak them in privilege.

This is a fundamental point: privilege attaches to communications made in confidence for the purpose of seeking legal advice. When an employee enters sensitive facts into a third-party AI platform that disclaims confidentiality, that “confidence” is at best questionable. When those drafts are created independently of counsel’s direction, work-product arguments grow thin. The court did not create a new doctrine. It applied existing principles to new technology. That is precisely why this ruling is so important.

The Illusion of Confidentiality

Many business users treat AI platforms like a digital notebook. They assume that because the interaction occurs on a screen and feels private, it is private. That assumption is dangerous. Public and consumer AI platforms often reserve the right to store, analyze, or use inputs for service improvement. Even when vendors promise limited retention, those commitments may not meet the strict confidentiality standards necessary to preserve privilege. From a legal perspective, once you introduce a third party without adequate confidentiality protections, you risk waiving your rights.

The compliance lesson is blunt: generative AI is not your lawyer, and it is not your secure internal memo system. This is where governance intersects with culture. If employees are entering investigative summaries, draft responses to regulators, internal audit findings, or potential misconduct narratives into public AI tools, you are manufacturing discoverable evidence. That is not a hypothetical risk. That is now a litigated reality.

Why This Is a Board-Level Issue

The Department of Justice has made clear through the Evaluation of Corporate Compliance Programs (ECCP) that companies must identify and manage emerging risks. Artificial intelligence is no longer emerging. It is embedded in operations, marketing, finance, and legal workflows. The Heppner ruling converts AI usage from a technology convenience into a legal risk category. Boards of Directors should be asking:

  • Do we have an inventory of AI tools used across the enterprise?
  • Are employees permitted to input confidential, regulated, or legally sensitive information into third-party platforms?
  • Have we reviewed the vendor’s terms of service regarding confidentiality, retention, and data ownership?
  • Are legal and compliance functions involved in approving AI deployments?

If the answer to any of these questions is uncertain, there is a governance gap. AI governance is no longer solely about bias, explainability, or regulatory compliance. It is also about preserving privilege, managing litigation risk, and managing evidence.

Privilege Cannot Be Recreated After the Fact

One of the most significant aspects of the ruling is the rejection of “retroactive privilege.” Sending AI-generated content to counsel after it is created does not transform it into protected communication. This matters for compliance investigations. Consider the following scenario:

An internal report of potential misconduct surfaces. An employee uses a public AI tool to summarize the facts and generate possible legal arguments before reaching out to in-house counsel. That summary now exists outside any protected legal channel. The vendor may retain it. It may be discoverable.

By the time counsel becomes involved, the privilege damage may already be done. The message for compliance teams is clear: legal engagement must precede, or at least direct, sensitive analysis, not follow it.

Work Product Is Not a Safety Net

Some may argue that AI-assisted drafting in anticipation of litigation should fall under the work-product doctrine. The court in Heppner was not persuaded. Work-product protection generally applies to materials prepared by or for an attorney in anticipation of litigation. When individuals independently generate content using AI tools without counsel’s direction, that protection is far from guaranteed. Compliance professionals should not assume that labeling a document “prepared in anticipation of litigation” will insulate AI-generated material. Courts will look at substance over form.

Practical Steps for Compliance Leaders

This ruling demands an operational response from every CCO. Here are some steps every compliance program should consider.

1. Treat Third-Party AI as Non-Confidential by Default

Unless you have a contractual, enterprise-level arrangement with robust confidentiality provisions and clear data controls, assume that information entered into a third-party AI platform is not protected. This default posture should be reflected in policy language.

2. Update Acceptable Use Policies

Your code of conduct and IT policies should explicitly address the use of generative AI. Prohibit the entry of:

  • Privileged communications.
  • Investigation details.
  • Personally identifiable information.
  • Trade secrets.
  • Sensitive regulatory communications.

Policy must move from general warnings to specific examples.

3. Involve Legal in AI Governance

AI procurement should not be a purely IT function. Legal and compliance must review vendor terms, especially around:

  • Data retention.
  • Subprocessor use.
  • Confidentiality obligations.
  • Audit rights.
  • Breach notification.

If you cannot articulate how your AI vendor protects inputs, you cannot defend privilege claims.

4. Implement Training That Reflects Real Risk

Annual compliance training should now include explicit guidance on AI usage. Employees should understand that entering confidential information into public AI tools can waive privilege and render it discoverable. Training should include practical scenarios. The objective is behavioral change, not abstract awareness.

5. Establish Secure AI Environments for Legal Work

If your organization intends to use AI in legal or investigative contexts, consider enterprise solutions that:

  • Operate within your controlled environment.
  • Restrict data sharing.
  • Provide contractual confidentiality.
  • Maintain clear audit logs.

Even then, legal oversight is essential. Secure does not automatically mean privileged.

6. Align with Litigation Hold Procedures

AI interaction logs may constitute discoverable material. Ensure that your litigation hold processes account for AI-generated content. If your organization logs prompts and outputs, those logs may fall within the scope of preservation obligations. Ignoring this dimension creates spoliation risk.
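To make this concrete, here is a minimal sketch of how AI interaction logs might be scoped to a litigation hold. It assumes a hypothetical JSONL log with user, timestamp, prompt, and response fields and timezone-aware ISO-8601 timestamps; actual logging formats and preservation workflows will differ, and this is an illustration, not a preservation standard.

import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical hold parameters: custodians under hold and the preservation window.
CUSTODIANS = {"jdoe@example.com", "asmith@example.com"}
HOLD_START = datetime(2025, 6, 1, tzinfo=timezone.utc)
HOLD_END = datetime(2026, 2, 10, tzinfo=timezone.utc)

def preserve_ai_logs(log_path: Path, hold_dir: Path) -> int:
    """Copy in-scope prompt/response records into a hold folder; return the count."""
    hold_dir.mkdir(parents=True, exist_ok=True)
    preserved = 0
    with log_path.open() as src, (hold_dir / "preserved.jsonl").open("a") as dst:
        for raw in src:
            record = json.loads(raw)
            ts = datetime.fromisoformat(record["timestamp"])  # assumes tz-aware ISO-8601
            if record["user"] in CUSTODIANS and HOLD_START <= ts <= HOLD_END:
                dst.write(raw)  # preserve the record verbatim, prompt and output alike
                preserved += 1
    return preserved

The point of the sketch is narrow: if prompts and outputs are logged, they are records, and your hold process needs a way to find and freeze them.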

The Cultural Dimension

Technology adoption inside companies often outruns governance. Employees experiment. Business units optimize. Productivity improves. Compliance arrives later. That sequencing is no longer sustainable. The Heppner ruling should catalyze a shift from reactive to proactive governance. AI usage must be mapped, risk-ranked, and monitored, just as third-party intermediaries, high-risk markets, and financial controls are. If your risk assessment does not explicitly include generative AI, it is incomplete.

Connecting to the DOJ’s Expectations

The DOJ has repeatedly emphasized dynamic risk assessment. Artificial intelligence now clearly falls within the scope of corporate compliance evaluation. Prosecutors will not be sympathetic to arguments that “everyone was using it” or that policies were silent. They will ask:

  • Did the company identify AI as a risk area?
  • Did it implement controls?
  • Did it train employees?
  • Did it monitor usage?
  • Did it respond to incidents?

The answers to those questions will influence charging decisions, resolutions, and penalty calculations.

A Final Word: Convenience Versus Control

Generative AI is transformative. It enhances drafting, analysis, and research. It can elevate compliance operations if deployed thoughtfully. However, convenience without control is exposure. The lesson of United States v. Heppner is not that AI should be avoided. It is that AI must be governed with the same rigor as any other high-impact enterprise tool.

Privilege is fragile. Once waived, it cannot be restored. In a world where a chat prompt can become an exhibit, compliance professionals must lead the charge in redefining responsible AI use. If you are a chief compliance officer, this is your moment. Update your policies. Engage your board. Coordinate with legal and IT. Embed AI governance into your compliance framework. Because the next time an AI conversation surfaces in discovery, you do not want to explain why your program treated it like a harmless experiment.

Categories
AI Today in 5

AI Today in 5: February 19, 2026, The End of Stores Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Will AI shopping agents mean the end of stores? (PYMNTS)
  2. How Internal Audit must respond to the EU AI Act. (TeamMate)
  3. Re-writing the regulatory risk playbook. (Forbes)
  4. Public-Private Initiative to strengthen risk management for AI. (DOT)
  5. Dr. Oz wants AI avatars for rural healthcare. (NPR)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Daily Compliance News

Daily Compliance News: February 19, 2026, The Gambler Takes the Stand Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Connected workflows for compliance. (Spark)
  • Commit $600MM in fraud, no worries, just donate to Trump. (NYT)
  • Write the facts, get fired by Trump. (FT)
  • Tom Goldstein takes the stand in his criminal tax trial. (Reuters)

Categories
Blog

Embedded Explainability: Turning Principles into Proof

Embedded explainability is the design choice to build “the why” directly into a system as it operates, rather than bolting on an explanation after the fact. In practical terms, it means the model or decision engine is instrumented to surface the key factors that drove a specific output as the output is delivered. In a compliance, risk, or fraud context, this can include reason codes tied to specific data features, a clear confidence score, the policy or control implicated, and a short narrative that translates technical drivers into business language. The point is not to turn every decision into a science project; the point is to make explanations an always-on product requirement, so investigators, managers, and auditors can quickly understand what the system saw, why it escalated, and what evidence supports the action.
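As an illustration, here is a minimal sketch of what such an always-on explanation payload might look like in code. The field names, codes, and values are hypothetical, not a prescription; in a real system they would come from your decision engine.

from dataclasses import dataclass

@dataclass
class DecisionExplanation:
    """Explanation emitted with each decision as it happens, not reconstructed later."""
    decision_id: str
    outcome: str             # e.g., "escalate", "clear", "hold"
    confidence: float        # engine score between 0.0 and 1.0
    reason_codes: list[str]  # codes tied to specific data features
    policy_refs: list[str]   # policies or controls implicated
    narrative: str           # short business-language summary

# Illustrative only: in practice these values are produced by the decision engine.
example = DecisionExplanation(
    decision_id="case-2026-0219-001",
    outcome="escalate",
    confidence=0.87,
    reason_codes=["R12: transaction velocity spike", "R31: first-time counterparty"],
    policy_refs=["AML-04 Unusual Activity"],
    narrative="Escalated: unusual volume with a first-time counterparty.",
)

Because the payload travels with every decision, investigators and auditors get the "why" at the moment of the "what."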

Where this becomes powerful is in governance. Embedded explainability creates a durable audit trail and makes accountability real: you can test whether explanations are consistent over time, whether they drift, whether similarly situated cases are treated consistently, and whether the system is relying on inappropriate proxies. It also reduces the “black box” tax during exams and internal reviews because your documentation is generated continuously, decision by decision, rather than recreated under a deadline. Done well, embedded explainability supports model risk management, accelerates case resolution, and builds user trust because the system does not just tell you what to do. It shows its work in a way that is usable for first-line teams and defensible for second-line and regulators.

If you have been in even a single AI governance meeting, you have heard the same reassuring words: transparency, fairness, accountability. They sound good. They also do not answer the one question your Audit Committee will ask you the minute something goes sideways: can you prove what happened, who approved it, and why the system did what it did?

That is the heart of embedded explainability for a GRC or compliance professional. It is not a debate about data science. It is about building a program that can withstand scrutiny. In a strong compliance program, “principles” are not controls. They are intentions. Regulators, prosecutors, and auditors do not award credit for intent. They want evidence of implementation and effectiveness. When you embed explainability, you are building evidence into the workflow itself, so the program produces audit-ready artifacts without heroics.

Think like an auditor, not like a vendor.

In many organizations, “explainability” is treated like a technical deliverable. Someone pulls a chart. Someone cites an algorithm. Everyone nods. Then internal audit asks a simple question: “Show me how this use case was approved, how risks were assessed, how testing was performed, and how you monitor it today.”

That is where compliance needs to reframe the conversation. For GRC, the most important explainability is process explainability:

  • Who approved the use case, and what decision impact does it have?
  • What risks were identified, and what mitigations were required?
  • What data and content sources were used, and how are they governed?
  • What testing was done, what thresholds were applied, and what failed?
  • Who monitors the system in production, and how do issues get escalated?
  • How are changes controlled, logged, and reapproved?

If you can answer those questions with documentation you can pull on demand, you are not “talking about explainability.” You are demonstrating it.

The risk that hides in plain sight: language and cultural bias

Most compliance teams understand bias as a broad concept. The operational problem manifests in a narrower, more painful way: language and cultural bias within everyday compliance workflows. Consider the real-life places your organization may be using AI or analytics: hotline intake, investigations triage, monitoring and surveillance, third-party diligence, audit planning, policy interpretation, and case summarization. Now add the facts of corporate life: multilingual reporting, non-native English narratives, regional idioms, and different cultural communication styles.

Here is the compliance risk: the system may not be “biased” in a headline-grabbing way. It may be biased in a quiet, compounding way:

  • A hotline narrative written in non-native English is scored lower for credibility.
  • Regional phrasing triggers false positives in monitoring.
  • Direct communication styles are interpreted as “aggressive” or “retaliatory.”
  • Reports from certain geographies are deprioritized because of linguistic patterns.
  • Summaries strip context from culturally specific descriptions of harm.

This is why embedded explainability matters. If the system cannot tell you why it scored and routed a case the way it did, you will not find these problems until someone outside the company points them out to you.

A compliance-led lifecycle that makes explainability real

The practical move is to treat embedded explainability as a lifecycle requirement, not a go-live checkbox. You want stage gates with documented approvals and an evidence pack that travels with the use case from intake to monitoring. Think of it as the same discipline you already apply to third parties, controls testing, and investigations: define, document, test, approve, monitor, and improve.

A simple compliance-led lifecycle looks like this:

  1. Intake and approval: What is the use case, what is the decision impact, and who is accountable?
  2. Data and language risk assessment: What data is used, what languages and regions are in scope, and what bias risks exist?
  3. Build with traceability: Document the logic, rules, prompts, and human review points.
  4. Testing: Prove the system’s outputs can be reconstructed and do not degrade across language groups.
  5. Deployment readiness: Confirm monitoring, access controls, logging, and escalation are active.
  6. Ongoing monitoring: Report drift, exceptions, overrides, and bias findings; reapprove material changes.

This is the compliance function earning its keep: not by arguing about definitions, but by building a governance machine that produces defensible evidence.
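A minimal sketch of what a stage-gate check might look like in practice, assuming a hypothetical use-case record that carries its approvals with it. The gate names mirror the six lifecycle stages above; everything else is illustrative.

from datetime import date

# Gate names mirror the six lifecycle stages above.
GATES = ["intake", "risk_assessment", "build", "testing", "deployment", "monitoring"]

def next_open_gate(use_case: dict) -> str | None:
    """Return the first gate lacking a documented, dated approval; None if all cleared."""
    approvals = use_case.get("approvals", {})
    for gate in GATES:
        entry = approvals.get(gate)
        if not entry or not entry.get("approver") or not entry.get("date"):
            return gate
    return None

use_case = {
    "name": "hotline triage scoring",
    "approvals": {
        "intake": {"approver": "CCO", "date": date(2026, 1, 5).isoformat()},
        "risk_assessment": {"approver": "Legal", "date": date(2026, 1, 20).isoformat()},
    },
}
print(next_open_gate(use_case))  # -> "build": the use case cannot advance past an unsigned gate

The design point is simple: a use case cannot move forward until the prior gate has a named approver and a date, which is exactly the evidence an auditor will ask for.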

The minimum evidence pack: what you should be able to pull on demand

If you want to operationalize embedded explainability, standardize the artifacts. Do not let every team reinvent documentation. Your minimum evidence pack should be consistent across machine learning models, rules-based analytics, LLM workflows, and decision engines.

At a minimum, you should be able to produce:

  • Use case charter: purpose, scope, decision impact, owner, risk tier, approvals;
  • Data and language risk assessment: sources, language coverage, cultural risk factors, mitigations;
  • System specification: what it is, how it works, where humans intervene;
  • Testing artifacts: bias test plan, scenario tests, results, remediation notes;
  • Explainability checklist: proof you can reconstruct inputs, steps, outputs, and rationale;
  • Deployment approval record: stage-gate sign-offs and dates;
  • Monitoring and drift reports: trends, exceptions, and escalation notes;
  • Incident and escalation log: root cause, corrective actions, closure dates; and
  • Change management log: what changed, materiality, retesting, reapproval.

If you have this, you have something most organizations still lack: a system of record for AI governance that internal and external auditors can actually test.
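One way to make that system of record testable is a simple completeness check over the nine artifacts. This is a minimal sketch with hypothetical artifact keys, not a prescribed schema; the keys correspond to the list above.

# Hypothetical artifact keys for the nine items listed above.
EVIDENCE_PACK = [
    "use_case_charter",
    "data_and_language_risk_assessment",
    "system_specification",
    "testing_artifacts",
    "explainability_checklist",
    "deployment_approval_record",
    "monitoring_and_drift_reports",
    "incident_and_escalation_log",
    "change_management_log",
]

def missing_artifacts(pack: dict) -> list[str]:
    """Return required artifacts that are absent or empty for a given use case."""
    return [name for name in EVIDENCE_PACK if not pack.get(name)]

print(missing_artifacts({"use_case_charter": "v1.2"}))  # flags the other eight artifacts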

The Bottom Line

Embedded explainability is how you turn AI governance from a values statement into a control environment. It is how you protect innovation by making it defensible. If your program can reconstruct decisions, show approvals, demonstrate testing, and document monitoring, you are not hoping you are compliant. You are ready to prove it. 

Categories
Daily Compliance News

Daily Compliance News: February 18, 2026, The Stupid Is as Stupid Does Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Just how big is Ukraine’s corruption problem? (The Independent)
  • H-1B visas and GOP racial hatred. (NYT)
  • More energy investments in Venezuela. (WSJ)
  • The Trump Administration wants history and science removed from federal parks. (Reuters)

Categories
AI Today in 5

AI Today in 5: February 18, 2026, The AI for Rural Healthcare Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. AI to transform fraud investigations. (PRNewswire)
  2. Better defensible AI oversight. (PRNewswire)
  3. What’s in your compliance gap? (Forbes)
  4. Is the AI moment here? (FRSF)
  5. Oz wants AI avatars for rural healthcare. (NPR)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Compliance Into the Weeds

Compliance into the Weeds: Truth Stranger Than Fiction: Binance, Iran, Crypto and Compliance

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore it more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly look at recent reporting on Binance that raises questions about the effectiveness of its compliance program, monitorships, and executive attitudes toward compliance.

They recap Binance’s 2023 resolution of U.S. criminal and civil matters involving money laundering and sanctions evasion. They discuss the Fortune article, which reported that Binance continued to route funds through its platform to the Iranian government in 2024 and into 2025. They highlight Mr. Zhao’s public response on X, suggesting that if investigators found misconduct, then compliance had failed to prevent it. The hosts criticize this as a misunderstanding: business units own risk, and compliance’s role is to provide systems, channels, oversight, and escalation, not to “prevent” all misconduct.

Key highlights:

  • Truth Stranger Than Fiction in Compliance
  • Binance’s 2023 Guilty Plea, $4.3B Penalty & Two Monitorships
  • Compliance Team Fallout: Investigators Fired & CCO on the Move
  • ‘If You Found It, You Failed’: Why CEOs Misunderstand Compliance
  • Iran as the Red Line: Plea Agreement Breach, Politics, and Corruption Risk
  • Will Anyone Enforce This? Rule of Law Questions and What Comes Next

Resources:

Matt in Radical Compliance

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred a Davey, a Communicator Award, and a W3 Award, all for podcast excellence.

Categories
The Hill Country Podcast

The Hill Country Podcast: Greg Faldyn: Leadership and Legacy in Rotary

Welcome to the award-winning The Hill Country Podcast. The Texas Hill Country is one of the most beautiful places on earth. In this podcast, Hill Country resident Tom Fox visits with the people and organizations that make this one of the most unique areas of Texas. In this episode, host Tom Fox speaks with Greg Faldyn, a seasoned insurance industry professional and a long-time Rotarian.

Greg, an insurance professional with over 40 years of experience and a dedicated Rotary Club member for nearly 25 years, views the 100th anniversary of Rotary in Kerrville as a landmark achievement in the organization’s enduring commitment to community service. Having played a pivotal role in organizing the celebration as the foundation chair, Greg has been instrumental in highlighting Rotary’s century-long partnerships with key local organizations, such as the Peterson Foundation and the Raphael Clinic. He proudly points to the Hill Country community’s collective resilience, particularly in the wake of events like the July 4th flood, as a testament to Rotary’s strength and impact. Passionate about engaging young professionals, Greg believes that the milestone anniversary serves not only as a celebration of past achievements but also as a call to future service and community enhancement.

Highlights include:

  • Rotary’s Centennial Celebration in Kerrville’s Community
  • Community Support through Rotary Foundation Grants
  • Rotary Club Weekly Engagement
  • Why Join Rotary?

Resources:

Rotary Club of Kerrville

Rotary District 5840

Rotary International

Other Hill Country Focused Podcasts

Hill Country Authors Podcast

Hill Country Artists Podcast

Texas Hill Country Podcast Network

Cover Art

Nancy Huffman

Categories
Great Women in Compliance

Great Women in Compliance: The New Architecture of Legal and Compliance with AI

In this episode of Great Women in Compliance, Dr. Hemma R. Lomax speaks with Sam Flynn, co-founder of Josef, about the transformation of legal and compliance functions through technology. They discuss the importance of human-centered design, the role of AI in legal architecture, and the need for trust in AI tools. Sam shares his journey from creating Myki Fines to building self-service legal solutions that bridge the access-to-justice gap. The conversation emphasizes the importance of user experience, governance practices, and the need to rethink traditional professional roles in the legal field.

Takeaways:

  • Legal and compliance functions must evolve to be more human-centered.
  • AI can significantly enhance legal decision-making processes.
  • Trust in technology is crucial for successful implementation.
  • User experience should be prioritized in legal tech solutions.
  • Automation can free up valuable time for legal professionals.
  • Access to justice is a critical issue that can be addressed with technology.
  • Rethinking traditional roles in law can lead to better outcomes.
  • Data-driven insights can improve compliance practices.
  • Collaboration between experts and end-users is essential for success.
  • Legal technology should focus on delivering real value to users.

Sound Bites:

  • “AI should unleash human potential.”
  • “Trust is the key to unlocking value.”
  • “We need to build trust in our technology.”

Chapters:

00:00 Introduction to Legal Transformation

02:32 The Journey of Sam Flynn and Myki Fines

05:30 Rethinking Legal Systems and Design

08:10 Substance Over Form in Legal Processes

10:56 The Role of AI in Legal Architecture

13:39 Building a Legal Front Door

16:24 User Experience in Compliance

18:54 Engagement and Data Utilization

21:56 The Future of Legal Workflows

24:29 Deciding Between Automation and Human Input

26:56 Navigating High-Risk Inquiries

27:50 Strategic Automation for Stakeholder Engagement

28:58 The Importance of Human Expertise in AI

30:57 Transforming Fear into Opportunity with AI

32:59 Building Trustworthy AI in Legal Settings

36:56 Governance Practices for AI Deployment

43:51 Access to Justice: Bridging Gaps with Technology

Guest Biography:

Sam Flynn is the Co-Founder and Chief Operating Officer of Josef, a legal automation platform that empowers legal and compliance teams to create reliable, self-serve tools — no coding required. In his role, Sam leads Josef’s business operations, governance, marketing, and customer success functions, scaling both product impact and organizational trust.

An ex-BigLaw litigator and experienced legal technologist, Sam has long been passionate about using technology to bridge the access-to-justice gap and elevate the delivery of legal services. In 2016, he built Myki Fines, a public-facing legal tech solution that attracted more than 60,000 users in its first month and helped catalyze reforms to unfair laws.

At Josef, Sam combines legal expertise with product and operational leadership to help teams rethink how legal and compliance work gets done — shifting from inbox-driven bottlenecks to strategic architectures that deliver decision-useful guidance at scale. He is a frequent speaker on generative AI in legal, a board member of the Center for Legal Innovation, and an advocate for human-centered legal design.

Categories
Blog

AI and Work Intensification – The Compliance Response

There is a comforting myth circulating in corporate hallways and boardrooms: if we deploy AI across governance, risk, and compliance, the work will shrink. Investigations will move faster. Monitoring will get smarter. Policies will draft themselves. Third-party diligence will become push-button. The compliance function will finally “do more with less.” That myth was challenged in a recent Harvard Business Review article, “AI Doesn’t Reduce Work—It Intensifies It,” by Aruna Ranganathan and Xingqi Maggie Ye.

The authors argue that what happens instead is work intensification. AI expands throughput, increases expectations, and generates more outputs that still require human judgment, verification, and accountability. Instead of fewer tasks, you get more tasks. Instead of simpler work, you get faster cycles, more iterations, and new forms of quality risk. For the Chief Compliance Officer (CCO) leading AI governance, this is not a side effect. It is a core operating model issue.

If compliance owns AI governance across the enterprise, compliance must also own the discipline of how humans and AI work together. I call that discipline an AI practice standard: management guidance that sets expectations for pace, quality, verification, escalation, and sustainable workload.

Today, we consider this issue as a compliance operating model challenge across all GRC workflows: policy management, investigations, hotline intake, monitoring and surveillance, third-party due diligence, regulatory change management, audit planning, training, and reporting. The tone is cautionary because the risk is real: a compliance function that mistakes AI output volume for compliance effectiveness.

The Compliance Operating Model Problem: More Output, More Review, More Risk

Compliance work is not manufacturing. It is judgment work. It requires discretion, context, and defensible decisions. AI can accelerate inputs and draft outputs, but it does not accept responsibility. The CCO does. The business does. The board does. When AI enters GRC workflows, it tends to create four pressure points:

1. Compression of timelines. If a draft can be produced in five minutes, someone will ask why it cannot be finalized in five more.

2. Explosion of options. AI generates multiple versions, scenarios, and recommendations, which expands decision load and review cycles.

3. Higher volume of “signals.” AI-enabled monitoring produces more alerts, more pattern matches, and more anomalies. Much of it will be noise. All of it requires triage.

4. Illusion of completion. Teams begin to treat a plausible AI answer as a finished work product. That is how quality defects are born.

The result is a compliance function that looks “faster” while becoming more fragile. Burnout rises. Rework increases. Errors creep into documentation. Controls become less reliable because the humans operating them are overwhelmed by the sheer volume AI makes possible.

All this means the question for the CCO is not, “How do we roll out AI?” The question is, “How do we govern the human work that AI intensifies?”

Five KPIs for Work Intensification Risk

Next, we consider five KPIs specifically designed to measure work intensification. These are board-credible, compliance-owned, and operationally measurable.

1. After-Hours Compliance Work Index

Percentage of compliance work activity occurring outside standard business hours (for example, 6 p.m. to 7 a.m.), measured across key systems (case management, GRC platform activity logs, email metadata, collaboration tool usage). This matters because AI compresses timelines and pushes work into nights and weekends. This index serves as an early warning for burnout and quality failures.
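A minimal sketch of how the index might be computed from system activity timestamps. The business-hours window follows the example above; treating weekend activity as after-hours is an added assumption, as is the timestamp source.

from datetime import datetime

BUSINESS_START, BUSINESS_END = 7, 18  # 7 a.m. to 6 p.m., per the example above

def after_hours_index(timestamps: list[datetime]) -> float:
    """Share of activity events occurring outside standard business hours."""
    if not timestamps:
        return 0.0
    after_hours = sum(
        1 for ts in timestamps
        if ts.weekday() >= 5  # count weekend activity as after-hours (assumption)
        or ts.hour < BUSINESS_START or ts.hour >= BUSINESS_END
    )
    return after_hours / len(timestamps)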

2. AI Rework Rate

Percentage of AI-assisted work products requiring material revision after human review (policies, investigation summaries, risk narratives, diligence reports). This matters because if AI increases speed but doubles rework, you are not gaining productivity. You are shifting effort downstream.

3. Cycle Time Compression vs. Quality Defect Ratio

Track cycle time reductions alongside quality defects (corrections, escalations, documentation gaps, audit findings). You can express this KPI as Cycle Time Improvement / Defect Increase. This matters because faster is not better if defects rise. This ratio keeps leadership honest.

4. Alert-to-Action Conversion Rate

Percentage of AI-generated alerts that result in a confirmed issue, investigation, remediation, or control enhancement. This matters because AI intensifies monitoring. This KPI exposes whether you are drowning in noise or generating actionable intelligence.

5. Burnout Signal Composite

A quarterly composite score built from pulse-survey measures such as fatigue, workload, and autonomy, together with attrition in compliance roles, sick leave usage trends, and employee assistance program utilization patterns. This matters because compliance effectiveness depends on people. Burnout is a control failure risk.

These five metrics give the CCO and board a shared view of whether AI is improving the compliance function or simply accelerating it toward exhaustion.
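As a rough illustration, here is how KPIs 2 through 4 might be computed from simple counts, including a worked example of the cycle-time-versus-defect ratio. The function names and threshold interpretations are assumptions, not a standard.

def rework_rate(reworked: int, total_ai_assisted: int) -> float:
    """KPI 2: share of AI-assisted work products needing material revision."""
    return reworked / total_ai_assisted if total_ai_assisted else 0.0

def compression_vs_defects(cycle_time_improvement_pct: float,
                           defect_increase_pct: float) -> float:
    """KPI 3: Cycle Time Improvement / Defect Increase. Below 1.0, quality
    is eroding faster than speed is improving."""
    if defect_increase_pct <= 0:
        return float("inf")  # faster with no added defects is unambiguously good
    return cycle_time_improvement_pct / defect_increase_pct

def alert_conversion(actioned: int, total_alerts: int) -> float:
    """KPI 4: share of AI-generated alerts resulting in a confirmed action."""
    return actioned / total_alerts if total_alerts else 0.0

# Worked example: cycles 30% faster but defects up 10% gives a ratio of 3.0.
print(compression_vs_defects(30.0, 10.0))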

How to Measure the Leading Indicators

Here is a realistic and defensible approach to measuring the leading indicators: after-hours work, cycle time, quality defects, and burnout.

After-Hours Work

  • Use system log data from the case management, GRC, and document management platforms to track timestamped activity.
  • Supplement with email and collaboration metadata to measure volume outside standard hours.
  • Report trends by team and workflow, not individuals. This is about operating model health, not surveillance.

Cycle Time

  • Establish “start” and “stop” definitions for each workflow:
    • Investigations: intake date to closure date
    • Due diligence: request date to clearance date
    • Policy updates: drafting start date to publication date
    • Regulatory change: trigger identification to implementation
  • Track AI-assisted versus non-AI-assisted cycle times to isolate the impact, as in the sketch below.
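A minimal sketch of that comparison, using illustrative records and the start/stop definitions above. The median is used rather than the mean to blunt outlier cases; the record layout is an assumption.

from datetime import date
from statistics import median

# Illustrative records: (workflow, ai_assisted, start, stop), per the
# start/stop definitions above.
cases = [
    ("investigation", True, date(2026, 1, 2), date(2026, 1, 12)),
    ("investigation", False, date(2026, 1, 3), date(2026, 1, 28)),
    ("due_diligence", True, date(2026, 1, 5), date(2026, 1, 9)),
]

def median_cycle_days(records: list, ai_assisted: bool) -> float:
    """Median days from start to stop for one cohort."""
    durations = [(stop - start).days
                 for _, ai, start, stop in records if ai == ai_assisted]
    return median(durations) if durations else 0.0

print(median_cycle_days(cases, ai_assisted=True))   # AI-assisted cohort
print(median_cycle_days(cases, ai_assisted=False))  # baseline cohort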

Quality Defects

  • Define defects as “items requiring material correction after initial completion,” including:
    • Incomplete documentation
    • Wrong risk rating or missing rationale
    • Incorrect regulatory mapping
    • Reopened cases due to insufficient analysis
    • Audit findings tied to workflow execution
  • Capture defects through QA sampling, supervisor review logs, audit results, and post-incident reviews.

Burnout Indicators

  • Run a quarterly pulse survey with 5–7 questions on workload, pace, clarity, and ability to disconnect.
  • Track voluntary attrition and vacancy duration for compliance roles.
  • Include aggregate HR indicators such as overtime trends or sick leave usage, where available.
  • Use a composite score and trend it, as in the sketch below. The trend line is what matters.
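A minimal sketch of such a composite, with illustrative (not recommended) weights. Each input is assumed to be normalized to a 0-1 scale where higher means more strain.

# Illustrative weights only; calibrate to your own program.
WEIGHTS = {
    "pulse_survey": 0.40,     # fatigue, workload, autonomy, ability to disconnect
    "attrition": 0.25,        # voluntary attrition and vacancy duration
    "sick_leave": 0.20,       # aggregate sick leave trend
    "eap_utilization": 0.15,  # employee assistance program usage pattern
}

def burnout_composite(signals: dict) -> float:
    """Weighted 0-1 composite; the quarter-over-quarter trend is what matters."""
    return sum(weight * signals.get(name, 0.0) for name, weight in WEIGHTS.items())

print(burnout_composite({"pulse_survey": 0.6, "attrition": 0.3,
                         "sick_leave": 0.4, "eap_utilization": 0.2}))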

The key is to build instrumentation without creating a culture of monitoring employees. Your goal is not to watch people. Your goal is to protect the control environment.

Adopt an Enterprise AI Practice Standard Now

For an innovation-forward company, the right move is not to slow down. The right move is to govern how you speed up. The call to action is simple and strong: adopt an enterprise AI practice standard as management guidance, owned by Compliance, implemented across all GRC workflows, measured by the five work-intensification KPIs above, and tested by internal audit and red teaming.

If you do that, you gain three things immediately:

1. A sustainable operating model

2. Defensible governance for regulators and boards

3. A compliance function that remains credible under pressure

AI can make compliance better. But only if the humans who run compliance can still breathe.