Categories
AI Today in 5

AI Today in 5: April 2, 2026, The Just Say No Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, and general interest. Sit back, enjoy a cup of morning coffee, and listen in.

Top AI stories include:

  1. Responsible AI in the regulatory framework. (Wealth Management)
  2. HHS moves AI in healthcare oversight. (GovInfo Security)
  3. Creating an AI Incident Response Plan. (National Review)
  4. Where is AI in healthcare headed? (Futurism)
  5. Saying No in GenAI projects. (FinTech Global)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.


AI Today in 5: March 27, 2026, The No to AI Data Centers Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, and general interest. Sit back, enjoy a cup of morning coffee, and listen in.

Top AI stories include:

  1. Customer service AI improving fintech. (Global Banking & Finance)
  2. GenAI for healthcare. (The Hastings Center)
  3. Local opposition is slowing data center construction. (NYT)
  4. Corporate AI adoption outpacing compliance. (The Global Legal Post)
  5. Agentic AI transforming compliance ROI. (FinTech Global)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
AI in Healthcare

AI in Healthcare: Five Healthcare AI Stories You Need to Know This Week – March 27, 2026

Welcome to AI in Healthcare in 5 Stories. This podcast is a Weekly Briefing of the five most important AI developments shaping healthcare, medicine, and life sciences. Each week, Tom Fox breaks down the latest stories in clinical innovation, regulation, privacy, compliance, patient safety, and operational transformation through a practical, business-focused lens. Designed for healthcare compliance professionals, executives, legal teams, clinicians, and industry leaders, the podcast moves beyond headlines to explain what each development means in the real world.

The top five stories for the week ending March 27, 2026, include:

  1. GenAI for healthcare. (The Hastings Center)
  2. Responsible AI in healthcare. (Cisco)
  3. How Oracle is transforming healthcare. (CloudWars)
  4. 1 in 3 adults is using chatbots for healthcare. (Modern Healthcare)
  5. AI in healthcare administration. (The AI Journal)

For more information on the use of AI in Compliance programs, Tom Fox’s new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Blog

AI Is Only as Good as the Data: What Compliance Leaders Need to Know About Data Readiness

There is an old lesson in compliance that remains evergreen: bad facts produce bad decisions. The same is true for data science: Garbage In, Garbage Out (GIGO). In the GenAI era, that lesson has a new twist. Bad data produces bad outputs at machine speed.

That is why the report, Taming the Complexity of AI Data Readiness, deserves the attention of every Chief Compliance Officer, compliance technologist, and board member who asks management, “What is our AI strategy?” The better follow-up question is, “What is our data readiness strategy?” Because the report makes one point with unmistakable clarity: the model is not the mission; the data foundation is.

For compliance professionals, this is not a technical side issue. It is central to the enterprise risk conversation. If your organization is training, testing, or deploying AI on messy, siloed, biased, stale, or poorly governed data, you are not building a competitive advantage. You are industrializing risk.

The Dirty Little Secret of Enterprise AI

The report lays out a reality that will not surprise anyone who has lived through a data initiative. Most organizations are not ready. Only 7% of survey respondents said their company’s data was completely ready for AI adoption. By contrast, 51% said it was only somewhat ready, while 27% said it was not very or not at all ready. Only 42% said their organization had high trust in its AI data, and 73% agreed their company should prioritize AI data quality more than it currently does. That should give every compliance officer pause.

We are living through a corporate rush toward GenAI, yet most companies are still stuck at the same old starting line: fragmented, inconsistent, poorly governed data. Many AI conversations inside companies still begin with use cases, copilots, and vendor demos. Far fewer begin with data lineage, data permissions, data quality, or governance maturity. That is a mistake.

If the underlying data is unreliable, the downstream output will be unreliable as well. Worse, it may arrive dressed up in polished prose, persuasive charts, or tidy summaries that create a false sense of confidence. In compliance, that is especially dangerous. Whether the use case is sanctions screening, due diligence, internal investigations, policy management, financial controls, or regulatory reporting, a bad answer delivered quickly is still a bad answer.

Bad Data Is Not Just a Tech Problem

One of the most useful parts of the report is how it frames the core barriers. The top challenge, cited by 56% of respondents, was siloed data and difficulty integrating sources. A lack of a clear data strategy followed at 44%, and data quality or bias issues at 41%. Other concerns included regulatory constraints on data use, unclear data lineage, inadequate security, and outdated data. Every one of those should sound familiar to compliance professionals.

Siloed data means incomplete visibility. Weak lineage means you may not be able to defend how an answer was generated. Bias in the data means distorted outputs. Outdated data means inaccurate decisions. Weak security exposes sensitive information. Regulatory constraints mean the company may not even have the right to use certain data the way its AI aspirations assume.

The report underscores this point. 52% of respondents identified inaccurate or biased AI results as a top concern, while 40% cited the loss of security or intellectual property. That is not abstract. That is the modern compliance risk register.

Can We Trust the Data?

A quote from Teresa Tung of Accenture in the report is worth lingering over. She said data readiness means “you can access data to see an accurate view of what is happening in your business and what you can do about it.” That is also a very good working definition of compliance intelligence.

A mature compliance program helps a company understand what is happening inside the business and what should be done in response. That means your hotline data, your gifts and entertainment data, your training metrics, your third-party files, your investigation records, and your control data all need to mean what you think they mean.

The report makes this point with a simple example. Price data is not useful unless you know whether it is in U.S. or Australian dollars, whether it is a unit or bulk price, and when it applies. The compliance equivalent is easy to imagine. A third-party risk flag is not useful unless you know what triggered it, what jurisdiction it covers, how recently it was refreshed, what source produced it, and whether anyone validated it. Context is a control. Without it, data can mislead just as easily as it can inform.
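To make the idea concrete, here is a minimal sketch of what "context is a control" might look like in data terms. The field names and staleness threshold are illustrative assumptions on my part, not anything prescribed by the report:

```python
# Hypothetical sketch: making context a control by refusing to act on a
# data point without its metadata. Field names are illustrative assumptions.

from dataclasses import dataclass
from datetime import date

@dataclass
class ThirdPartyRiskFlag:
    trigger: str          # what triggered the flag
    jurisdiction: str     # which jurisdiction it covers
    source: str           # which system or vendor produced it
    refreshed_on: date    # when it was last refreshed
    validated: bool       # whether anyone validated it

def is_actionable(flag: ThirdPartyRiskFlag, stale_after_days: int = 365) -> bool:
    """A flag without fresh, validated context should not drive a decision."""
    age_in_days = (date.today() - flag.refreshed_on).days
    return flag.validated and age_in_days <= stale_after_days
```

The point of the sketch is that the metadata travels with the value: a flag missing its context simply cannot be constructed, and a stale or unvalidated one fails the actionability check.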

Why This Is Becoming a Board-Level Issue

Another important finding is that only 23% of organizations have created a data strategy for AI adoption, although 53% are currently developing one. In other words, companies know they have a problem, but most are still working through it. This is where compliance can truly function as a business enabler.

The best compliance leaders know that governance is not the enemy of innovation. Governance is what makes innovation scalable and sustainable. If the business wants to use AI at scale, compliance should request a documented AI data strategy that addresses security, privacy, data quality, governance, accessibility, bias management, and alignment with business objectives.

The report found that security and protection of sensitive data were the most critical elements of such plans, at 59%, followed by clean, usable data quality at 46% and data governance at 41%. That is not just an IT checklist. That is a board conversation.

Bring AI to the Data

The report also discusses a concept compliance professionals need to understand: data gravity. Large and sensitive data sets tend to stay where they are because moving them is costly, slow, and risky. Increasingly, organizations are turning to architectures that bring AI processing to the data rather than moving data to the model. The report highlights approaches such as zero-copy access and containerized applications that can reduce latency, control costs, and address security and sovereignty concerns. This matters greatly for compliance.

Many regulated environments cannot simply move sensitive data across systems or borders because a vendor wants a cleaner AI workflow. Privacy laws, localization rules, contracts, and plain good judgment all cut against that approach. If AI can be brought to the data rather than copying data into multiple new environments, the organization may reduce both operational and compliance risk.

Compliance officers do not need to become cloud architects. But they do need to ask the right questions. Are we duplicating sensitive data unnecessarily? Are we crossing jurisdictional lines? Can we explain lineage, access, and security? Are we creating an AI environment that looks controlled or improvised?

Agentic AI: Real Promise, Real Risk

The report is optimistic about the potential of agentic AI for data management. 47% of respondents said their organizations believe agentic AI can solve data quality issues, and 65% expect many business processes to be augmented or replaced by agentic AI over the next two years. Experts cited benefits such as mapping data, documenting it, performing quality checks, monitoring drift, and automating routine tasks that previously required significant manual effort.

There is real promise here. Compliance teams spend far too much time on manual work that adds little strategic value. Tools that can responsibly automate mapping, documentation, testing, triage, or drift monitoring deserve serious attention.

But this is no place for magical thinking. The report is equally clear that success requires the right team: data engineers, domain experts, prompt expertise, and a product owner aligned to a business objective. That is the lesson. Agentic AI does not eliminate the need for governance. It raises the stakes for governance. If you automate poor judgment on top of poor data, you do not get efficiency. You get scalable failure.

Five Questions for Every CCO

So what should compliance leaders do now? Start with five questions.

  1. Which AI use cases in our company depend on sensitive, regulated, or high-risk data?
  2. Can we explain the lineage, quality, freshness, permissions, and context of that data?
  3. Do we have a documented AI data strategy, or are we confusing pilots with governance?
  4. Are we moving data in ways that create avoidable privacy, security, or sovereignty risks?
  5. Who owns the meaning of the data?

That final question may be the most important. The report stresses that the business must own the data so it is described properly and used correctly. Data is not just a technical asset. It is a business asset with legal, ethical, and operational meaning. Compliance should insist that meaning be defined before AI starts drawing inferences from it.

The Bottom Line

The great temptation in the AI era is to focus on the model’s brilliance. The wiser course is to focus on the data’s readiness. That is where trust begins. That is where defensibility begins. And that is where sustainable value begins. For compliance professionals, the message is plain. AI governance that ignores data readiness is not governance at all. It is wishful thinking with a dashboard.

The organizations that win with AI will not simply have more tools. They will have better data, better lineage, better controls, better discipline, and better judgment about when and how to use AI. In compliance, that is not glamorous. But it is where real success usually lives.


COSO Meets GenAI: The Internal Controls Playbook for Compliance

If you are a compliance professional looking at your company’s GenAI rollout and wondering when the grown-ups will finally arrive, I have good news. They just did.

COSO has now stepped directly into the GenAI conversation with its new paper, Achieving Effective Internal Control Over Generative AI, and that matters a great deal. For those of us in compliance, internal audit, risk, and governance, COSO is not a shiny new acronym trying to catch the latest tech train. COSO is the train schedule. It is the framework that boards, auditors, controllers, and compliance professionals already understand. And with this publication, COSO has done something very important: it has translated GenAI risk into the language of internal control. That is exactly what the market needed.

Because up until now, too much of the GenAI discussion has lived in one of two places. Either it sat in the innovation lab with people talking breathlessly about transformation, or it sat in the legal department where everyone worried, quite correctly, about hallucinations, privacy, and bias. What has often been missing is the operational bridge between aspiration and assurance. COSO gives us that bridge. It says, in effect, GenAI is not outside your control environment. It is now part of it. And if it is part of it, then it must be governed, tested, monitored, and documented like any other significant business capability.

GenAI Does Not Change the Need for Control. It Changes the Terrain.

One of the most important points in the COSO paper is that GenAI does not upend the COSO Internal Control-Integrated Framework. Rather, it changes the environment in which those controls operate. The five familiar COSO components remain the same: control environment, risk assessment, control activities, information and communication, and monitoring activities. What changes is the nature of the underlying risk. GenAI introduces probabilistic outputs, model drift, prompt injection, opaque reasoning, rapid configuration changes, and the adoption of shadow AI outside normal approval channels. That is a very useful framing for compliance officers.

It means we should stop treating AI governance as some exotic side project. If GenAI is used in operations, legal, finance, HR, procurement, investigations, or reporting, it belongs within your existing governance architecture. You do not need to invent a new religion. You need to apply the old disciplines to a new set of facts.

This is where compliance can and should lead. We understand what it means to build controls around fast-moving risk. We understand escalation, role clarity, training, monitoring, and accountability. COSO is effectively telling compliance professionals, “You already know more about governing GenAI than you think. Now apply that muscle memory with precision.”

A Capability-First Approach Is a Game Changer

The most practically useful innovation in the COSO guidance is its capability-first taxonomy. Rather than organizing AI controls by vendor, product name, or technical buzzwords, COSO focuses on what the GenAI system actually does. It identifies eight capability types: data extraction and ingestion; data transformation and integration; automated transaction processing and reconciliation; workflow orchestration; judgment, forecasting, and insight generation; AI-powered monitoring and continuous review; knowledge retrieval and summarization; and human-AI collaboration. That is enormously helpful because it is how compliance people actually work.

We do not manage risk by admiring the label on the software box. We manage risk by understanding what a tool does in a process, what can go wrong, how fast it can go wrong, and how the error propagates downstream. A GenAI tool that summarizes policies creates one set of risks. A GenAI agent that routes approvals, posts transactions, or helps shape regulatory disclosures creates another. COSO provides organizations with a common language for distinguishing among use cases and calibrating controls accordingly. That is not just elegant. It is actionable.
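As a rough illustration, the capability taxonomy could be encoded directly so that control calibration keys off what a tool does. The eight capability names come from the COSO paper; the control-intensity mapping below is my own illustrative assumption, not COSO guidance:

```python
# Sketch: COSO's eight GenAI capability types as a taxonomy for calibrating
# controls. The intensity tiers are illustrative assumptions.

from enum import Enum

class Capability(Enum):
    DATA_EXTRACTION = "data extraction and ingestion"
    DATA_TRANSFORMATION = "data transformation and integration"
    TRANSACTION_PROCESSING = "automated transaction processing and reconciliation"
    WORKFLOW_ORCHESTRATION = "workflow orchestration"
    JUDGMENT_FORECASTING = "judgment, forecasting, and insight generation"
    AI_MONITORING = "AI-powered monitoring and continuous review"
    KNOWLEDGE_RETRIEVAL = "knowledge retrieval and summarization"
    HUMAN_AI_COLLABORATION = "human-AI collaboration"

# A policy summarizer and an agent that posts transactions do not warrant
# the same scrutiny; the capability, not the product name, drives the tier.
CONTROL_INTENSITY = {
    Capability.KNOWLEDGE_RETRIEVAL: "standard",
    Capability.TRANSACTION_PROCESSING: "enhanced",
    Capability.JUDGMENT_FORECASTING: "enhanced",
}

print(CONTROL_INTENSITY[Capability.TRANSACTION_PROCESSING])  # -> enhanced
```

An inventory organized this way lets the same GenAI platform appear in several rows, once per capability it exercises, which is exactly how the risk actually presents.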

The Five Foundational Truths Every CCO Should Memorize

COSO also offers five foundational characteristics for GenAI internal control, and each should be printed and posted on the wall of every compliance office.

First, GenAI is probabilistic, not deterministic. In plain English, it can sound authoritative and still be wrong. Therefore, outputs must be treated as claims requiring validation, not facts to be accepted by default. Second, GenAI is dynamic. Models, prompts, and retrieval data evolve quickly, so controls and monitoring must keep pace. Third, GenAI is easily scalable, meaning it can scale both productivity and error rates. Fourth, it has a low barrier to entry, which is why shadow AI is such a real problem. Fifth, and perhaps most interestingly, GenAI can help govern GenAI through pattern detection, validation, and monitoring.

There is a lot packed into those five points. For compliance, the biggest takeaway is this: static governance will fail in a dynamic AI environment. Annual reviews will not cut it. A once-a-year policy refresh will not cut it. A single training session on acceptable use will not cut it. GenAI governance has to be living governance.

What COSO Says About the Control Environment

COSO starts where it should: tone, structure, and accountability. The paper says organizations need a GenAI acceptable use policy, clear ethical boundaries, oversight and accountability responsibilities, named owners for each AI tool or platform, role-based training, and accountability mechanisms tied not only to adoption but also to safety, compliance, and performance. Boards and cross-functional oversight groups need visibility into adoption, incidents, changes, and risk indicators.

That is a direct message to compliance leaders. If nobody owns the prompts, the retrieval connectors, the model configurations, the escalation path, or the approval structure, then nobody owns the risk. And in a regulatory environment moving steadily toward AI accountability, “nobody owned it” is not a defense. It is an indictment.

I particularly liked COSO’s emphasis that prompts, system prompts, and retrieval connectors should be treated as governed configurations. That is exactly right. Too many companies still treat prompting like an informal user habit rather than a control-relevant configuration choice. In a high-impact context, the prompt is not casual. It is part of the system.

Risk Assessment Must Get More Dynamic

COSO’s discussion of risk assessment is equally strong. It calls for use cases to have clearly defined objectives, acceptable and unacceptable boundaries, and success criteria. It also warns that organizations must first ask whether GenAI is even the right tool for the task. In some cases, traditional automation or deterministic systems may be safer and more reliable. The risk assessment should account for hallucinations, drift, provenance gaps, prompt injection, bias, third-party dependencies, and significant changes such as vendor updates, connector changes, or evolving regulations.

This is where compliance earns its keep. We are the ones who should be asking: What if the model changes quietly? What if the source data becomes stale? What if the retrieval layer excludes a critical policy update? What if the system routes something to the wrong approver? What if the tool is used in a context where a simpler and safer solution would do the job better?

COSO is right to emphasize scenario analysis and living risk registers. In the GenAI era, risk registers that only update annually are museum pieces.

Human-in-the-Loop Is Not Optional

When COSO turns to control activities, it gets very practical. It says GenAI outputs should be subject to human corroboration proportionate to risk, and in high-impact business, legal, or regulatory contexts, AI assistance should be segregated from authoritative decision-making. The paper also calls for version control, audit trails, access restrictions, change management, source citation requirements, segregation of duties, confidence thresholds, and documented approvals for configuration changes. That is the heart of responsible AI governance.

I was also struck by COSO’s discussion of reliance in an ICFR setting. The paper draws an important distinction between situations in which management relies on AI output as evidence of control effectiveness and situations in which a human independently re-performs the work. When true reliance exists, the evidentiary expectations rise: documented prompts, model versions, sampling rationale, exception resolution, and retained evidence.

Even beyond financial reporting, that concept is vital for compliance. The moment your team starts relying on GenAI output for sanctions reviews, due diligence summaries, monitoring alerts, investigative chronology, or policy interpretation, you have to ask a simple question: What is our evidence that this output was reliable enough to trust?

Monitoring Is Where the Real Work Begins

COSO’s final major lesson is that monitoring GenAI is not a one-and-done exercise. Organizations need continuous metrics and periodic deep reviews. They need to track precision, recall, exception volumes, latency, fairness, drift, and outcome quality. They need retraining triggers, rollback protocols, remediation logs, and playbooks for common AI control failures. COSO also makes the excellent point that in probabilistic systems, control failure may no longer be a simple pass-fail matter. Organizations may need multi-metric tolerance ranges across dimensions such as accuracy, bias, leakage, explainability, and change velocity.
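A minimal sketch of what multi-metric tolerance monitoring might look like in practice. The metric names and threshold values here are illustrative assumptions, not figures from COSO:

```python
# Hypothetical sketch: multi-metric tolerance monitoring for a GenAI control.
# Metric names and ranges are illustrative assumptions, not prescribed values.

TOLERANCES = {
    "accuracy":     (0.92, 1.00),   # acceptable range for task accuracy
    "bias_gap":     (0.00, 0.05),   # max outcome disparity across groups
    "leakage_rate": (0.00, 0.001),  # sensitive-data leakage per output
    "drift_score":  (0.00, 0.10),   # distribution shift vs. baseline
}

def evaluate_control(metrics: dict) -> list:
    """Return the metrics that fall outside their tolerance range."""
    breaches = []
    for name, (low, high) in TOLERANCES.items():
        value = metrics.get(name)
        if value is None or not (low <= value <= high):
            breaches.append(name)
    return breaches

# A breach triggers investigation and possible rollback, not a binary fail.
current = {"accuracy": 0.95, "bias_gap": 0.02,
           "leakage_rate": 0.0, "drift_score": 0.14}
print(evaluate_control(current))  # -> ['drift_score']
```

The design point is the one COSO makes: effectiveness is judged across several dimensions at once, and a single out-of-range metric routes the control to review rather than flipping a pass-fail switch.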

That is a sophisticated and realistic view. Compliance teams should take it seriously because it reflects the world we are moving into. AI control effectiveness will not be judged solely by whether a control exists on paper. It will be judged by whether the organization can show that it monitors performance, investigates deviations, remediates failures, and adapts as the technology changes.

The Bottom Line

The real genius of the COSO GenAI framework is that it takes AI out of the abstract and puts it where it belongs: inside the machinery of governance. It turns the conversation from “Do we have an AI policy?” to “Do we have effective internal control over AI use?” That is a far better question.

For compliance officers, the action items are clear. Inventory your GenAI use cases. Classify them by capability. Identify owners. Assess risk dynamically. Put human review where the stakes justify it. Govern prompts and configurations as controlled assets. Monitor continuously. And do not let your AI strategy outrun your control environment.

Because in the end, the organizations that benefit most from GenAI will not be the ones that moved fastest with the fewest guardrails. They will be the ones that figured out how to innovate with discipline. That is not bureaucracy. That is a competitive advantage.


The GenAI Playbook for Compliance

There is a question I continue to hear from compliance professionals, boards, and senior executives alike: “When will generative AI finally be good enough for us to trust it?” Bharat Anand and Andy Wu, in their recent Harvard Business Review article The GenAI Playbook for Organizations, argue that this is the wrong question.

The better question, and the one every Chief Compliance Officer should be asking right now, is this: “Where can we use GenAI effectively today, with the right controls, to make our compliance program more efficient, more resilient, and more business relevant?” This is their core insight, and they argue that leaders should stop obsessing over whether GenAI is perfect and instead focus on where it can create value now and how strategy, not speed alone, wins.

For the compliance profession, that insight lands with particular force. We are not in the business of chasing shiny objects. We are in the business of managing risk, enabling growth, and preserving trust. GenAI is not a parlor trick. It is becoming an operating reality. The question is no longer whether compliance should engage. The question is whether compliance will lead with discipline or lag while the business adopts AI without it.

Stop Asking Whether AI Is Smart. Start Asking Where Errors Matter.

One of the most useful contributions of the article is its simple yet powerful framework: evaluating GenAI use cases through two lenses. First, what is the cost of error? Second, does the task rely primarily on explicit data or on tacit human judgment? That is gold for compliance.

Too many organizations still evaluate AI in sweeping, binary terms. Either they think it is magical or too dangerous to touch. Neither position is helpful. Compliance officers need a more operational lens. We need to break work into tasks and then ask where automation is appropriate, where human oversight is essential, and where human judgment must remain firmly in control. That is exactly how mature compliance programs should approach GenAI. Not with ideology. With risk assessment.

The “No Regrets” Zone for Compliance

The article identifies a “no regrets” zone: low cost of error, explicit knowledge, and high potential for immediate deployment. Examples include summarizing documents, screening resumes, or handling routine inquiries. In compliance, many early wins live here.

Think about policy summarization, training-content adaptation, meeting-note extraction, initial hotline trend coding, third-party questionnaire triage, basic control documentation, and first-draft responses to routine business questions. None of these tasks should be delegated blindly. But many can be accelerated responsibly.

For instance, a compliance team buried under requests from procurement, HR, sales, and legal can use GenAI to produce first-pass summaries of policies, draft FAQs, organize issue logs, and identify recurring themes from employee questions. That does not replace the compliance professional. It frees that professional to focus on what matters most: judgment, influence, escalation, and strategic problem-solving.

This is where many compliance teams have been and continue to be too timid. They have waited for perfection in a space where perfection was never the benchmark. The benchmark should be whether the tool improves speed, lowers administrative friction, and allows compliance personnel to move up the value chain.

The “Quality Control” Zone Is the Compliance Sweet Spot

The article also identifies a “quality control” zone, where the knowledge is explicit but the cost of error is high. In those cases, GenAI can do substantial work, but humans must verify, review, and retain accountability. The authors cite legal drafting, software development, and financial due diligence as examples. That is the very heartland of compliance.

Consider sanctions screening narratives, third-party due diligence memos, internal investigation chronologies, risk assessment documentation, compliance testing workpapers, and board reporting drafts. These are exactly the kinds of tasks where GenAI can accelerate the heavy lifting, but should never be the final word.

This is also where compliance can bring discipline to the rest of the enterprise. The business may want speed. Compliance must insist on verified speed. A practical model is straightforward: GenAI drafts, humans review, controls document, leaders own.

That is not anti-innovation. That is responsible innovation. It is also consistent with what regulators increasingly expect: not the absence of AI, but governance around its use. Whether one looks to the DOJ’s emphasis on effective controls and continuous improvement in the Evaluation of Corporate Compliance Programs, the NIST AI Risk Management Framework, or the growing global focus on AI governance, the message is the same: effective AI governance requires continuous improvement. If your company uses AI in a consequential process, you had better know where it is being used, who is checking it, what data feeds it, and how errors are caught.

The “Human-First” Zone Must Stay Human

The article is particularly strong in its warning about tasks that require tacit knowledge and carry a high cost of error: strategy, sensitive personnel decisions, crisis leadership, and other matters where judgment, ethics, and context are central. In those cases, GenAI may support, but it should not decide. Compliance professionals should print that out and tape it to the wall.

Some activities must remain human-led. Decisions about discipline, executive accountability, remediation after a serious investigation, disclosure strategy, culture assessment, or whether a business relationship “feels wrong” despite facially acceptable paperwork are not suitable for AI-driven decision-making. They require experience, intuition, moral clarity, and often courage.

That does not mean AI has no role. It can assemble facts, surface patterns, propose draft communications, and model possible outcomes. But it cannot own the judgment. In a compliance function, the more consequential the decision, the more important it is that a human being stands behind it. That is not nostalgia. That is governance.
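The two-lens triage Anand and Wu describe can be sketched as a simple decision rule. The zone labels follow the article; the encoding itself is a hypothetical illustration, and the fallback tier for low-cost tacit tasks is my own assumption:

```python
# Hypothetical sketch of the two-lens triage from the HBR framework.
# Zone labels follow the article; the encoding is an assumption.

def classify_use_case(cost_of_error: str, knowledge: str) -> str:
    """Map a GenAI use case to a governance zone.

    cost_of_error: "low" or "high"
    knowledge:     "explicit" (documented rules and data)
                   or "tacit" (human judgment and context)
    """
    if cost_of_error == "low" and knowledge == "explicit":
        return "no regrets"        # deploy now with light oversight
    if cost_of_error == "high" and knowledge == "explicit":
        return "quality control"   # GenAI drafts, humans verify and sign off
    if cost_of_error == "high" and knowledge == "tacit":
        return "human-first"       # AI may assist, but humans decide
    return "case-by-case"          # low-cost tacit tasks: pilot with judgment

print(classify_use_case("high", "explicit"))  # -> quality control
```

Run against a compliance task list, a rule like this forces the useful conversation: policy summarization lands in "no regrets", due diligence memos in "quality control", and remediation or discipline decisions stay "human-first".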

Broad Access Without Chaos

One of the article’s more provocative arguments is that organizations should mandate broad access to GenAI tools because value creation begins when employees can experiment and discover useful applications. At the same time, the authors warn of bottlenecks that trap innovation in slow approval processes. I agree with the spirit of that point, but from a compliance perspective, there must be an important qualifier: broad access does not mean unmanaged access. This is where the compliance function can truly be a business enabler. Compliance should not be the department of “no AI.” It should be the department of “safe AI at scale.” That means several things.

  1. Build a risk-based use policy for GenAI. Employees need clear guidance on prohibited uses, approved tools, escalation triggers, and data-handling requirements.
  2. Classify use cases. Not every AI use case deserves the same scrutiny. A tool for drafting a training outline is not the same as a tool for assessing third-party bribery risk.
  3. Establish review protocols. High-risk outputs require human validation, documented sign-off, and, in some cases, legal or compliance approval.
  4. Train broadly and repeatedly. AI governance cannot live in a PDF on an intranet site. It has to be operationalized through real examples and practical scenarios.
  5. Monitor and improve. If GenAI is being used across the enterprise, compliance should have visibility into where, how, and with what effect.
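As a hypothetical illustration of points 2 and 3, the review protocol could be expressed as a risk-tiered lookup so that classification mechanically drives the required controls. The tier names and requirements here are assumptions, not a standard:

```python
# Hypothetical sketch: risk-tiered review requirements for GenAI use cases.
# Tier names and required controls are illustrative assumptions.

REVIEW_PROTOCOL = {
    "low":    {"human_review": "spot-check", "sign_off": False, "legal_review": False},
    "medium": {"human_review": "required",   "sign_off": True,  "legal_review": False},
    "high":   {"human_review": "required",   "sign_off": True,  "legal_review": True},
}

def requirements_for(use_case_tier: str) -> dict:
    """Look up the minimum controls for a classified use case."""
    return REVIEW_PROTOCOL[use_case_tier]

# A tool drafting a training outline might be "low"; a tool assessing
# third-party bribery risk would be "high".
print(requirements_for("high")["legal_review"])  # -> True
```

The value of writing the protocol down this way is that "not every use case deserves the same scrutiny" stops being a slogan and becomes an auditable rule.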

That is what a mature AI governance program looks like. It is also the same risk management protocol that every compliance professional uses daily.

Data Is the Real Compliance Story

Another important insight from the article is that competitive advantage will come not merely from adopting GenAI but from pairing it with proprietary data, redesigned workflows, and complementary organizational assets. The authors emphasize centralizing data, identifying what data is not yet being collected, and redesigning the organization around AI-enabled learning loops. For compliance, this should be a wake-up call.

Most compliance functions are sitting on a treasure trove of underused data: hotline reports, training metrics, policy attestations, third-party files, gifts and entertainment data, investigation outcomes, audit findings, HR trends, distributor analytics, and culture survey results. Yet in many companies, that information remains fragmented across systems and functions.

If compliance wants to be strategic in the AI era, it has to take data architecture seriously, not simply for reporting, but for insight. The future compliance advantage will go to organizations that can connect signals across functions and convert them into earlier detection, smarter resource allocation, and more tailored interventions. In other words, the future of compliance is not just controls. It is controls plus intelligence.

Three Questions Every CCO Should Ask This Week

So, where does this leave the compliance officer trying to lead in real time? I suggest three immediate questions. First, which compliance tasks are in the “no regrets” zone and should be piloted now? Second, which tasks sit in the “quality control” zone and require a formal human-in-the-loop process? Third, which decisions are so consequential, contextual, or values-laden that they must remain unmistakably human-first?
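Those three zones can be expressed as a minimal triage rule. The task attributes here are hypothetical placeholders; in practice each answer would come from a documented risk assessment, not a boolean flag.

```python
from enum import Enum

class Zone(Enum):
    NO_REGRETS = "pilot now"
    QUALITY_CONTROL = "formal human-in-the-loop required"
    HUMAN_FIRST = "AI may assist; a human owns the decision"

def triage(highly_consequential: bool, values_laden: bool,
           output_informs_decisions: bool) -> Zone:
    # Order matters: the human-first test runs before anything else,
    # so a consequential or values-laden task can never be auto-piloted.
    if highly_consequential or values_laden:
        return Zone.HUMAN_FIRST
    if output_informs_decisions:
        return Zone.QUALITY_CONTROL
    return Zone.NO_REGRETS
```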

If you cannot answer those questions, your company does not yet have a GenAI compliance strategy. It has experimentation without governance or caution without direction. Neither is sustainable.

The GenAI era will not reward the fastest organization. It will reward the organization that best aligns technology, governance, data, and human judgment. That is the compliance challenge. It is also a compliance opportunity. Compliance has always been about more than preventing misconduct. At its best, it helps a company make better decisions, allocate trust wisely, and compete with integrity. GenAI does not change that mission. It sharpens it. The playbook is here. The real question is whether compliance will run it.

Categories
Blog

AI Compliance as a Competitive Advantage: Turning Governance Into ROI

In too many organizations, “AI compliance” is treated like a speed bump. Something to route around, manage after launch, or outsource to a vendor deck and a policy that nobody reads. That mindset is not only outdated but also expensive. In 2026, mature AI governance is becoming a commercial differentiator because customers, regulators, employees, and business partners increasingly ask the same question: Can you prove your system is trustworthy?

The most underappreciated truth is that AI risk is not “an AI team problem.” It is a business-process problem, expressed through data, decisions, third parties, and change control. The Department of Justice Evaluation of Corporate Compliance Programs (ECCP) has never been about perfect paperwork; it has been about whether a program is designed, implemented, resourced, tested, and improved. If you can translate that posture into AI, you can convert “compliance cost” into “credibility capital.”

A cautionary backdrop shows why. The EEOC’s 2023 settlement with iTutorGroup is a case in point: automated hiring screening that disadvantages older workers can lead to legal exposure, remediation costs, and reputational damage. The details matter less than the pattern: when algorithmic decisions are not governed, the business eventually pays the bill. The compliance professional should see the pivot clearly: governance is the mechanism that lets you move fast without becoming reckless.

From a build-from-scratch, low-to-medium maturity posture, the win is not sophistication. The win is repeatability. If you build an AI governance framework aligned to NIST AI RMF (govern, map, measure, manage), structured through ISO/IEC 42001’s management-system discipline, and cognizant of EU AI Act risk tiering, you get something the business loves: a predictable path from idea to deployment. Today, I will explore five ways mature AI compliance can become a competitive advantage, each with a practical view of how a compliance-focused GenAI assistant can support business processes.

1) Sales and Customer Trust

Trust is a sales feature now, even when marketing refuses to call it that. Customers increasingly ask about data use, model behavior, security controls, and human oversight, and they are doing it in procurement questionnaires and contract negotiations. A mature governance framework lets you answer quickly, consistently, and with evidence, thereby shortening sales cycles and reducing late-stage deal friction. A compliance GenAI can support this by drafting standardized responses from approved trust artifacts such as policies, model cards, data protection impact assessments (DPIAs), and audit summaries; flagging gaps; and routing exceptions to Legal and Compliance before the business overpromises.

For compliance professionals, this lesson is even more stark, as the ‘customers’ of a corporate compliance program are your employees. Some key KPIs you can track are average time to complete AI security and compliance questionnaires; percentage of deals requiring AI-related contractual concessions; number of customer-facing AI disclosures issued with approved templates; and percentage of AI systems with current model documentation and ownership attestations.

2) Regulatory Credibility

Regulators are not impressed by ambition; controls persuade them. NIST AI RMF provides a common language to demonstrate that you mapped use cases, measured risks, and managed them over time, while ISO/IEC 42001 imposes discipline on accountability, documentation, and continual improvement. The EU AI Act’s risk-based approach adds an organizing principle: classify systems, apply controls proportionate to risk, and prove that you did it. A compliance GenAI can help by maintaining a living inventory, prompting owners to complete quarterly attestations, drafting control narratives aligned with the frameworks, and assembling regulator-ready “evidence packs” that demonstrate governance in operation rather than on paper.

For compliance professionals, this lesson is about your gap analysis. If you have not yet mapped your existing internal controls to GenAI and your broader governance framework, now is the time. Some key KPIs you can track are percentage of AI systems risk-tiered and documented; time to produce an evidence pack for a high-impact system; number of material control exceptions and time-to-remediation; and frequency of risk reviews for high-impact systems.

3) Faster Product Approvals and Safer Deployment

Speed comes from clarity, not from cutting corners. When decision rights, review thresholds, and required artifacts are defined up front, product teams stop guessing what Compliance will require at the end. That is the management-system advantage: ISO/IEC 42001 treats AI governance like a repeatable operational process with gates, owners, and records, rather than a series of one-off debates. A compliance GenAI can support the workflow by pre-screening new use-case intake forms, recommending the correct risk tier under EU AI Act concepts, suggesting required testing (bias, privacy, safety), and generating the first draft of a launch checklist that the product team can execute.

For compliance professionals, this lesson is that you must run compliance at the speed of your business operations. Some key KPIs you can track are: cycle time from AI intake to approval; percent of launches that pass on first review; number of post-launch “surprise” issues tied to missing pre-launch controls; and percentage of models with human-in-the-loop controls when required.
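The first two KPIs in that list lend themselves to direct computation from an intake log. A minimal sketch, assuming each record carries a submission date, an approval date, and a count of review rounds (the field names are illustrative, not from any particular tool):

```python
from datetime import date
from statistics import mean

def cycle_time_days(records: list[dict]) -> float:
    """Average days from AI use-case intake to approval."""
    return mean((r["approved"] - r["submitted"]).days for r in records)

def first_pass_rate(records: list[dict]) -> float:
    """Share of launches that passed on the first review round."""
    return sum(1 for r in records if r["review_rounds"] == 1) / len(records)
```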

4) Talent, Recruiting, and Internal Confidence

Top performers do not want to work in a company that treats AI like a toy and compliance like a nuisance. Mature governance creates psychological safety inside the organization: employees know what is permitted, what is prohibited, and how to raise concerns. It also improves recruiting because candidates, especially in technical roles, ask about responsible AI practices, data governance, and ethical guardrails. A compliance GenAI can support internal confidence by serving as the first-line “policy concierge,” answering questions with approved guidance, directing employees to the correct procedures, and logging common questions so Compliance can improve training and communications.

For compliance professionals, this fits squarely within the DOJ mandate for compliance to lead efforts in institutional justice and fairness. Some key KPIs you can track include training completion and comprehension metrics for AI use; the number of AI-related helpline inquiries and their resolution times; employee survey results on comfort raising AI concerns; and the percentage of AI use cases with documented business-owner accountability.

5) Lower Cost of Incidents and More Resilient Operations

AI incidents are rarely just “bad outputs.” They are process failures: poor data lineage, uncontrolled model changes, vendor opacity, missing logs, weak access controls, or no escalation path when harm appears. NIST AI RMF’s “measure” and “manage” functions emphasize monitoring, drift detection, incident response, and continuous improvement, which is precisely how you reduce the frequency and severity of failures. A compliance GenAI can support incident resilience by guiding teams through an AI incident response playbook, helping triage severity, ensuring evidence is preserved (audit logs, prompts, outputs, approvals), and generating lessons-learned reports that connect root cause to control enhancements.

For compliance professionals, this lesson is about preparedness: an AI incident deserves the same investigative rigor and root-cause discipline as any other compliance failure. Some key KPIs you can track include the number of AI incidents by severity tier; mean time to detect and mean time to remediate; the percentage of high-impact models with drift-monitoring and alert thresholds; and the percentage of third-party AI providers subject to change-control notification requirements.
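Mean time to detect and mean time to remediate can be computed from a basic incident log. The field names below are assumptions; adapt them to however your incident tracker records onset, detection, and remediation times.

```python
from statistics import mean

def mttd_mttr(incidents: list[dict]) -> dict:
    """Mean time to detect and to remediate, in hours, grouped by severity tier."""
    by_severity: dict[str, list[dict]] = {}
    for incident in incidents:
        by_severity.setdefault(incident["severity"], []).append(incident)
    return {
        sev: {
            "mttd_hours": mean(i["hours_to_detect"] for i in group),
            "mttr_hours": mean(i["hours_to_remediate"] for i in group),
        }
        for sev, group in by_severity.items()
    }
```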

What “Mature Governance” Looks Like When You Are Building From Scratch

Do not start with a 60-page policy. Start with a few non-negotiables that scale:

  • Inventory and classification: Create a single inventory of GenAI assistants, ML models, and automated decision systems. Classify them by impact using EU AI Act concepts (high-impact versus low-impact) and your own business context.
  • Accountability and decision rights: Assign an owner for each system and require periodic attestations for the highest-risk categories.
  • Standard artifacts: Use lightweight model documentation, data lineage notes, and disclosure templates. If it is not documented, it does not exist for governance.
  • Human oversight and logging: Define when human-in-the-loop is mandatory and ensure logs capture who approved what, when, and why.
  • Third-party AI controls: Contract for transparency, audit support, change notification, and security requirements. Vendor opacity is not a strategy.
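The inventory and accountability bullets above can be sketched as a small data structure plus a lapse check. The record fields and attestation intervals are illustrative assumptions, not requirements drawn from any of the frameworks cited.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystem:
    name: str
    owner: str               # accountable business owner
    risk_tier: str           # "high-impact" or "low-impact" (illustrative labels)
    last_attestation: date

def overdue_attestations(inventory: list[AISystem], today: date,
                         high_impact_days: int = 90,
                         other_days: int = 365) -> list[str]:
    """Names of systems whose owner attestation has lapsed. High-impact
    systems attest more often; the intervals here are placeholders."""
    overdue = []
    for system in inventory:
        limit = high_impact_days if system.risk_tier == "high-impact" else other_days
        if (today - system.last_attestation).days > limit:
            overdue.append(system.name)
    return overdue
```

Even a sketch this small enforces two of the non-negotiables: every system has a named owner, and the highest-risk category carries the tightest attestation cadence.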

This is where ECCP thinking helps. The question is not whether you have a policy. The question is whether the policy is operationalized, tested, and improved. That is the bridge from compliance to competitive advantage.

If you want AI compliance to be a competitive advantage, treat it like a management system that produces evidence, not like a policy library that produces comfort. When governance becomes repeatable, the business can move faster, regulators become more confident, and customers see the difference. That is not a cost center. That is credibility you can take to the bank.

Categories
AI Today in 5

AI Today in 5: February 12, 2026, The AI to the Moon Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Putting AI into your compliance workflow. (Valley Courier)
  2. GenAI and compliance. (FinTechGlobal)
  3. Musk wants to put an AI factory on the Moon. (NYT)
  4. OpenAI disbands safety teams. (TechCrunch)
  5. Is the US ready for what AI will do for jobs? (The Atlantic)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
AI Today in 5

AI Today in 5: January 14, 2026, The Apple Folds Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Apple admits defeat on AI, will use Google to power Siri. (BBC)
  2. Cross-border AI risk. (AI News)
  3. First AI-Native Compliance Platform. (PR Newswire)
  4. How will GenAI secure the trust of compliance? (FinTechGlobal)
  5. Will AI data centers eat up consumers’ electricity? (WSJ)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
AI Today in 5

AI Today in 5: November 19, 2025, The Turning No into Flow Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Of APIs and AI. (Forbes)
  2. Will 2026 redefine GenAI and compliance risk? (PR Newswire)
  3. Energy is key for AI’s next chapter. (Trading View)
  4. New report on the CEO’s Guide to AI Transformation. (AINews)
  5. Teaching students to shape AI. (BusinessInsiderAfrica)

For more information on the use of AI in Compliance programs, see my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.