Daily Compliance News

Daily Compliance News: April 22, 2026, The AI Hallucinations from Sullivan & Cromwell Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Ex-Algerian Minister of Industry jailed for corruption. (Aljazeera)
  • A wish list for John Ternus. (NYT)
  • Best 5 books on the Fed. (WSJ)
  • AI hallucinations from Sullivan & Cromwell court filing. (FT)

Interested in attending Compliance Week 2026? Click here for information and Registration. Listeners to this podcast receive a 20% discount on the event. Use the Registration Code TOMFOX20

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations on Amazon.com.

AI Today in 5

AI Today in 5: April 22, 2026, The AI Ready Lawyer Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5. All from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. The AI-ready lawyer. (Wolters Kluwer)
  2. MetaComp launches AI agent governance framework. (PR Newswire)
  3. APAC CFOs embrace AI. (Wolters Kluwer)
  4. What the AI mirror reveals about us. (BankInfoSecurity)
  5. OpenAI is providing cyber protection for banks. (FinTechMagazine)

Interested in attending Compliance Week 2026? Click here for information and Registration. Listeners to this podcast receive a 20% discount on the event. Use the Registration Code TOMFOX20

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations on Amazon.com.

Blog

Trust Is Not a Control: The Drop-In AI Audit

There is a hard truth at the center of modern AI governance that every compliance professional needs to confront: trust is not a control. For too long, organizations have approached AI oversight with a familiar but outdated mindset. They collect a vendor certification. They review a policy statement. They ask whether a third party is “aligned” with a recognized framework. Then they move on, assuming the governance box has been checked. In today’s enforcement and risk environment, that approach is no longer good enough.

The Department of Justice has repeatedly made this point in its Evaluation of Corporate Compliance Programs. The DOJ does not ask whether a company has a policy on paper. It asks whether the program is well designed, whether it is applied earnestly and in good faith, and, most importantly, whether it works in practice. That final phrase matters. Works in practice. It is the dividing line between performative governance and effective governance.

That is why every compliance program now needs a drop-in AI audit. It is not simply another diligence exercise. It is a mechanism for proving that governance is real. It is a practical third-party risk tool. And it is one of the clearest ways to operationalize the ECCP in the age of artificial intelligence.

The Problem: Third-Party AI Risk Is Moving Faster Than Oversight

Most companies do not build every AI capability internally. They rely on vendors, service providers, cloud platforms, embedded applications, analytics partners, and other third parties whose tools increasingly shape business processes and compliance outcomes. In many organizations, these third parties now influence investigations, due diligence, monitoring, onboarding, reporting, customer interactions, and internal decision-making. That creates a new class of third-party risk.

The problem is not only whether a vendor has responsible AI language in its contract or whether it can point to a certification. The problem is whether your organization can verify that the relevant controls are functioning as represented in the real-world use case affecting your business. That is where too many compliance programs still fall short.

Under the ECCP, the DOJ asks whether a company’s risk assessment is updated and informed by lessons learned. It asks whether the company has a process for managing risks presented by third parties. It asks whether controls have been tested, whether data is available to compliance personnel, and whether the company can demonstrate continuous improvement. These are not abstract questions. They go directly to how you oversee AI-enabled third parties. If your third-party AI governance begins and ends with a questionnaire and a PDF certification, you do not have evidence of governance. You have evidence of intake.

What a Drop-In Audit Really Does

A drop-in AI audit changes the question from “What does the third party say?” to “What can the third party prove?” That is a profound shift.

The value of the drop-in audit is that it brings compliance discipline directly into third-party AI oversight. Instead of accepting broad claims about safety, control, and alignment, you examine operational evidence. Instead of relying solely on design statements, you test for performance in practice. Instead of treating governance as a one-time approval event, you treat it as a repeatable audit process. In that sense, the drop-in audit becomes proof of governance.

It also becomes a far more mature third-party risk tool. You are no longer merely assessing whether a vendor appears sophisticated. You are assessing whether a third party can withstand scrutiny on the questions that matter most: scope, controls, traceability, escalation, and evidence.

And from an ECCP perspective, that is precisely the point. The DOJ has emphasized that compliance programs must move beyond paper design into operational reality. A drop-in audit is one of the few mechanisms that let you do that in a disciplined, documentable way.

From Vendor Oversight to Third-Party Governance

This discipline should not be limited only to classic vendors. The better view is to expand the concept across all third parties that provide, influence, host, or materially shape AI-enabled services. That includes software providers, outsourced service partners, embedded AI functionality in enterprise tools, cloud-based analytics environments, compliance technology vendors, and any external party whose systems affect business-critical decisions or regulated processes.

Risk does not care about the label on the contract. If the third party’s AI affects your organization’s screening, monitoring, investigations, decision support, or disclosures, the compliance risk is real. Your governance process must be equally real. This is why “trust but verify” is no longer just a slogan. It is a design principle for third-party oversight of AI.

The Core Elements of the Drop-In Audit

A strong drop-in audit has three features: sampling, contradiction testing, and escalation.

1. Sampling: Evidence of Operation, Not Merely Design

Sampling is where governance becomes tangible. A company requests specific artifacts tied to actual use cases and actual control operations. This may include scope documents, Statements of Applicability, system documentation, training data summaries, access controls, incident records, runtime logs, or evidence of human review. The point is simple: operational evidence is what matters.

This is where a compliance function moves from hearing about controls to seeing them in action. It is also where internal audit can add real value by testing whether the evidence supports the stated control environment.

2. Contradiction Testing: Where Real Risk Emerges

This is one of the most important and underused concepts in third-party AI oversight. Inconsistencies between claims and reality are where governance failures emerge. If a third party says its certification covers a given service, does the scope document confirm it? If it claims strong incident response, does the record back it up? If it represents strong human oversight, do the runtime traces show meaningful intervention or only theoretical review points?

Contradiction testing is powerful because it goes to credibility. It tests whether the governance narrative matches the operating reality. Under the ECCP, that is exactly the kind of inquiry prosecutors and regulators will care about. It speaks to effectiveness, honesty, and control discipline.

3. Escalation: Governance in Action

Governance without consequences is not governance. A drop-in audit must include clear escalation triggers. Missing evidence, mismatched certification scope, unexplained gaps, unresolved incidents, or inconsistent remediation should not be noted in isolation. They should trigger action.

That action may include enhanced diligence, contractual remediation, independent validation, temporary use restrictions, or deeper audit review. The important point is that the program responds. This is where the drop-in audit operationalizes the ECCP. It demonstrates that the company not only identifies risk but also acts on it.
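For teams that track audit findings in a simple internal tool, the escalation logic described above can be sketched in a few lines of Python. The finding types, trigger names, and actions below are illustrative assumptions, not terms drawn from the ECCP or any published framework:

```python
# Hypothetical sketch of drop-in audit escalation logic. Issue codes and
# the actions they trigger are illustrative placeholders only.
from dataclasses import dataclass

@dataclass
class Finding:
    vendor: str
    issue: str          # e.g. "missing_evidence", "scope_mismatch"
    resolved: bool = False

# Illustrative mapping of audit finding types to responsive actions
ESCALATION_ACTIONS = {
    "missing_evidence": "enhanced diligence",
    "scope_mismatch": "independent validation",
    "unresolved_incident": "temporary use restriction",
}

def escalate(findings):
    """Return the distinct actions triggered by unresolved findings."""
    return sorted({
        ESCALATION_ACTIONS.get(f.issue, "deeper audit review")
        for f in findings if not f.resolved
    })

findings = [
    Finding("VendorA", "scope_mismatch"),
    Finding("VendorA", "missing_evidence"),
    Finding("VendorB", "unexplained_gap"),  # unmapped issues still escalate
]
print(escalate(findings))
```

The design point is the default: anything the program has not anticipated still triggers deeper review rather than being noted in isolation.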

How the Drop-In Audit Maps to the ECCP

The drop-in audit aligns tightly with the DOJ’s framework for an effective compliance program. Risk assessment is addressed because the audit focuses attention on where AI-enabled third parties create actual operational and control exposure. Policies and procedures are tested because the company does not merely accept them at face value. It assesses whether the stated controls are supported by evidence. Third-party management is strengthened by making oversight continuous, risk-based, and verifiable. Testing and continuous improvement are built into the audit process, which identifies gaps, contradictions, and corrective actions. Investigation and remediation principles are reinforced by documenting, escalating, and using findings to improve the control environment.

Most importantly, the audit answers the ECCP’s central practical question: Does the program work in practice?

How the Drop-In Audit Maps to NIST AI RMF

The NIST AI Risk Management Framework provides a highly useful structure for the drop-in audit, especially through its Govern, Map, Measure, and Manage functions.

  1. Govern is reflected in defined ownership, accountability, and escalation when issues are identified.
  2. Map is reflected in understanding the third party's actual AI use case, scope, dependencies, and business impact.
  3. Measure is reflected in the use of evidence, runtime observations, contradiction testing, and performance assessment.
  4. Manage is reflected in remediation, ongoing oversight, and updates to controls based on audit findings.

In this way, the drop-in audit becomes a practical tool for taking the NIST AI RMF from concept to execution.

How the Drop-In Audit Maps to ISO/IEC 42001

ISO/IEC 42001 adds the management-system discipline that compliance programs need. Its value lies in documented scope, role clarity, control applicability, monitoring, corrective action, and continual improvement. A drop-in audit fits naturally into that structure because it tests whether those elements are visible in operation, not merely stated in documentation.

The Statement of Applicability becomes meaningful when the company verifies that the controls identified there actually correspond to the deployed service. Monitoring becomes meaningful when evidence is examined. Corrective action becomes meaningful when gaps trigger follow-up. Continual improvement becomes meaningful when findings are fed back into governance. That is why the documentation you generate should serve your board, regulators, and internal stakeholders without additional work. Producing evidence that travels is one of the most strategic benefits of this approach.

Why Every Compliance Program Needs This Now

The strategic payoff is straightforward. Strong AI governance is not a drag on innovation. It is what allows innovation to scale with trust. A drop-in audit gives compliance and internal audit a mechanism to test what matters, document their findings, and create evidence that withstands scrutiny. It moves governance from assertion to proof. It transforms third-party diligence into a repeatable, auditable process. It helps ensure that when regulators, boards, or business leaders ask how the company knows its third-party AI governance is working, there is a real answer.

Because, in the end, evidence of governance matters. Not narratives. Not slide decks. Evidence. President Reagan was right in the 1980s, and he is still right today: “Trust but verify.”

AI Today in 5

AI Today in 5: April 21, 2026, The 7 Questions You Should Ask Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5. All from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. 7 questions to ask about AI and compliance. (The News Tribune)
  2. Compliance can outsource tools to AI but not judgment. (FinTech Global)
  3. Data Authenticity and Accountability for AI. (CCI)
  4. Do AI chatbots make you stupider? (BBC)
  5. ICU nurses get AI help. (HealthcareItNews)

Interested in attending Compliance Week 2026? Click here for information and Registration. Listeners to this podcast receive a 20% discount on the event. Use the Registration Code TOMFOX20

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations on Amazon.com.

Blog

AI Disclosures, Controls, and D&O Coverage: Closing the Governance Gap Around Artificial Intelligence

A new governance gap is emerging around artificial intelligence, and it is one that Chief Compliance Officers, compliance professionals, and boards need to confront now. It sits at the intersection of three areas that too many companies still treat separately: public disclosures, internal controls, and insurance coverage. That siloed approach is no longer sustainable.

As companies speak more confidently about their AI strategies, insurers are becoming more cautious about the risks those strategies create. That tension matters. It signals that the market is beginning to see something many organizations have not yet fully addressed: when a company’s statements about AI outpace its actual governance, the exposure is not merely operational or reputational. It can become a disclosure issue, a board oversight issue, and ultimately a proof-of-governance issue under the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP).

For the compliance professional, this is not simply an insurance story. It is a compliance integration story. The question is whether the company can align its statements about AI, the controls it has in place, and the protections it believes it has in place if something goes wrong.

The New Governance Gap

Many organizations are eager to describe AI as a source of innovation, efficiency, better decision-making, or competitive advantage. Those messages increasingly appear in earnings calls, investor decks, public filings, marketing materials, and board presentations. Yet the underlying governance structures often remain immature. That disconnect is the governance gap.

It appears when management speaks broadly about responsible AI but has not built a complete inventory of AI use cases. It appears when companies discuss oversight but cannot show testing, documentation, or monitoring. It appears when boards assume that insurance will respond to AI-related claims without understanding how new policy language may narrow coverage.

This is where D&O coverage becomes so important. It is not the center of the story, but it is a revealing signal. If insurers are revisiting policy language and introducing exclusions or limitations tied to AI-related conduct, it suggests the market sees governance risk. In other words, the insurance market is sending a message: AI-related claims are no longer hypothetical, and companies that cannot demonstrate disciplined oversight may find that risk transfer is less available than they assumed.

Why the ECCP Should Be the Primary Lens

The DOJ’s ECCP remains the most useful framework for analyzing this issue because it asks exactly the right questions.

Has the company conducted a risk assessment that accounts for emerging risks? Are policies and procedures aligned with actual business practice? Are controls working in practice? Is there proper oversight, accountability, and continuous improvement? Can the company demonstrate all of this with evidence? Those are compliance questions, but they are also the right AI governance questions.

If a company makes public statements about AI capability, oversight, or reliability, the ECCP lens requires more than aspiration. It requires substantiation. Can the company show who owns the AI risk? Can it demonstrate how models or systems are tested? Can it show escalation procedures when problems arise? Can it document how AI-related decisions are monitored, reviewed, and improved over time?

If the answer is no, then the issue is not simply that the company may have overpromised. The issue is that its compliance program may not be adequately addressing a material emerging risk. That is why CCOs should view AI as a cross-functional challenge requiring integration across legal, compliance, technology, risk, audit, investor relations, and the board.

AI Disclosure Must Be Evidence-Based

One of the most practical steps a compliance function can take is to push for an evidence-based disclosure process around AI. This means that public statements about AI should not be driven solely by enthusiasm, market pressure, or executive optimism. They should be grounded in underlying documentation. If the company says it uses AI responsibly, where is the governance framework? If it claims AI improves decision-making, what testing supports that assertion? If it says it has safeguards, where are the control descriptions, monitoring results, and escalation records?

This is not about suppressing innovation. It is about ensuring that disclosure discipline keeps pace with technological ambition. For boards, this means asking harder questions before approving or relying on public AI narratives. For compliance officers, it means helping management build the evidentiary record that turns broad statements into defensible representations.

Controls Must Catch Up to Strategy

This is where the “how-to” work begins. Compliance professionals should begin by creating a structured inventory of AI use cases across the enterprise. That inventory should identify where AI is being used, what decisions it informs, what data it relies on, who owns it, and what risks it entails.

Once that inventory exists, risk tiering should follow. Not every AI use case carries the same compliance significance. A low-risk productivity tool does not need the same oversight as a system that affects investigations, third-party due diligence, customer interactions, financial reporting, or core operational decisions.

From there, the company can design controls proportionate to risk. High-impact uses of AI should have documented governance, human review where appropriate, testing protocols, escalation triggers, and monitoring requirements. The compliance team should be able to answer a simple question: where are the controls, and how do we know they work? That is the heart of the ECCP inquiry.
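The inventory-and-tiering step above can be made concrete with a short sketch. The field names and tiering rules here are illustrative assumptions, not requirements from the ECCP or any standard; your own criteria will differ:

```python
# Illustrative AI use-case inventory with coarse risk tiering.
# Fields and thresholds are assumed for the example, not prescribed anywhere.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    owner: str
    informs_decisions: bool      # affects regulated or critical decisions?
    touches_personal_data: bool

def risk_tier(uc: AIUseCase) -> str:
    """Assign a tier; high-impact uses get documented governance and review."""
    if uc.informs_decisions:
        return "high"
    if uc.touches_personal_data:
        return "medium"
    return "low"

inventory = [
    AIUseCase("third-party screening", "compliance", True, True),
    AIUseCase("meeting summarizer", "IT", False, False),
]
for uc in inventory:
    print(f"{uc.name} ({uc.owner}) -> {risk_tier(uc)}")
```

The point of even a toy model like this is that every use case gets a named owner and an explicit tier, so the answer to "where are the controls, and how do we know they work?" starts from a documented record rather than memory.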

Where NIST AI RMF and ISO/IEC 42001 Fit

This is also where the NIST AI Risk Management Framework and ISO/IEC 42001 become highly practical tools. NIST AI RMF helps organizations govern, map, measure, and manage AI risks. For compliance professionals, this provides a disciplined structure for identifying AI use cases, understanding impacts, assessing reliability, and managing response. It is especially useful in linking abstract AI risk to operational decision-making.

ISO/IEC 42001 brings management system discipline to AI governance. It focuses on defined roles, documented processes, control implementation, monitoring, internal review, and continual improvement. That makes it an excellent bridge between policy and execution. Together, these frameworks help operationalize the ECCP. The ECCP tells you what an effective compliance program should be able to demonstrate. NIST AI RMF helps structure the risk analysis. ISO 42001 helps embed those requirements into a repeatable governance process.

For CCOs, the practical lesson is clear: use these frameworks not as academic overlays, but as working tools to build ownership, documentation, testing, and accountability.

Insurance Is a Governance Input

Companies also need to stop treating insurance as an afterthought. D&O coverage should be considered a governance input, not merely a downstream purchase. If policy language is narrowing around AI-related claims, boards and compliance leaders need to understand what that means. What scenarios might raise disclosure-related allegations? Where is ambiguity in coverage? What assumptions has management made about protection that may no longer hold?

Compliance does not need to become an insurance specialist. But it does need to ensure that disclosure, governance, and risk transfer are aligned. If the company is making strong public claims about AI while carrying unexamined governance weaknesses and uncertain coverage, that is precisely the kind of mismatch that can trigger a crisis.

Closing the Gap Before It Becomes a Failure

The larger lesson is straightforward. AI governance is not simply about technology controls. It is about integration. It is about ensuring that what the company says, what it does, and what it can prove all line up. That is why the governance gap matters so much. It is the space where strategy outruns structure, where disclosure outruns evidence, and where confidence outruns control. For boards and compliance professionals, the task is to close that gap before it becomes a failure.

The companies that do this well will not necessarily be the ones moving the fastest. They will be the ones building documented, tested, monitored, and governed AI programs that stand up to regulatory scrutiny, investor pressure, and real-world disruption. That is not bureaucracy. That is the price of sustainable innovation.

AI Today in 5

AI Today in 5: April 20, 2026, The Jassy’s Rules for AI and FinTech Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5. All from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Agentic AI demands new cyber protections. (CX Today)
  2. Top markets for AI-driven AML compliance. (FinTech Global)
  3. Legal AI depends on trust, authoritative content, and workflows. (Wolters Kluwer)
  4. AI is reshaping medical device compliance. (Today’s Medical Developments)
  5. Jassy’s rules for AI fintech. (FinTech Magazine)

Interested in attending Compliance Week 2026? Click here for information and Registration. Listeners to this podcast receive a 20% discount on the event. Use the Registration Code TOMFOX20

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations on Amazon.com.

Blog

AI Concentration Risk: A New Third-Party and Operational Resilience Challenge for Compliance

For years, concentration risk was treated as someone else's problem. Procurement worried about sole-source vendors. Treasury worried about counterparty exposure. Supply chain teams worried about bottlenecks. Compliance, by contrast, often sat one step removed from those conversations. In the age of enterprise AI, that separation no longer works.

Today, AI concentration risk is a front-line compliance issue. When a company’s most important AI-enabled processes depend on a small number of cloud providers, model vendors, chip suppliers, or geographic regions, that dependency is not merely an operational detail. It is a governance decision. And when that dependency is not identified, documented, tested, and managed, it becomes evidence of weak oversight that regulators and prosecutors understand very well.

That is why Chief Compliance Officers (CCOs) need to move AI concentration risk out of the technology silo and into the compliance program. This is not simply about resilience. It is about whether the company can demonstrate, under the DOJ’s Evaluation of Corporate Compliance Programs (ECCP), that it has identified a material risk, assigned ownership, designed controls, tested those controls, and escalated what matters. In other words, AI concentration risk is now a test of whether governance is real.

Why AI Concentration Risk Belongs in Compliance

At its core, AI concentration risk arises when a company becomes overly dependent on a small number of external providers, infrastructure layers, or geographic regions to support key AI-enabled operations. This is a classic third-party risk problem because it involves reliance on outside parties for critical services. It is also an operational resilience problem because a failure at one of those chokepoints can disrupt business continuity, customer commitments, internal reporting, investigations, monitoring, or other compliance-relevant functions.

For compliance professionals, that should sound familiar. The ECCP has long required companies to identify their risk universe, tailor controls accordingly, allocate resources to higher-risk areas, and continuously assess whether those controls are working in practice. The DOJ asks whether compliance programs are well designed, adequately resourced, empowered to function effectively, and tested for real-world performance. AI concentration risk fits squarely within that framework.

If your company relies on a single model provider for third-party screening, a single cloud region for transaction monitoring, or a single AI vendor for investigation triage, then a disruption is not simply an IT problem. It may affect the company’s ability to prevent misconduct, detect red flags, escalate allegations, and maintain reliable controls. If management cannot explain those dependencies and cannot show what has been done to mitigate them, that is evidence of under-governance.

The ECCP as the Primary Lens

The ECCP provides a highly practical framework for thinking about AI concentration risk by forcing compliance professionals to ask implementation questions rather than merely conceptual ones.

  1. Has your company conducted a risk assessment that includes AI dependency and concentration? Many organizations assess AI bias, privacy, and cybersecurity risk, but far fewer assess whether a small number of vendors represent single points of failure.
  2. Has your company translated that risk assessment into policies, procedures, and controls? It is not enough to know that dependency exists. The compliance question is whether there are controls in place for vendor onboarding, backup arrangements, portability, incident escalation, contractual protections, and contingency planning.
  3. Have those controls been tested? The ECCP is clear that paper programs are not enough. A company needs to know whether its controls function in practice. If there is a multi-cloud failover plan or an alternate-model runbook, has it actually been exercised?
  4. Has ownership been assigned? The DOJ repeatedly focuses on accountability. Someone must own the risk, someone must own the mitigation plan, and someone must report it to leadership.
  5. Is there evidence? Under the ECCP, documentation matters because it shows that a company did not merely talk about governance but operationalized it. In the AI context, this means inventories, risk rankings, contracts, testing logs, escalation protocols, incident reviews, and committee reporting. It is still Document, Document, Document.

Where Compliance Should Look First

For CCOs, the best way to begin is to map AI concentration risk across three layers.

The first is the infrastructure layer. Which GPU, accelerator, or compute providers support the organization’s most important AI functions? Is there heavy dependence on a single supplier or downstream foundry chain? Even if compliance does not make technical decisions, it should understand whether there is material operational exposure concentrated in a single location.

The second is the cloud and hosting layer. Which cloud providers and regions support production AI workloads? Are critical applications concentrated in one geography or one platform? Have failover and disaster recovery been tested, or are they merely theoretical?

The third is the model and application layer. Which model vendors, API providers, or AI-enabled workflow tools sit inside key business processes? Here is where the third-party risk lens becomes especially important. If one provider supports sanctions screening, hotline triage, policy search, transaction monitoring, or investigation workflows, the disruption risk is directly relevant to compliance effectiveness.
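A three-layer map like the one above can begin as nothing more than a structured list of dependencies per workflow. Here is a minimal sketch, with hypothetical provider and workflow names, that flags any provider on which more than one compliance-critical workflow depends:

```python
# Hypothetical three-layer dependency map. Workflow, provider, and region
# names are illustrative placeholders, not real vendors.
dependencies = {
    "sanctions screening": {
        "infrastructure": ["GPU Provider X"],
        "cloud": ["Region us-east-1"],
        "model": ["Model Vendor Y"],
    },
    "hotline triage": {
        "infrastructure": ["GPU Provider X"],
        "cloud": ["Region us-east-1"],
        "model": ["Model Vendor Y"],
    },
}

def single_points_of_failure(deps):
    """Flag providers that more than one critical workflow relies on."""
    counts = {}
    for layers in deps.values():
        for providers in layers.values():
            for p in providers:
                counts[p] = counts.get(p, 0) + 1
    return sorted(p for p, n in counts.items() if n > 1)

print(single_points_of_failure(dependencies))
```

Even this toy map surfaces the core finding: two compliance-critical workflows share one chip provider, one cloud region, and one model vendor, which is exactly the chokepoint pattern the mapping exercise is meant to expose.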

This is where a CCO should work closely with procurement, legal, IT, enterprise risk, and internal audit. The goal is not to take over technology governance. The goal is to ensure that AI concentration risk is incorporated into the company’s existing compliance and third-party risk architecture.

Building Practical Controls

Your approach should be practical and programmatic. First, start with inventory and classification. You cannot govern what you have not identified. Compliance should push for an inventory of AI use cases and the vendors, cloud environments, and model providers that support them. Those use cases should then be tiered based on business criticality, regulatory sensitivity, and operational dependency.

Next, update third-party due diligence. Traditional diligence questions around financial stability, security, and legal compliance remain important, but AI vendors should also be assessed for concentration-related risks. Can data and workflows be ported? Are there fallback options? What are the provider’s subcontracting dependencies? What audit rights exist? How are outages escalated?

Then move to contract design. This is where many compliance programs can add real value. Contracts should address incident notification, business continuity, data export, transition assistance, audit rights, service levels, and escalation expectations. Where concentration is likely to become significant, enhanced contractual protections should be mandatory.

After that, build contingency runbooks. If a model provider becomes unavailable, what happens? If a cloud region goes down, how quickly can key compliance processes be rerouted? If a vendor changes pricing or access terms, what is the escalation path? These runbooks should be documented, assigned to owners, and tested.

Finally, establish escalation thresholds. Governance is strongest when the company decides in advance what degree of concentration requires mitigation. For example, if more than half of a key compliance workflow depends on a single external provider, that may trigger a review by the board or executive committee. If a single region hosts a material portion of compliance-critical AI activity, failover testing may become mandatory.
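
One way to operationalize a threshold like the "more than half" example above is a simple concentration check. The sketch below is a minimal illustration under assumed names and data shapes; real workload metrics would come from procurement or IT asset systems.

```python
# Hedged sketch: compute each provider's share of a compliance workflow
# and flag any share that exceeds a pre-agreed escalation threshold.
# The 50% default mirrors the illustrative threshold in the text.

def provider_shares(workload_by_provider: dict[str, float]) -> dict[str, float]:
    """Convert raw workload volumes into fractional shares."""
    total = sum(workload_by_provider.values())
    return {p: w / total for p, w in workload_by_provider.items()}

def escalation_flags(workload_by_provider: dict[str, float],
                     threshold: float = 0.5) -> list[str]:
    """Return providers whose share of the workflow exceeds the threshold."""
    return [p for p, share in provider_shares(workload_by_provider).items()
            if share > threshold]

# Example: one model provider handles 60% of sanctions-screening volume.
flags = escalation_flags({"Provider X": 60, "Provider Y": 25, "Provider Z": 15})
print(flags)  # ['Provider X'] -> triggers board or executive committee review
```

The value is not the arithmetic, which is trivial. It is that the threshold was decided in advance, the measurement is repeatable, and the escalation path fires automatically rather than depending on someone noticing.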

Where NIST AI RMF and ISO/IEC 42001 Help

This is where the NIST AI Risk Management Framework and ISO/IEC 42001 become highly valuable for compliance officers. They help translate high-level concern into disciplined governance.

The NIST AI RMF is organized around the Govern, Map, Measure, and Manage functions. That structure is especially useful here. Governance means assigning responsibility and setting risk appetite. Mapping means identifying where concentration exists and which business processes depend on it. Measuring means assessing the degree of dependency and resilience. Managing means putting in place mitigation, monitoring, and response mechanisms.

ISO/IEC 42001 adds an equally important management system discipline. It pushes organizations to define roles, document controls, monitor performance, conduct periodic reviews, and drive continual improvement. In other words, it helps turn AI governance into an operating system rather than a one-time project.

For compliance professionals, the lesson is clear. Use the ECCP to define what effectiveness and accountability should look like. Use the NIST AI RMF to structure the risk analysis. Use ISO/IEC 42001 to embed the resulting controls into a repeatable management process.

Proof of Governance in the AI Era

The deeper point is that AI concentration risk is no longer a hidden architecture issue. It is a test of whether the compliance function can help the enterprise identify dependencies before they fail. Under the ECCP, regulators are not simply asking whether a company had good intentions. They are asking whether it identified real risks, assigned responsibility, implemented controls, tested those controls, and learned from experience.

That is why AI concentration risk matters so much. It reveals whether the company understands how fragile its AI-enabled processes may be. It reveals whether third-party governance is keeping up with technological dependence. And it reveals whether compliance is engaged early enough to shape resilience rather than merely respond to disruption.

For the modern CCO, this is not a niche issue. It is a live example of how compliance adds value by helping the company operationalize governance before a crisis arrives.

Conclusion

In the end, AI concentration risk is not about servers, chips, or software contracts. It is about whether a company understands its vulnerabilities and has the discipline to govern them before they become failures. That is the heart of modern compliance. The issue is not whether disruption will come. The issue is whether your organization has done the hard work in advance to map dependency, build resilience, assign accountability, and prove that its controls can hold under pressure.

That is why this issue belongs squarely on the CCO’s agenda. Under the ECCP, a company must do more than claim it takes risk seriously. It must show its work. It must show that it identified the risk, assessed it, built controls around it, tested those controls, and updated them as the business evolved. The NIST AI Risk Management Framework and ISO/IEC 42001 help provide the structure. But the real challenge, and the real opportunity, belongs to compliance.

Because in the AI era, concentration risk is not merely a technical fragility. It is a governance signal. And the companies that can identify it, manage it, and document it will not only be more resilient. They will be able to demonstrate something even more valuable: that their compliance program is working exactly as it should.

Categories
AI Today in 5

AI Today in 5: April 17, 2026, The AI in Life Sciences Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. How AI is transforming life sciences. (White & Case)
  2. FCA targets AI use. (FinTech Global)
  3. AI under new GSE mandates. (HousingWire)
  4. AI-related litigation increases. (CDF Labor Law)
  5. Why are so many Americans using AI in healthcare? (PBS News)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot-What Sherlock Holmes Teaches About Risk, Ethics and Investigations on Amazon.com.

Categories
AI in Healthcare

AI in Healthcare: Five Healthcare AI Stories You Need to Know This Week – April 17, 2026

Welcome to AI in Healthcare in 5 Stories. This podcast is a weekly briefing on the five most important AI developments shaping healthcare, medicine, and life sciences. Each week, Tom Fox breaks down the latest stories on clinical innovation, regulation, privacy, compliance, patient safety, and operational transformation through a practical, business-focused lens. Designed for healthcare compliance professionals, executives, legal teams, clinicians, and industry leaders, the podcast moves beyond headlines to explain what each development means in the real world.

The top five stories for the week ending April 17, 2026, include:

  1. Why are so many Americans using AI in healthcare? (PBS News)
  2. AI requires a rethinking of healthcare architecture. (Stat News)
  3. Study finds AI misdiagnoses up to 80% of early cases. (FT)
  4. In AI, where is your PII stored? (HealthcareFinance)
  5. Increasing enforcement around AI in healthcare. (HealthcareITNews)

Categories
Blog

AI as a Force Multiplier for Compliance: From Efficiency Tool to Program Effectiveness

There is a temptation in every wave of new technology to focus first on speed. How much faster can we do the work? How many hours can we save? How many tasks can we automate? Yet for the compliance professional, those are not the right first questions. The right first question is always: does this make our compliance program more effective?

That is why the recent Moody’s discussion of GenAI is so interesting when viewed through a compliance lens. The article describes AI not simply as a productivity engine, but as a tool that changes how professionals interact with information, generate insights, and support decision-making. It emphasizes workflow transformation, role-based support, auditability, data quality, and the need for governance and human oversight. For compliance officers, that is the real story. AI can indeed make work faster. But its true promise is that it can make compliance more targeted, more consistent, more responsive, and more operationally embedded.

The Department of Justice has been telling us for years, through the Evaluation of Corporate Compliance Programs (ECCP), that effectiveness is the standard. The questions are not whether a company has a policy on the shelf or a training module in the system. The questions are whether the company has access to data, whether it uses that data, whether controls are tested, whether issues are triaged appropriately, whether lessons learned are fed back into the program, and whether the program evolves as risks change. AI, properly governed, can help answer yes to each of those questions.

AI and the Compliance Program of the Future

The Moody’s paper notes that GenAI is moving from passive, knowledge-based support toward more action-oriented solutions that can assist with complex, multi-step workflows. That observation should resonate with every Chief Compliance Officer. The future is not an AI toy that drafts emails. The future is an AI-enabled compliance architecture that helps the function move from reactive to proactive.

Consider third-party due diligence. Most compliance teams still struggle with volume, fragmentation, and prioritization. Information sits in onboarding questionnaires, sanctions screens, beneficial ownership reports, payment histories, audit findings, hotline allegations, and open-source media. The challenge is not merely gathering that information. The challenge is turning it into risk-based action. AI can help synthesize disparate information sources, surface red flags, identify missing documentation, and create a more coherent risk picture. Under the ECCP, that supports a more thoughtful, risk-based approach to third-party management.

Take investigations triage. Every mature speak-up program faces the same problem: how to distinguish between the urgent, the important, and the routine. AI can help sort allegations by subject matter, geography, potential legal exposure, prior related issues, implicated business units, and urgency indicators. That does not mean AI decides guilt, materiality, or discipline. It means AI helps compliance direct scarce investigative resources where they matter most. In ECCP terms, it strengthens case handling, responsiveness, consistency, and root-cause readiness.
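
A triage rubric of the kind described can be sketched as a weighted score. The factors and weights below are invented for illustration; in practice an AI model or an analyst-defined rubric would supply the inputs, the score only orders the queue, and humans decide outcomes.

```python
# Illustrative sketch of a weighted triage score for hotline allegations.
# Factor names and weights are hypothetical, not a recommended rubric.

WEIGHTS = {
    "legal_exposure": 5,           # potential regulatory or legal consequence
    "senior_personnel_involved": 4,
    "urgency_indicator": 4,        # e.g., ongoing conduct or safety risk
    "prior_related_issues": 3,
}

def triage_score(allegation: dict) -> int:
    """Sum the weights of the risk factors present in an allegation."""
    return sum(w for factor, w in WEIGHTS.items() if allegation.get(factor))

queue = [
    {"id": "A-101", "legal_exposure": True, "urgency_indicator": True},
    {"id": "A-102", "prior_related_issues": True},
]
# Highest-scoring allegations surface first for investigator attention.
queue.sort(key=triage_score, reverse=True)
print([a["id"] for a in queue])  # ['A-101', 'A-102']
```

The design point is the division of labor: the score directs scarce investigative resources, while determinations of guilt, materiality, and discipline remain human decisions.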

Now think about risk assessment. The best compliance risk assessments are dynamic, not annual rituals. AI can assist in identifying patterns across reports, control failures, investigation outcomes, gifts and entertainment data, third-party activity, and regulatory developments. It can help compliance professionals see concentrations of risk earlier and with greater context. In a program built around continuous improvement, that is a force multiplier.

Effectiveness, Not Mere Automation

One of the most important lessons from the Moody’s article is that the value of AI lies in supporting higher-value analytical work, not just reducing routine effort. That is exactly how compliance leaders should approach deployment.

Transaction monitoring is a good example. Many organizations already use rules-based systems, but these often produce high volumes of noise. AI can support better prioritization, pattern recognition, and anomaly detection. It can help identify clusters of conduct that might otherwise remain hidden across vendors, employees, geographies, or payment channels. But the point is not simply to clear alerts faster. The point is to make the monitoring program smarter, more risk-based, and more defensible.
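
As a toy illustration of the anomaly-detection idea layered on top of rules-based monitoring, a z-score flag over a vendor's payment history might look like the following. Real deployments would use validated models and far richer features; this only shows the prioritization concept, and the data and threshold are invented.

```python
# Minimal sketch: flag payments far from a vendor's historical mean.
# A z-score over 2.0 standard deviations is an arbitrary illustrative cutoff.
from statistics import mean, stdev

def flag_anomalies(amounts: list[float], threshold: float = 2.0) -> list[float]:
    """Return payments more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

history = [1000, 1020, 980, 1010, 990, 1005, 9500]  # one outlier payment
print(flag_anomalies(history))  # [9500]
```

Even this crude statistic illustrates why anomaly detection complements rules: a fixed rule set at, say, $10,000 would never fire on the $9,500 payment, while a deviation-based view surfaces it immediately.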

The same is true in training and communications. Too much compliance training remains generic, static, and detached from actual risk. AI opens the door to role-based, scenario-based, and even timing-based communications. A sales team in a high-risk market should not receive the same examples as procurement professionals dealing with third parties. A manager with hotline escalation responsibilities should not receive the same training as a new hire. AI can help tailor content, refresh scenarios, and improve accessibility. Under the ECCP, that supports effectiveness in training design, communications, and accessibility of guidance.

Speak-up and case management also stand to benefit. AI can help identify repeat issue patterns, detect retaliation indicators, cluster similar allegations, and flag unresolved themes across regions or functions. Done correctly, it can help compliance move from case closure to issue intelligence. That is where a hotline becomes not just a reporting channel but an early warning system.

Governance Is the Price of Admission

Here is where the compliance professional earns his or her stripes. The Moody’s piece is explicit that none of this works without robust governance, trustworthy data, transparency, documentation, validation, and human expertise remaining central to critical decisions. That is the bridge to both the NIST AI Risk Management Framework (NIST AI RMF) and ISO/IEC 42001.

NIST AI RMF gives compliance teams a practical way to think about governance, mapping, measurement, and management. ISO/IEC 42001 provides a management-system structure for implementing AI governance in an enterprise setting. Together with the ECCP, they provide a powerful architecture. The ECCP asks whether your compliance program works. NIST AI RMF helps define and manage AI risk. ISO/IEC 42001 helps operationalize governance and accountability.

What does that mean on the ground for your compliance regime?

It means every AI use case in compliance should have a defined business purpose, an identified owner, approved data sources, documented limitations, escalation criteria, testing protocols, and monitoring for drift or unintended consequences. It means AI outputs should be reviewable. It means prompt logs, source provenance, and validation results should be retained where appropriate. It means employees should know when they are permitted to rely on AI and when human review is mandatory. It means there must be clear boundaries around privacy, privilege, confidentiality, bias, and record retention.
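
Those documentation expectations can be captured in a simple record structure with a completeness check. The schema below is an assumption for illustration, not a standard; the point is that "documented" becomes testable rather than aspirational.

```python
# Hedged sketch: a governance record per AI use case, with a check for
# undocumented elements. Field names are illustrative, not a prescribed schema.
from dataclasses import dataclass, field, fields

@dataclass
class UseCaseGovernanceRecord:
    business_purpose: str = ""
    owner: str = ""
    approved_data_sources: list = field(default_factory=list)
    documented_limitations: str = ""
    escalation_criteria: str = ""
    testing_protocol: str = ""
    drift_monitoring: str = ""

    def missing_fields(self) -> list[str]:
        """List governance elements not yet documented for this use case."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

record = UseCaseGovernanceRecord(
    business_purpose="Third-party screening triage",
    owner="Compliance Analytics Lead",
)
print(record.missing_fields())  # the five elements still undocumented
```

A record like this also gives internal audit something concrete to sample: any use case with a non-empty `missing_fields()` result is, by definition, not yet fully governed.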

Most of all, it means compliance should resist the easy sales pitch that AI is a substitute for professional judgment. It is not. It is a force multiplier for judgment.

The Board and Senior Management Imperative

Boards and senior leaders should be asking a straightforward question: are we using AI to make compliance more effective, or are we simply using it to do old tasks faster? Those are not the same thing. A mature answer would include at least five elements. First, a risk-based inventory of compliance AI use cases. Second, governance over data quality and model performance. Third, defined human-review thresholds for consequential decisions. Fourth, ongoing monitoring and periodic validation. Fifth, a feedback loop so lessons from investigations, audits, and operations improve the system over time.

That is very much in line with both the ECCP and the Moody’s article’s emphasis on verifiable data, decision auditability, and governance at scale.

Five Lessons Learned

  1. Start with effectiveness, not efficiency. If AI only helps you do low-value tasks faster, you have not transformed compliance. Use it where it improves risk identification, triage, analysis, and action.
  2. Build around the ECCP. The DOJ already gave compliance professionals the framework. Use AI to strengthen risk assessment, third-party management, investigations, training, and continuous improvement.
  3. Govern the data before you celebrate the tool. Bad data, undocumented prompts, or unvalidated outputs will undermine trust. Governance over data provenance and output review is essential.
  4. Keep humans in the loop where it matters. AI can assist with pattern recognition, drafting, prioritization, and synthesis. It should not replace judgment on materiality, discipline, escalation, privilege, or remediation.
  5. Treat AI as part of your compliance operating model. This is not an innovation side project. It should be documented, tested, monitored, and improved like any other core compliance process.

The bottom line is this: AI offers compliance functions a genuine opportunity to become more effective, more focused, and more business relevant. But that opportunity only becomes real when it is grounded in governance, disciplined by the ECCP, and supported by frameworks like NIST AI RMF and ISO/IEC 42001. Done right, AI will not diminish the role of the compliance professional. It will elevate it.