The Ethics Experts

Episode 249 – Joe Murphy (Part 3)

In this episode of The Ethics Experts, Nick Gallo welcomes Joe Murphy.

Joe Murphy, CCEP, is the editor of Compliance and Ethics: Ideas & Answers, a weekly newsletter for compliance and ethics professionals around the world. For over 45 years, Joe has been a tireless champion of compliance and ethics in organizations and has done compliance work on six continents. He has published over 100 articles and given over 200 presentations in 21 countries. Joe is the author of 501 Ideas for Your Compliance & Ethics Program and A Compliance & Ethics Program on a Dollar a Day. He is a Certified Compliance & Ethics Professional and a former member of the board of the Society of Corporate Compliance & Ethics. Joe was named one of The National Law Journal’s 50 Governance, Risk and Compliance Trailblazers and Pioneers in 2014 and received SCCE’s Compliance and Ethics Award. He has been recognized as a lifetime member of the Australian Compliance Institute.

Connect with Joe on LinkedIn

AI Today in 5

AI Today in 5: April 20, 2026, The Jassy’s Rules for AI and FinTech Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five AI stories from the business world, compliance, ethics, risk management, leadership, or general interest.

Top AI stories include:

  1. Agentic AI demands new cyber protections. (CX Today)
  2. Top markets for AI-driven AML compliance. (FinTech Global)
  3. Legal AI depends on trust, authoritative content, and workflows. (Wolters Kluwer)
  4. AI is reshaping medical device compliance. (Today’s Medical Developments)
  5. Jassy’s rules for AI fintech. (FinTech Magazine)

Interested in attending Compliance Week 2026? Click here for information and Registration. Listeners to this podcast receive a 20% discount on the event. Use the Registration Code TOMFOX 20

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Daily Compliance News

Daily Compliance News: April 20, 2026, The ABC is Good Politics Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Anti-bribery isn’t just good business, it’s good politics. (TNR)
  • The bears ate my car. (NYT)
  • TACO caves in on Anthropic. (WSJ)
  • Deutsche Bank reports more potential Russian sanction violations. (FT)

Interested in attending Compliance Week 2026? Click here for information and Registration. Listeners to this podcast receive a 20% discount on the event. Use the Registration Code TOMFOX 20

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

FCPA Compliance Report

FCPA Compliance Report: Vince Walden on AI, Digital Assistants, and ROI at Compliance Week 2026

In this episode, Tom Fox welcomes Vince Walden, President of konaAI, to discuss his two panels at Compliance Week 2026 and the state of AI in compliance.

For the panel on AI and the compliance workforce, Vince argues that jobs are generally safe because AI is best deployed as “digital assistants,” not digital employees: tools that handle repetitive tasks like data pulls and third-party due diligence while keeping the “expert in the loop.” He plans to show real use-case examples.

For the ROI panel, Vince and his co-panelists will discuss measuring impact through productivity gains, cost savings, faster turnaround on due diligence, and expanded compliance capabilities such as culture assessments, training, and transaction monitoring. Vince also links AI analytics to detecting fraud, waste, and abuse, citing a potential $35 million vendor abuse recovery, and explains why Compliance Week remains a top conference for regulator and peer benchmarking.

Key highlights:

  • AI Workforce
  • Digital Assistants in Action
  • Measuring Compliance ROI
  • Fraud, Waste, and Abuse
  • Affordable Analytics Wins
  • Why Attend Compliance Week

Resources:

Vince Walden on LinkedIn

konaAI

Compliance Week 2026, click here for information and Registration

Listeners to this podcast receive a 20% discount on the event. Use the Registration Code TOMFOX 20

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

For more information on the use of AI in compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Blog

AI Concentration Risk: A New Third-Party and Operational Resilience Challenge for Compliance

For years, concentration risk was treated as someone else’s problem. Procurement worried about sole-source vendors. Treasury worried about counterparty exposure. Supply chain teams worried about bottlenecks. Compliance, by contrast, often sat one step removed from those conversations. In the age of enterprise AI, that separation no longer works.

Today, AI concentration risk is a front-line compliance issue. When a company’s most important AI-enabled processes depend on a small number of cloud providers, model vendors, chip suppliers, or geographic regions, that dependency is not merely an operational detail. It is a governance decision. And when that dependency is not identified, documented, tested, and managed, it becomes evidence of weak oversight that regulators and prosecutors understand very well.

That is why Chief Compliance Officers (CCOs) need to move AI concentration risk out of the technology silo and into the compliance program. This is not simply about resilience. It is about whether the company can demonstrate, under the DOJ’s Evaluation of Corporate Compliance Programs (ECCP), that it has identified a material risk, assigned ownership, designed controls, tested those controls, and escalated what matters. In other words, AI concentration risk is now a test of whether governance is real.

Why AI Concentration Risk Belongs in Compliance

At its core, AI concentration risk arises when a company becomes overly dependent on a small number of external providers, infrastructure layers, or geographic regions to support key AI-enabled operations. This is a classic third-party risk problem because it involves reliance on outside parties for critical services. It is also an operational resilience problem because a failure at one of those chokepoints can disrupt business continuity, customer commitments, internal reporting, investigations, monitoring, or other compliance-relevant functions.

For compliance professionals, that should sound familiar. The ECCP has long required companies to identify their risk universe, tailor controls accordingly, allocate resources to higher-risk areas, and continuously assess whether those controls are working in practice. The DOJ asks whether compliance programs are well designed, adequately resourced, empowered to function effectively, and tested for real-world performance. AI concentration risk fits squarely within that framework.

If your company relies on a single model provider for third-party screening, a single cloud region for transaction monitoring, or a single AI vendor for investigation triage, then a disruption is not simply an IT problem. It may affect the company’s ability to prevent misconduct, detect red flags, escalate allegations, and maintain reliable controls. If management cannot explain those dependencies and cannot show what has been done to mitigate them, that is evidence of under-governance.

The ECCP as the Primary Lens

The ECCP provides a highly practical framework for thinking about AI concentration risk by forcing compliance professionals to ask implementation questions rather than merely conceptual ones.

  1. Has your company conducted a risk assessment that includes AI dependency and concentration? Many organizations assess AI bias, privacy, and cybersecurity risk, but far fewer assess whether a small number of vendors represent single points of failure.
  2. Has your company translated that risk assessment into policies, procedures, and controls? It is not enough to know that dependency exists. The compliance question is whether there are controls in place for vendor onboarding, backup arrangements, portability, incident escalation, contractual protections, and contingency planning.
  3. Have those controls been tested? The ECCP is clear that paper programs are not enough. A company needs to know whether its controls function in practice. If there is a multi-cloud failover plan or an alternate-model runbook, has it actually been exercised?
  4. Has ownership been assigned? The DOJ repeatedly focuses on accountability. Someone must own the risk, someone must own the mitigation plan, and someone must report it to leadership.
  5. Is there evidence? Under the ECCP, documentation matters because it shows that a company did not merely talk about governance but operationalized it. In the AI context, this means inventories, risk rankings, contracts, testing logs, escalation protocols, incident reviews, and committee reporting. It is still Document, Document, Document.

Where Compliance Should Look First

For CCOs, the best way to begin is to map AI concentration risk across three layers.

The first is the infrastructure layer. Which GPU, accelerator, or compute providers support the organization’s most important AI functions? Is there heavy dependence on a single supplier or downstream foundry chain? Even if compliance does not make technical decisions, it should understand whether there is material operational exposure concentrated in a single location.

The second is the cloud and hosting layer. Which cloud providers and regions support production AI workloads? Are critical applications concentrated in one geography or one platform? Have failover and disaster recovery been tested, or are they merely theoretical?

The third is the model and application layer. Which model vendors, API providers, or AI-enabled workflow tools sit inside key business processes? Here is where the third-party risk lens becomes especially important. If one provider supports sanctions screening, hotline triage, policy search, transaction monitoring, or investigation workflows, the disruption risk is directly relevant to compliance effectiveness.
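To make the three layers concrete, here is a minimal sketch in Python of what a layered dependency map could look like. Every provider name, process, and field in it is a hypothetical illustration, not a prescribed schema or a real vendor.

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum

class Layer(Enum):
    """The three layers described above."""
    INFRASTRUCTURE = "infrastructure"        # GPU, accelerator, compute providers
    CLOUD_HOSTING = "cloud_hosting"          # cloud platforms and regions
    MODEL_APPLICATION = "model_application"  # model vendors, APIs, workflow tools

@dataclass
class Dependency:
    """One external dependency behind a compliance-relevant AI process."""
    process: str            # e.g., "sanctions screening"
    provider: str           # hypothetical vendor name
    layer: Layer
    region: str             # geographic concentration matters too
    failover_tested: bool = False

# Hypothetical entries, purely for illustration.
dependency_map = [
    Dependency("sanctions screening", "ModelVendorA", Layer.MODEL_APPLICATION, "us-east"),
    Dependency("transaction monitoring", "CloudX", Layer.CLOUD_HOSTING, "us-east", True),
    Dependency("investigation triage", "ModelVendorA", Layer.MODEL_APPLICATION, "us-east"),
]

# A first question for the map: which providers sit behind more than one
# compliance-critical process?
for provider, count in Counter(d.provider for d in dependency_map).items():
    if count > 1:
        print(f"{provider} supports {count} compliance-critical processes")
```

Even a toy map like this makes single points of failure visible: one model vendor behind two compliance processes, and every entry sitting in the same region.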

This is where a CCO should work closely with procurement, legal, IT, enterprise risk, and internal audit. The goal is not to take over technology governance. The goal is to ensure that AI concentration risk is incorporated into the company’s existing compliance and third-party risk architecture.

Building Practical Controls

Your approach should be practical and programmatic. First, start with inventory and classification. You cannot govern what you have not identified. Compliance should push for an inventory of AI use cases and the vendors, cloud environments, and model providers that support them. Those use cases should then be tiered based on business criticality, regulatory sensitivity, and operational dependency.
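As one illustration of that tiering step, the sketch below scores each inventoried use case on the three axes just mentioned. The 1-to-3 scales, cutoffs, and tier labels are arbitrary placeholders; a real program would calibrate them through its own risk assessment.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """An inventoried AI use case, scored on the three tiering axes."""
    name: str
    business_criticality: int    # 1 (low) to 3 (high), illustrative scale
    regulatory_sensitivity: int  # 1 to 3
    operational_dependency: int  # 1 to 3; 3 = single provider, no fallback

def tier(use_case: AIUseCase) -> str:
    """Map the combined score to a governance tier (placeholder cutoffs)."""
    score = (use_case.business_criticality
             + use_case.regulatory_sensitivity
             + use_case.operational_dependency)
    if score >= 8:
        return "Tier 1: enhanced controls, board visibility"
    if score >= 5:
        return "Tier 2: standard third-party controls"
    return "Tier 3: baseline monitoring"

# Hypothetical inventory entries, for illustration only.
inventory = [
    AIUseCase("third-party screening", business_criticality=3,
              regulatory_sensitivity=3, operational_dependency=3),
    AIUseCase("internal policy search", business_criticality=1,
              regulatory_sensitivity=1, operational_dependency=2),
]
for uc in inventory:
    print(f"{uc.name}: {tier(uc)}")
```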

Next, update third-party due diligence. Traditional diligence questions around financial stability, security, and legal compliance remain important, but AI vendors should also be assessed for concentration-related risks. Can data and workflows be ported? Are there fallback options? What are the provider’s subcontracting dependencies? What audit rights exist? How are outages escalated?

Then move to contract design. This is where many compliance programs can add real value. Contracts should address incident notification, business continuity, data export, transition assistance, audit rights, service levels, and escalation expectations. Where concentration is likely to become significant, enhanced contractual protections should be mandatory.

After that, build contingency runbooks. If a model provider becomes unavailable, what happens? If a cloud region goes down, how quickly can key compliance processes be rerouted? If a vendor changes pricing or access terms, what is the escalation path? These runbooks should be documented, assigned to owners, and tested.

Finally, establish escalation thresholds. Governance is strongest when the company decides in advance what degree of concentration requires mitigation. For example, if more than half of a key compliance workflow depends on a single external provider, that may trigger a review by the board or executive committee. If a single region hosts a material portion of compliance-critical AI activity, failover testing may become mandatory.
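Here is a minimal sketch of such a pre-agreed threshold check, using the illustrative 50% trigger from the example above. The limits, like the workflow figures, are placeholders to be set by the board or risk committee, not recommendations.

```python
def escalation_triggers(workflow: str,
                        provider_share: float,
                        region_share: float,
                        provider_limit: float = 0.5,
                        region_limit: float = 0.5) -> list[str]:
    """Return the pre-agreed triggers a workflow has tripped.

    The 0.5 defaults mirror the illustrative "more than half" threshold
    above; real limits belong to the board or risk committee.
    """
    triggers = []
    if provider_share > provider_limit:
        triggers.append(f"{workflow}: {provider_share:.0%} on one provider; "
                        "board or executive committee review required")
    if region_share > region_limit:
        triggers.append(f"{workflow}: {region_share:.0%} in one region; "
                        "failover testing mandatory")
    return triggers

# Hypothetical figures, for illustration only.
for trigger in escalation_triggers("transaction monitoring",
                                   provider_share=0.7, region_share=0.9):
    print(trigger)
```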

Where NIST AI RMF and ISO/IEC 42001 Help

This is where the NIST AI Risk Management Framework and ISO/IEC 42001 become highly valuable for compliance officers. They help translate high-level concern into disciplined governance.

The NIST AI RMF is organized around four functions: Govern, Map, Measure, and Manage. That structure is especially useful here. Governing means assigning responsibility and setting risk appetite. Mapping means identifying where concentration exists and which business processes depend on it. Measuring means assessing the degree of dependency and resilience. Managing means putting in place mitigation, monitoring, and response mechanisms.

ISO/IEC 42001 adds an equally important management system discipline. It pushes organizations to define roles, document controls, monitor performance, conduct periodic reviews, and drive continual improvement. In other words, it helps turn AI governance into an operating system rather than a one-time project.

For compliance professionals, the lesson is clear. Use the ECCP to define what effectiveness and accountability should look like. Use the NIST AI RMF to structure the risk analysis. Use ISO/IEC 42001 to embed the resulting controls into a repeatable management process.

Proof of Governance in the AI Era

The deeper point is that AI concentration risk is no longer a hidden architecture issue. It is a test of whether the compliance function can help the enterprise identify dependencies before they fail. Under the ECCP, regulators are not simply asking whether a company had good intentions. They are asking whether it identified real risks, assigned responsibility, implemented controls, tested those controls, and learned from experience.

That is why AI concentration risk matters so much. It reveals whether the company understands how fragile its AI-enabled processes may be. It reveals whether third-party governance is keeping up with technological dependence. And it reveals whether compliance is engaged early enough to shape resilience rather than merely respond to disruption.

For the modern CCO, this is not a niche issue. It is a live example of how compliance adds value by helping the company operationalize governance before a crisis arrives.

Conclusion

In the end, AI concentration risk is not about servers, chips, or software contracts. It is about whether a company understands its vulnerabilities and has the discipline to govern them before they become failures. That is the heart of modern compliance. The issue is not whether disruption will come. The issue is whether your organization has done the hard work in advance to map dependency, build resilience, assign accountability, and prove that its controls can hold under pressure.

That is why this issue belongs squarely on the CCO’s agenda. Under the ECCP, a company must do more than claim it takes risk seriously. It must show its work. It must show that it identified the risk, assessed it, built controls around it, tested those controls, and updated them as the business evolved. The NIST AI Risk Management Framework and ISO/IEC 42001 help provide the structure. But the real challenge, and the real opportunity, belongs to compliance.

Because in the AI era, concentration risk is not merely a technical fragility. It is a governance signal. And the companies that can identify it, manage it, and document it will not only be more resilient. They will be able to demonstrate something even more valuable: that their compliance program is working exactly as it should.