Categories
Blog

AI Concentration Risk: A New Third-Party and Operational Resilience Challenge for Compliance

For years, concentration risk was treated as someone else’s problem. Procurement worried about sole-source vendors. Treasury worried about counterparty exposure. Supply chain teams worried about bottlenecks. Compliance, by contrast, often sat one step removed from those conversations. In the age of enterprise AI, that separation no longer works.

Today, AI concentration risk is a front-line compliance issue. When a company’s most important AI-enabled processes depend on a small number of cloud providers, model vendors, chip suppliers, or geographic regions, that dependency is not merely an operational detail. It is a governance decision. And when that dependency is not identified, documented, tested, and managed, it becomes evidence of weak oversight that regulators and prosecutors understand very well.

That is why Chief Compliance Officers (CCOs) need to move AI concentration risk out of the technology silo and into the compliance program. This is not simply about resilience. It is about whether the company can demonstrate, under the DOJ’s Evaluation of Corporate Compliance Programs (ECCP), that it has identified a material risk, assigned ownership, designed controls, tested those controls, and escalated what matters. In other words, AI concentration risk is now a test of whether governance is real.

Why AI Concentration Risk Belongs in Compliance

At its core, AI concentration risk arises when a company becomes overly dependent on a small number of external providers, infrastructure layers, or geographic regions to support key AI-enabled operations. This is a classic third-party risk problem because it involves reliance on outside parties for critical services. It is also an operational resilience problem because a failure at one of those chokepoints can disrupt business continuity, customer commitments, internal reporting, investigations, monitoring, or other compliance-relevant functions.

For compliance professionals, that should sound familiar. The ECCP has long required companies to identify their risk universe, tailor controls accordingly, allocate resources to higher-risk areas, and continuously assess whether those controls are working in practice. The DOJ asks whether compliance programs are well designed, adequately resourced, empowered to function effectively, and tested for real-world performance. AI concentration risk fits squarely within that framework.

If your company relies on a single model provider for third-party screening, a single cloud region for transaction monitoring, or a single AI vendor for investigation triage, then a disruption is not simply an IT problem. It may affect the company’s ability to prevent misconduct, detect red flags, escalate allegations, and maintain reliable controls. If management cannot explain those dependencies and cannot show what has been done to mitigate them, that is evidence of under-governance.

The ECCP as the Primary Lens

The ECCP provides a highly practical framework for thinking about AI concentration risk by forcing compliance professionals to ask implementation questions rather than merely conceptual ones.

  1. Has your company conducted a risk assessment that includes AI dependency and concentration? Many organizations assess AI bias, privacy, and cybersecurity risk, but far fewer assess whether a small number of vendors represent single points of failure.
  2. Has your company translated that risk assessment into policies, procedures, and controls? It is not enough to know that dependency exists. The compliance question is whether there are controls in place for vendor onboarding, backup arrangements, portability, incident escalation, contractual protections, and contingency planning.
  3. Have those controls been tested? The ECCP is clear that paper programs are not enough. A company needs to know whether its controls function in practice. If there is a multi-cloud failover plan or an alternate-model runbook, has it actually been exercised?
  4. Has ownership been assigned? The DOJ repeatedly focuses on accountability. Someone must own the risk, someone must own the mitigation plan, and someone must report it to leadership.
  5. Is there evidence? Under the ECCP, documentation matters because it shows that a company did not merely talk about governance but operationalized it. In the AI context, this means inventories, risk rankings, contracts, testing logs, escalation protocols, incident reviews, and committee reporting. It is still Document, Document, Document.

Where Compliance Should Look First

For CCOs, the best way to begin is to map AI concentration risk across three layers.

The first is the infrastructure layer. Which GPU, accelerator, or compute providers support the organization’s most important AI functions? Is there heavy dependence on a single supplier or downstream foundry chain? Even if compliance does not make technical decisions, it should understand whether there is material operational exposure concentrated in a single location.

The second is the cloud and hosting layer. Which cloud providers and regions support production AI workloads? Are critical applications concentrated in one geography or one platform? Have failover and disaster recovery been tested, or are they merely theoretical?

The third is the model and application layer. Which model vendors, API providers, or AI-enabled workflow tools sit inside key business processes? Here is where the third-party risk lens becomes especially important. If one provider supports sanctions screening, hotline triage, policy search, transaction monitoring, or investigation workflows, the disruption risk is directly relevant to compliance effectiveness.

This is where a CCO should work closely with procurement, legal, IT, enterprise risk, and internal audit. The goal is not to take over technology governance. The goal is to ensure that AI concentration risk is incorporated into the company’s existing compliance and third-party risk architecture.

Building Practical Controls

Your approach should be practical and programmatic. First, start with inventory and classification. You cannot govern what you have not identified. Compliance should push for an inventory of AI use cases and the vendors, cloud environments, and model providers that support them. Those use cases should then be tiered based on business criticality, regulatory sensitivity, and operational dependency.
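The inventory-and-tiering step above can be sketched in code. This is a minimal illustration only, not a prescribed methodology: the use-case names, the three-tier scheme, and the scoring rule are all hypothetical assumptions.

```python
from dataclasses import dataclass

# Hypothetical tiering sketch. The fields and tier rules below are
# illustrative assumptions, not a standard or recommended scheme.

@dataclass
class AIUseCase:
    name: str
    vendor: str                 # model or API provider
    cloud_region: str           # primary hosting region
    business_critical: bool     # disruption halts a key process
    regulatory_sensitive: bool  # supports a compliance obligation
    has_fallback: bool          # tested alternate provider or region

def tier(uc: AIUseCase) -> str:
    """Assign a governance tier from criticality and dependency."""
    if uc.business_critical and uc.regulatory_sensitive and not uc.has_fallback:
        return "Tier 1"  # highest oversight: board-level reporting
    if uc.business_critical or uc.regulatory_sensitive:
        return "Tier 2"  # enhanced diligence and contract terms
    return "Tier 3"      # standard vendor management

inventory = [
    AIUseCase("Sanctions screening", "VendorA", "us-east-1", True, True, False),
    AIUseCase("Policy search", "VendorB", "eu-west-1", False, False, True),
]

for uc in inventory:
    print(f"{uc.name}: {tier(uc)}")
```

Even a simple structured inventory like this gives compliance something the ECCP expects: a documented, repeatable basis for deciding which dependencies get the most scrutiny.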

Next, update third-party due diligence. Traditional diligence questions around financial stability, security, and legal compliance remain important, but AI vendors should also be assessed for concentration-related risks. Can data and workflows be ported? Are there fallback options? What are the provider’s subcontracting dependencies? What audit rights exist? How are outages escalated?

Then move to contract design. This is where many compliance programs can add real value. Contracts should address incident notification, business continuity, data export, transition assistance, audit rights, service levels, and escalation expectations. Where concentration is likely to become significant, enhanced contractual protections should be mandatory.

After that, build contingency runbooks. If a model provider becomes unavailable, what happens? If a cloud region goes down, how quickly can key compliance processes be rerouted? If a vendor changes pricing or access terms, what is the escalation path? These runbooks should be documented, assigned to owners, and tested.

Finally, establish escalation thresholds. Governance is strongest when the company decides in advance what degree of concentration requires mitigation. For example, if more than half of a key compliance workflow depends on a single external provider, that may trigger a review by the board or executive committee. If a single region hosts a material portion of compliance-critical AI activity, failover testing may become mandatory.
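Pre-set thresholds like these lend themselves to a simple, automatable check. The sketch below is illustrative only: the 50 percent trigger mirrors the example in the text, while the workflow data, vendor names, and escalation message are hypothetical.

```python
# Illustrative escalation-threshold sketch. The 0.5 trigger follows the
# "more than half" example in the text; all other values are hypothetical.

ESCALATION_SHARE = 0.5  # share of a workflow on one provider that triggers review

def concentration_review(workflow: str, provider_shares: dict[str, float]) -> list[str]:
    """Return escalation messages for providers above the pre-set threshold."""
    alerts = []
    for provider, share in provider_shares.items():
        if share > ESCALATION_SHARE:
            alerts.append(
                f"{workflow}: {provider} carries {share:.0%} of the workflow; "
                "escalate for executive or board review."
            )
    return alerts

alerts = concentration_review(
    "Transaction monitoring",
    {"VendorA": 0.7, "VendorB": 0.3},
)
for a in alerts:
    print(a)
```

The value is not the code itself but the governance decision it encodes: the threshold is set in advance, applied consistently, and produces a documented escalation trail.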

Where NIST AI RMF and ISO/IEC 42001 Help

This is where the NIST AI Risk Management Framework and ISO/IEC 42001 become highly valuable for compliance officers. They help translate high-level concern into disciplined governance.

The NIST AI RMF emphasizes the Govern, Map, Measure, and Manage phases. That structure is especially useful here. Governance means assigning responsibility and setting risk appetite. Mapping means identifying where concentration exists and which business processes depend on it. Measuring means assessing the degree of dependency and resilience. Managing means putting in place mitigation, monitoring, and response mechanisms.

ISO/IEC 42001 adds an equally important management system discipline. It pushes organizations to define roles, document controls, monitor performance, conduct periodic reviews, and drive continual improvement. In other words, it helps turn AI governance into an operating system rather than a one-time project.

For compliance professionals, the lesson is clear. Use ECCP to define what effectiveness and accountability should look like. Use NIST AI RMF to structure the risk analysis. Use ISO 42001 to embed the resulting controls into a repeatable management process.

Proof of Governance in the AI Era

The deeper point is that AI concentration risk is no longer a hidden architecture issue. It is a test of whether the compliance function can help the enterprise identify dependencies before they fail. Under the ECCP, regulators are not simply asking whether a company had good intentions. They are asking whether it identified real risks, assigned responsibility, implemented controls, tested those controls, and learned from experience.

That is why AI concentration risk matters so much. It reveals whether the company understands how fragile its AI-enabled processes may be. It reveals whether third-party governance is keeping up with technological dependence. And it reveals whether compliance is engaged early enough to shape resilience rather than merely respond to disruption.

For the modern CCO, this is not a niche issue. It is a live example of how compliance adds value by helping the company operationalize governance before a crisis arrives.

Conclusion

In the end, AI concentration risk is not about servers, chips, or software contracts. It is about whether a company understands its vulnerabilities and has the discipline to govern them before they become failures. That is the heart of modern compliance. The issue is not whether disruption will come. The issue is whether your organization has done the hard work in advance to map dependency, build resilience, assign accountability, and prove that its controls can hold under pressure.

That is why this issue belongs squarely on the CCO’s agenda. Under the ECCP, a company must do more than claim it takes risk seriously. It must show its work. It must show that it identified the risk, assessed it, built controls around it, tested those controls, and updated them as the business evolved. The NIST AI Risk Management Framework and ISO/IEC 42001 help provide the structure. But the real challenge, and the real opportunity, belongs to compliance.

Because in the AI era, concentration risk is not merely a technical fragility. It is a governance signal. And the companies that can identify it, manage it, and document it will not only be more resilient. They will be able to demonstrate something even more valuable: that their compliance program is working exactly as it should.

Compliance Tip of the Day

Compliance Tip of the Day – Compliance Terms and Conditions

Welcome to “Compliance Tip of the Day,” the podcast that brings you daily insights and practical advice for navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, we aim to provide you with bite-sized, actionable tips to help you stay on top of your compliance game. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

This week, we are reviewing the third-party risk management process. We conclude our look at the compliance terms and conditions you should include in your third-party contracts.

For more on this topic, check out The Compliance Handbook: A Guide to Operationalizing your Compliance Program, 6th edition, which LexisNexis recently released. It is available here.

Compliance Tip of the Day

Compliance Tip of the Day – Top 4 Compliance Internal Controls

Welcome to “Compliance Tip of the Day,” the podcast that brings you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, our goal is to provide you with concise, actionable tips to help you stay ahead in your compliance efforts. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we outline the top four internal compliance controls. They are:

i. Delegation of Authority (DOA)

ii. Vendor Master File

iii. Contracts with third parties

iv. Distribution of Funds and Movements of Currency

For more information on this topic, refer to The Compliance Handbook: A Guide to Operationalizing Your Compliance Program, 6th edition, recently released by LexisNexis. It is available here.

The ESG Report

Legal Contracts for ESG with Sarah Dadush and David Snyder

Tom Fox welcomes Sarah Dadush and David Snyder to this episode of the ESG Report. They are both law professors with backgrounds in human rights. In this conversation, they join Tom Fox to talk about the role of contracting in ESG.

Robust Supplier Codes of Conduct

Tom asks what steps are being taken to build more robust contract clauses. David explains that the process is still in its early stages. Business lawyers have only recently adopted policies against forced labor and child labor. Lawyers are advising their clients to sign on to these policies, but that is only a first step. Getting the policies implemented is the true challenge. “The policies sit there in the corporate minutes, and unless they’re in the contracts, they’re not going to be implemented,” David says. “To get them implemented, to get them operationalized, they need to be in the contracts.”


Human Rights, Model Clauses & ESG

“Part of the history of ESG is focusing on equipping consumers to make choices that are more and more aligned with their values,” Sarah tells Tom. That focus has expanded beyond consumers to include investors, bringing more money and leverage to bear on corporate behavior. The S in ESG comes into play with model clauses because it addresses human rights and employee rights. “Our focus within the model contract clauses is on worker protection,” Sarah remarks. “We tend to think often of things like child labor, trafficked labor, forced labor in various shades. What we are including or addressing specifically in the model contract clauses is worker conditions.”


Model Clauses & Regulatory Obligations 

Tom asks if model clauses can help companies meet their regulatory requirements. With model contract clauses in place, human rights due diligence is going to be more effective, David and Sarah agree. “They show the regulators that you are serious about doing something about this,” David remarks. However, model contracts need to be put into practice. If they are signed but not acted upon, all you have is paper. “Once you’ve agreed to this human rights due diligence or a due diligence regime, and then we also have clauses about sharing information and generating documentation, then you are going to be able to document what you have done,” David adds.

Sharing information will result in communication and documentation of what’s going on at the company. 


Resources

Sarah Dadush | LinkedIn 

David Snyder | LinkedIn


The ESG Compliance Podcast

Contracting for ESG with Sarah Dadush and David Snyder


Law professors David Snyder and Sarah Dadush join the podcast to discuss the role of contracting in ESG, why a conventional approach to writing contracts may not be the best choice, which issues contracts can and cannot fix, and why accountability should rest with both the buyer and the supplier.
Key points discussed in the episode:
✔️ Supply chains are not doing enough for ESG compliance. David Snyder and Sarah Dadush aim to combat this with more effective measures.
✔️ Policies remain unimplemented if they aren’t in the contract. Having a supplier code of conduct written with the assistance of a business lawyer isn’t enough to create change.
✔️ Working at an oil refinery showed David Snyder the true culprit: organized crime. He wanted human and environmental efforts to be treated the same way as product manufacturing.
✔️ ESG can impact both consumer and investor decisions. The California Supply Chains Transparency Act pushed for full disclosure directed at the customers.
✔️ Focusing only on forced labor leaves out other problems.
✔️ Traditional approaches to contracting for ESG don’t work, Sarah Dadush says. They not only aggravate human rights risks but also increase the company’s exposure to legal violations.
✔️ David Snyder emphasizes the importance of risk as part of supply chain management and compliance obligations. Lawyers should also play their part in handling clients properly instead of resorting to risk shifting.
✔️ Contracts don’t fix all supply chain issues. It all boils down to supply chain resilience. A weaker foundation puts companies in greater danger, especially in times of difficulty like the COVID-19 pandemic.
✔️ Buyers should be responsible when exiting contracts. Contracts were misused at the height of the pandemic, and consumers are now urging businesses to be accountable for their shortcomings.
David Snyder was appointed professor of law at the American University Washington College of Law in the fall of 2007 and was appointed director of the Business Law Program in 2008. During 2021-2022, he also held a Fernand Braudel Senior Fellowship at the European University Institute (Florence). He graduated summa cum laude from Tulane University Law School in 1991, and he has been a professor of law at Tulane, Indiana (Bloomington), and Cleveland-Marshall College of Law. He has been a regular visiting professor at the law school of the University of Paris II (Panthéon-Assas) since 2012, and has also been a visiting professor at the University of Paris 10 (Nanterre La Défense), Boston University, and the College of William and Mary. In addition, he has taught summer courses at the University of Mainz (Germany). After graduating from law school, Professor Snyder served as a law clerk to the Honorable John M. Duhé Jr. of the United States Court of Appeals for the Fifth Circuit, and subsequently joined the D.C. firm of Hogan & Hartson (now Hogan Lovells). In 2014 Professor Snyder was awarded a MacCormick Fellowship during which he delivered the annual Wilson Memorial Lecture at the University of Edinburgh.
Sarah Dadush’s research lies at the intersection of business and human rights. Her scholarship explores various innovative legal mechanisms for improving the social and environmental performance of multinational corporations. She directs the Law School’s newly-established Business & Human Rights Law Program and co-leads an ABA Business Law Section Working Group that has developed a comprehensive toolkit for upgrading international supply contracts to better protect workers’ human rights.
—————————————————————————-
Do you have a podcast (or do you want to)? Join the only network dedicated to compliance, risk management, and business ethics, the Compliance Podcast Network. For more information, contact Tom Fox at tfox@tfoxlaw.com.