Trust Is Not a Control: The Drop-In AI Audit

There is a hard truth at the center of modern AI governance that every compliance professional needs to confront: trust is not a control. For too long, organizations have approached AI oversight with a familiar but outdated mindset. They collect a vendor certification. They review a policy statement. They ask whether a third party is “aligned” with a recognized framework. Then they move on, assuming the governance box has been checked. In today’s enforcement and risk environment, that approach is no longer good enough.

The Department of Justice has repeatedly made this point in its Evaluation of Corporate Compliance Programs. The DOJ does not ask whether a company has a policy on paper. It asks whether the program is well designed, whether it is applied earnestly and in good faith, and, most importantly, whether it works in practice. That final phrase matters. Works in practice. It is the dividing line between performative governance and effective governance.

That is why every compliance program now needs a drop-in AI audit. It is not simply another diligence exercise. It is a mechanism for proving that governance is real. It is a practical third-party risk tool. And it is one of the clearest ways to operationalize the ECCP in the age of artificial intelligence.

The Problem: Third-Party AI Risk Is Moving Faster Than Oversight

Most companies do not build every AI capability internally. They rely on vendors, service providers, cloud platforms, embedded applications, analytics partners, and other third parties whose tools increasingly shape business processes and compliance outcomes. In many organizations, these third parties now influence investigations, due diligence, monitoring, onboarding, reporting, customer interactions, and internal decision-making. That creates a new class of third-party risk.

The problem is not only whether a vendor has responsible AI language in its contract or whether it can point to a certification. The problem is whether your organization can verify that the relevant controls are functioning as represented in the real-world use case affecting your business. That is where too many compliance programs still fall short.

Under the ECCP, the DOJ asks whether a company’s risk assessment is updated and informed by lessons learned. It asks whether the company has a process for managing risks presented by third parties. It asks whether controls have been tested, whether data is available to compliance personnel, and whether the company can demonstrate continuous improvement. These are not abstract questions. They go directly to how you oversee AI-enabled third parties. If your third-party AI governance begins and ends with a questionnaire and a PDF certification, you do not have evidence of governance. You have evidence of intake.

What a Drop-In Audit Really Does

A drop-in AI audit changes the question from “What does the third party say?” to “What can the third party prove?” That is a profound shift.

The value of the drop-in audit is that it brings compliance discipline directly into third-party AI oversight. Instead of accepting broad claims about safety, control, and alignment, you examine operational evidence. Instead of relying solely on design statements, you test for performance in practice. Instead of treating governance as a one-time approval event, you treat it as a repeatable audit process. In that sense, the drop-in audit becomes proof of governance.

It also becomes a far more mature third-party risk tool. You are no longer merely assessing whether a vendor appears sophisticated. You are assessing whether a third party can withstand scrutiny on the questions that matter most: scope, controls, traceability, escalation, and evidence.

And from an ECCP perspective, that is precisely the point. The DOJ has emphasized that compliance programs must move beyond paper design into operational reality. A drop-in audit is one of the few mechanisms that let you do that in a disciplined, documentable way.

From Vendor Oversight to Third-Party Governance

This discipline should not be limited to classic vendors. The better view is to expand the concept across all third parties that provide, influence, host, or materially shape AI-enabled services. That includes software providers, outsourced service partners, embedded AI functionality in enterprise tools, cloud-based analytics environments, compliance technology vendors, and any external party whose systems affect business-critical decisions or regulated processes.

Risk does not care about the label on the contract. If the third party’s AI affects your organization’s screening, monitoring, investigations, decision support, or disclosures, the compliance risk is real. Your governance process must be equally real. This is why “trust but verify” is no longer just a slogan. It is a design principle for third-party oversight of AI.

The Core Elements of the Drop-In Audit

A strong drop-in audit has three features: sampling, contradiction testing, and escalation.

1. Sampling: Evidence of Operation, Not Merely Design

Sampling is where governance becomes tangible. A company requests specific artifacts tied to actual use cases and actual control operations. This may include scope documents, Statements of Applicability, system documentation, training data summaries, access controls, incident records, runtime logs, or evidence of human review. The point is simple: operational evidence is what matters.

This is where a compliance function moves from hearing about controls to seeing them in action. It is also where internal audit can add real value by testing whether the evidence supports the stated control environment.
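To make the sampling step concrete, here is a minimal Python sketch of an artifact checklist for one sampled use case. The artifact names mirror the examples in the text (scope document, Statement of Applicability, incident records, runtime logs, evidence of human review); they are illustrative assumptions, not a standard taxonomy, and a real program would tie each artifact to a specific control and use case.

```python
# Hypothetical evidence checklist for one sampled third-party AI use case.
# The artifact names are illustrative, not a standard taxonomy.
REQUIRED_ARTIFACTS = {
    "scope_document",
    "statement_of_applicability",
    "incident_records",
    "runtime_logs",
    "human_review_evidence",
}

def missing_artifacts(received: set[str]) -> set[str]:
    """Return the requested artifacts the third party has not produced."""
    return REQUIRED_ARTIFACTS - received

# Example: the vendor supplied design documentation but not the
# operational items that distinguish design from practice.
gaps = missing_artifacts({"scope_document",
                          "statement_of_applicability",
                          "incident_records"})
print(sorted(gaps))  # ['human_review_evidence', 'runtime_logs']
```

The point of even a trivial checklist like this is that the gap list itself becomes an auditable record: what was requested, what was produced, and what was not.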

2. Contradiction Testing: Where Real Risk Emerges

This is one of the most important and underused concepts in third-party AI oversight. Inconsistencies between claims and reality are where governance failures emerge. If a third party says its certification covers a given service, does the scope document confirm it? If it claims strong incident response, does the record back it up? If it represents strong human oversight, do the runtime traces show meaningful intervention or only theoretical review points?

Contradiction testing is powerful because it goes to credibility. It tests whether the governance narrative matches the operating reality. Under the ECCP, that is exactly the kind of inquiry prosecutors and regulators will care about. It speaks to effectiveness, honesty, and control discipline.
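The questions above can be sketched as a simple claims-versus-evidence comparison. This is a hypothetical illustration, not a real assessment schema: the field names are assumptions, and in practice each evidence entry would be backed by the sampled artifacts described earlier.

```python
# Hypothetical contradiction test: compare what the third party claims
# against what the sampled evidence actually shows. Field names are
# illustrative, not a standard schema.
claims = {
    "certification_covers_service": True,
    "incident_response_tested": True,
    "human_oversight_in_runtime": True,
}

evidence = {
    "certification_covers_service": True,   # scope document confirms it
    "incident_response_tested": False,      # no test records produced
    "human_oversight_in_runtime": False,    # traces show no intervention
}

# A contradiction is any claim the evidence fails to support.
contradictions = [topic for topic, claimed in claims.items()
                  if claimed and not evidence.get(topic, False)]
print(contradictions)
```

Each flagged topic is precisely where the governance narrative and the operating reality diverge, and therefore where audit attention should concentrate.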

3. Escalation: Governance in Action

Governance without consequences is not governance. A drop-in audit must include clear escalation triggers. Missing evidence, mismatched certification scope, unexplained gaps, unresolved incidents, or inconsistent remediation should not be noted in isolation. They should trigger action.

That action may include enhanced diligence, contractual remediation, independent validation, temporary use restrictions, or deeper audit review. The important point is that the program responds. This is where the drop-in audit operationalizes the ECCP. It demonstrates that the company not only identifies risk but also acts on it.
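One way to make "governance with consequences" tangible is to write the triggers down as explicit rules. The sketch below is a hypothetical mapping from the trigger and action types named in the text; the names are illustrative, and a real program would route these through case management rather than a lookup table.

```python
# Hypothetical escalation rules: each audit finding type maps to a
# defined response. Trigger and action names are illustrative only.
ESCALATION_RULES = {
    "missing_evidence": "enhanced_diligence",
    "scope_mismatch": "independent_validation",
    "unresolved_incident": "temporary_use_restriction",
    "inconsistent_remediation": "deeper_audit_review",
}

def escalate(findings: list[str]) -> list[str]:
    """Map findings to actions; unrecognized findings still surface
    for manual review rather than being silently dropped."""
    return [ESCALATION_RULES.get(f, "manual_review") for f in findings]

print(escalate(["scope_mismatch", "missing_evidence"]))
```

The design choice worth noting is the default: a finding that matches no rule escalates to manual review instead of disappearing, which is the programmatic version of "noted findings must trigger action."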

How the Drop-In Audit Maps to the ECCP

The drop-in audit aligns tightly with the DOJ’s framework for an effective compliance program.

  1. Risk assessment is addressed because the audit focuses attention on where AI-enabled third parties create actual operational and control exposure.
  2. Policies and procedures are tested because the company does not merely accept them at face value. It assesses whether the stated controls are supported by evidence.
  3. Third-party management is strengthened by making oversight continuous, risk-based, and verifiable.
  4. Testing and continuous improvement are built into the audit process, which identifies gaps, contradictions, and corrective actions.
  5. Investigation and remediation principles are reinforced by documenting, escalating, and using findings to improve the control environment.

Most importantly, the audit answers the ECCP’s central practical question: Does the program work in practice?

How the Drop-In Audit Maps to NIST AI RMF

The NIST AI Risk Management Framework provides a highly useful structure for the drop-in audit, especially through its Govern, Map, Measure, and Manage functions.

  1. Govern is reflected in defined ownership, accountability, and escalation when issues are identified.
  2. Map is reflected in understanding the third party’s actual AI use case, scope, dependencies, and business impact.
  3. Measure is reflected in the use of evidence, runtime observations, contradiction testing, and performance assessment.
  4. Manage is reflected in remediation, ongoing oversight, and updates to controls based on audit findings.

In this way, the drop-in audit becomes a practical tool for taking the NIST AI RMF from concept to execution.

How the Drop-In Audit Maps to ISO/IEC 42001

ISO/IEC 42001 adds the management-system discipline that compliance programs need. Its value lies in documented scope, role clarity, control applicability, monitoring, corrective action, and continual improvement. A drop-in audit fits naturally into that structure because it tests whether those elements are visible in operation, not merely stated in documentation.

The Statement of Applicability becomes meaningful when the company verifies that the controls identified there actually correspond to the deployed service. Monitoring becomes meaningful when evidence is examined. Corrective action becomes meaningful when gaps trigger follow-up. Continual improvement becomes meaningful when findings are fed back into governance. That is why the documentation you generate should serve your board, regulators, and internal stakeholders without additional work. Producing evidence that travels is one of the most strategic benefits of this approach.

Why Every Compliance Program Needs This Now

The strategic payoff is straightforward. Strong AI governance is not a drag on innovation. It is what allows innovation to scale with trust. A drop-in audit gives compliance and internal audit a mechanism to test what matters, document their findings, and create evidence that withstands scrutiny. It moves governance from assertion to proof. It transforms third-party diligence into a repeatable, auditable process. It helps ensure that when regulators, boards, or business leaders ask how the company knows its third-party AI governance is working, there is a real answer.

Because, in the end, evidence of governance matters. Not narratives. Not slide decks. Evidence. President Reagan was right in the 1980s, and he is still right today: “Trust but verify.”