Trust and Verify: How Compliance Can Harness AI Agents Safely

Ed. Note: This week, we present a week-long series on the use of GenAI in a best practices compliance program. For each blog post in the series, I have created a one-page checklist that you can use in presentations or for easier reference. Email my EA Jaja at jaja@compliancepodcastnetwork.net for a complimentary copy.

When we think of “trust” in compliance, our minds usually go to whistleblowers, employees, or third parties. But increasingly, the question of trust must extend to a new category of actors: AI agents.

As Blair Levin and Larry Downes explain in their provocative Harvard Business Review piece, “Can AI Agents Be Trusted?”, AI agents are not just smarter chatbots. They are software systems that can collect data, make decisions, and even act autonomously based on rules and priorities. For compliance professionals, this changes the game. If AI agents can act on our behalf, can they also be trusted to uphold compliance principles?

The answer is yes, but only if we design and monitor them with the same rigor that we apply to employees, third parties, and business partners. Today, we look at five key takeaways from their article to guide compliance professionals in building AI agents into trustworthy components of their programs.

1. Trust Requires Oversight, Just as with Human Agents

The article makes a simple but powerful analogy: think of an AI agent the way you would think of an employee or contractor. Before delegating sensitive responsibilities, you conduct background checks, put controls in place, and possibly even require bonding. The same must hold for AI.

For compliance, this means creating oversight structures before deploying agents into live workflows. If your compliance AI assistant can monitor transactions for red flags, you must ensure that a human compliance officer reviews its outputs. If it can escalate potential whistleblower complaints, you must validate that escalation logic against regulatory requirements.
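
What might such an oversight gate look like in practice? Below is a minimal, hypothetical sketch in Python. Every name in it (Flag, ReviewQueue, and so on) is illustrative rather than any real product's API; the structural point is that the agent can only recommend, while a human compliance officer must approve or reject before anything happens.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class Flag:
    """A red flag raised by the AI agent, awaiting human review."""
    transaction_id: str
    reason: str
    agent_confidence: float
    decision: Decision = Decision.PENDING
    reviewer: str = ""


class ReviewQueue:
    """Human-in-the-loop gate: the agent may only enqueue, never act."""

    def __init__(self):
        self._flags: list[Flag] = []

    def enqueue(self, flag: Flag) -> None:
        # The agent's output stops here until a compliance officer acts.
        self._flags.append(flag)

    def review(self, transaction_id: str, reviewer: str, approve: bool) -> Flag:
        # Only this human step can move a flag out of PENDING.
        for flag in self._flags:
            if flag.transaction_id == transaction_id:
                flag.reviewer = reviewer
                flag.decision = Decision.APPROVED if approve else Decision.REJECTED
                return flag
        raise KeyError(f"No pending flag for {transaction_id}")


queue = ReviewQueue()
queue.enqueue(Flag("TXN-1042", "payment routed through high-risk jurisdiction", 0.91))
queue.review("TXN-1042", reviewer="j.smith", approve=True)
```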

AI oversight also means testing for vulnerabilities. As Levin and Downes note, AI agents are susceptible to hacking, manipulation, and even misinformation. Compliance should require penetration testing of any agent integrated into company systems, just as IT would test network defenses.

Trust is never blind in compliance. It is built on verification, monitoring, and accountability. AI agents can and should be trusted, but only when they operate within a compliance framework that mirrors the controls we already use for human agents.

2. Recognize and Manage Bias and Conflicts of Interest

One of the major risks highlighted in the article is bias, whether introduced by marketers, advertisers, or flawed training data. Just as a conflicted employee can steer decisions for personal gain, an AI agent can be subtly manipulated to favor sponsors, advertisers, or even certain viewpoints.

For compliance professionals, this should raise alarms. Imagine an AI agent used for third-party due diligence. If biased data shapes its recommendations, you could end up onboarding a high-risk vendor while rejecting a low-risk one. Worse, if regulators discover that your system relied on biased algorithms, you’ll face serious questions about program effectiveness.

The solution is conflict-of-interest monitoring for AI. Just as employees must disclose outside interests, AI agents should be tested and audited for hidden preferences. Compliance should insist on transparency from vendors about training data sources and sponsorship arrangements. In some cases, contracts with AI providers may need explicit clauses guaranteeing independence from commercial influence.
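
One way such an audit might be operationalized, assuming you can call the agent programmatically, is a matched-pair disparity test: feed the agent vendor profiles that are identical except for an attribute that should be irrelevant (here, a hypothetical sponsor affiliation) and compare approval rates. The sketch below uses a deliberately biased toy model; agent_recommend is a stand-in for whatever interface your vendor actually exposes.

```python
import random


def agent_recommend(vendor: dict) -> bool:
    """Stand-in for the real due-diligence agent; returns True to onboard.
    Replace with a call to your vendor's actual interface."""
    # Deliberately biased toy model for demonstration: sponsor
    # affiliates get a friendlier roll of the dice.
    base_rate = 0.5
    if vendor["sponsor_affiliate"]:
        base_rate += 0.2
    return random.random() < base_rate


def approval_rate(vendors: list[dict]) -> float:
    return sum(agent_recommend(v) for v in vendors) / len(vendors)


# Matched pairs: identical risk profiles, differing only in affiliation.
template = {"country": "DE", "sanctions_hits": 0, "years_in_business": 12}
affiliated = [dict(template, sponsor_affiliate=True) for _ in range(1000)]
independent = [dict(template, sponsor_affiliate=False) for _ in range(1000)]

gap = approval_rate(affiliated) - approval_rate(independent)
print(f"Approval-rate gap: {gap:+.1%}")
if abs(gap) > 0.05:  # tolerance threshold set by compliance, not IT
    print("ALERT: possible conflict of interest; escalate for review.")
```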

Compliance has always been about spotting and mitigating conflicts. In the age of AI, that vigilance must extend to our digital agents. Only then can we claim that our programs are fair, impartial, and defensible.

3. Treat AI Agents as Fiduciaries of Compliance

Perhaps the most compelling insight from Levin and Downes is that AI agents should be treated as fiduciaries. Just as lawyers, trustees, and board members owe a heightened duty of care to their clients, AI agents entrusted with compliance responsibilities must be designed and governed under similar standards.

For compliance officers, this concept aligns directly with DOJ expectations. The Evaluation of Corporate Compliance Programs (2024 ECCP) emphasizes accountability, transparency, and independence. By treating AI agents as fiduciaries, compliance leaders can extend these principles to technology.

What does fiduciary duty look like in practice?

  • Obedience: AI must follow company policies and regulatory standards.
  • Loyalty: AI must prioritize the company’s compliance objectives over any hidden commercial interests.
  • Confidentiality: AI must protect sensitive compliance data from leaks or misuse.
  • Accountability: AI actions must be traceable, with clear logs and audit trails (see the sketch below).
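
To make the accountability bullet concrete, here is a minimal, hypothetical sketch in Python of a tamper-evident audit trail: each agent action is logged with a hash that chains to the previous entry, so any after-the-fact edit breaks the chain. A real deployment would sit on an append-only store with proper key and identity management; this only illustrates the traceability principle.

```python
import hashlib
import json
import time


def _entry_hash(entry: dict, prev_hash: str) -> str:
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()


class AuditTrail:
    """Append-only log of agent actions; each entry chains to the last."""

    def __init__(self):
        self.entries: list[dict] = []

    def record(self, agent: str, action: str, detail: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "GENESIS"
        entry = {"ts": time.time(), "agent": agent,
                 "action": action, "detail": detail}
        entry["hash"] = _entry_hash(entry, prev)
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; False means someone altered history."""
        prev = "GENESIS"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if _entry_hash(body, prev) != e["hash"]:
                return False
            prev = e["hash"]
        return True


log = AuditTrail()
log.record("dd-agent-01", "escalate", "vendor V-881 flagged for sanctions hit")
assert log.verify()  # auditors can re-run this check at any time
```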

This fiduciary framing provides compliance professionals with a powerful tool. It not only reassures stakeholders that AI can be trusted, but it also sets a benchmark that regulators can understand and evaluate. In short, fiduciary AI is defensible AI.

4. Build Market and Insurance-Based Safeguards

The article notes that beyond regulation, market mechanisms such as insurance and independent oversight will be critical to ensuring AI trustworthiness. For compliance leaders, this presents both a risk management strategy and an opportunity.

Just as identity theft insurance evolved alongside online banking, AI liability insurance will likely become a standard corporate requirement. Compliance officers should begin engaging with insurers to explore coverage for AI-related risks, such as data leaks, wrongful denials of due diligence clearance, or biased decision-making.

Equally important are third-party oversight tools. The article envisions AI “credit bureaus” that could audit agent behavior, set decision thresholds, or freeze activity when risks escalate. For compliance, such independent monitoring could provide an external layer of assurance that your AI systems are behaving as intended.
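
The "credit bureau" idea is speculative, but the underlying mechanism it describes (decision thresholds with an automatic freeze) is easy to picture. Here is a hypothetical sketch, assuming each agent action carries a numeric risk score; the threshold values are illustrative, and in the article's vision an independent overseer, not the agent's owner, would set and monitor them.

```python
class AgentCircuitBreaker:
    """Freezes an AI agent when its recent actions exceed a risk threshold.
    Thresholds here are illustrative; an independent overseer would
    set and monitor them in practice."""

    def __init__(self, risk_threshold: float = 0.8, max_breaches: int = 3):
        self.risk_threshold = risk_threshold
        self.max_breaches = max_breaches
        self.breaches = 0
        self.frozen = False

    def check(self, action: str, risk_score: float) -> bool:
        """Return True if the agent may proceed with this action."""
        if self.frozen:
            return False
        if risk_score > self.risk_threshold:
            self.breaches += 1
            if self.breaches >= self.max_breaches:
                self.frozen = True  # halt all activity until humans intervene
                print(f"Agent frozen after {self.breaches} high-risk actions.")
            return False
        return True


breaker = AgentCircuitBreaker()
for action, score in [("wire approval", 0.95), ("vendor onboard", 0.9),
                      ("payment release", 0.85), ("routine lookup", 0.1)]:
    allowed = breaker.check(action, score)
    print(f"{action}: {'allowed' if allowed else 'blocked'}")
```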

The takeaway is clear: do not rely solely on internal controls. Pair them with market-based safeguards and external verification. Doing so not only strengthens trust in AI agents but also demonstrates to regulators that your program embraces both proactive and independent oversight.

5. Design for Data Security and Local Control

Finally, Levin and Downes stress the importance of keeping decisions local; that is, ensuring sensitive data stays on company-controlled devices and servers, rather than in external clouds. For compliance professionals, this echoes a familiar principle: control the data, control the risk.

Agentic AI, by definition, processes vast amounts of sensitive information. If compliance agents are reviewing hotline reports, transaction monitoring data, or due diligence files, any data leakage could be catastrophic. That’s why strong encryption, local processing, and secure enclaves are essential.

Compliance officers should demand that AI vendors support:

  • On-device or private cloud processing for sensitive tasks.
  • Encryption of all data in transit and at rest (see the sketch after this list).
  • Independent verification of security claims by external auditors.
  • Full disclosure of sponsorships, promotions, and paid influence.
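
As an illustration of the encryption bullet above, here is a minimal sketch using Python's widely used cryptography library (Fernet, an authenticated symmetric scheme). It shows a hotline report being encrypted before it ever touches disk; key management, TLS for data in transit, and secure enclaves are separate concerns this sketch does not cover.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# In production the key lives in a KMS or HSM, never in source code.
key = Fernet.generate_key()
fernet = Fernet(key)

report = b"Hotline report #4471: possible kickback in APAC procurement."

# Encrypt before the data is persisted anywhere ("at rest").
token = fernet.encrypt(report)
with open("report_4471.enc", "wb") as f:
    f.write(token)

# Only a holder of the key can recover the plaintext.
with open("report_4471.enc", "rb") as f:
    assert fernet.decrypt(f.read()) == report
```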

By designing AI agents with local control and transparency, compliance teams can build systems that are both effective and trustworthy. Data security is not just an IT concern; it is a compliance imperative.

Trust, But Never Blindly

AI agents hold immense potential for compliance programs. They can streamline monitoring, accelerate due diligence, and support real-time risk management. But as Levin and Downes remind us, they must also be carefully governed to prevent bias, manipulation, and misuse.

For compliance leaders, the path forward is to treat AI like any other agent (or channel your inner Ronald Reagan: trust, but verify). With oversight, fiduciary framing, market safeguards, and strong data controls, AI can become a trusted partner in compliance: one that strengthens, rather than weakens, the ethical fabric of the organization.
