Categories
AI Today in 5

AI Today in 5: April 23, 2026, The AI MAGA Influencer Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, I will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5. All from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

  1. Agentic AI reshaping bank compliance. (FinTech Global)
  2. Compliance-first AI for AML. (FinTech Global)
  3. Monetizing AI and compliance as a service. (CRN)
  4. Using AI to personalize health care. (Forbes)
  5. Top MAGA influencer is an AI creation from India. (NY Post)

Interested in attending Compliance Week 2026? Click here for information and Registration. Listeners to this podcast receive a 20% discount on the event. Use the Registration Code TOMFOX20.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics, and Investigations, on Amazon.com.

Categories
Blog

The 30-Day Shadow-AI Amnesty: Turning Hidden Risk into Governance

There is a hard truth that every Chief Compliance Officer and compliance professional needs to confront right now: artificial intelligence is already inside your organization, whether it arrived through formal approval channels or not.

Employees are testing tools independently. Business teams are adopting AI-enabled workflows without waiting for a governance committee to approve them. Vendors are embedding AI into products and services faster than many companies can update their policies. Somewhere inside that mix, decisions are being influenced by systems that may not be documented, reviewed, or governed in any meaningful way. That is the world of Shadow-AI.

It is not necessarily malicious. In many cases, it is simply the predictable result of innovation outpacing governance. But from a compliance perspective, that does not make it any less risky. Under the Department of Justice’s Evaluation of Corporate Compliance Programs, the question is not whether management intended to allow uncontrolled use of AI. The question is whether the company can identify emerging risks, implement controls that address them, encourage internal reporting, and demonstrate that the program works in practice.

That is why the 30-day Shadow-AI Amnesty matters. Properly designed, it is not an admission of failure. It is proof of governance. It is a practical mechanism for surfacing hidden risk, reinforcing a speak-up culture, and creating the operational baseline needed to govern AI over the long term.

You Cannot Govern What You Cannot See

The first challenge with Shadow-AI is visibility. Too many organizations still assume that AI risk begins with approved enterprise systems. That assumption is already outdated. The real risk universe is broader. It includes employees using public generative AI tools for drafts or analysis. It includes business units creating internal automations that affect workflows. It includes third-party applications with embedded AI functionality that have not been separately assessed. It includes pilots that started small and quietly became part of day-to-day decision-making.

This is exactly the sort of problem the ECCP is built to address. The DOJ asks whether a company’s risk assessment is dynamic and updated in light of lessons learned and changing business realities. Shadow-AI embodies the changing business reality. If your risk assessment fails to account for hidden AI use, your compliance program is lagging behind the business.

A 30-day amnesty closes that gap by creating a controlled mechanism to identify what is already happening. It allows the company to convert unknown risk into known risk and known risk into governable risk. In other words, it turns hidden risk into a governance advantage.

Why Amnesty Works Better Than Enforcement at the Start

One of the smartest features of a Shadow-AI Amnesty is that it begins with disclosure rather than punishment. If you want employees to report unapproved AI use, you need to give them a credible reason to come forward. If the first signal from compliance is that disclosure will trigger blame, discipline, or reputational harm, employees will remain silent. The result will be exactly the opposite of what the compliance function needs. This is where the amnesty becomes a culture-and-speak-up control.

The ECCP places significant emphasis on culture, internal reporting, and non-retaliation. Prosecutors are instructed to evaluate whether employees feel comfortable raising concerns and whether the company responds appropriately when they do. A well-structured amnesty aligns directly with those expectations because it tells employees that transparency is valued, that reporting is encouraged, and that remediation matters more than finger-pointing.

That does not mean there are no consequences for reckless or prohibited conduct. It means the organization recognizes that the first step toward control is visibility. The safe-harbor period exists to gather information, assess risk, and bring informal AI activity into a formal governance structure. That is not a weakness. That is smart compliance design.

Designing the Amnesty for Participation

The success of a Shadow-AI Amnesty depends heavily on its design. If the process is burdensome, legalistic, or overly technical, participation will be limited. The design principle should be simple: lower the barrier to disclosure while collecting enough information to support triage.

A short intake process is essential. Employees should be able to disclose a tool, workflow, or use case quickly. The company needs basic information: what the tool is, who owns it, where it is used, what data it touches, what decisions it may influence, and whether any controls already exist. This is not the stage for a full investigation. It is the stage for building inventory and context.
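As a concrete illustration, the intake fields described above could be captured in a minimal disclosure record. This is a sketch only; the field names and example values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class ShadowAIDisclosure:
    """Minimal intake record for a disclosed AI tool or workflow.

    Field names are illustrative; adapt them to your own program.
    This is the inventory-building stage, not an investigation.
    """
    tool_name: str                  # what the tool is
    owner: str                      # who owns or uses it
    business_area: str              # where it is used
    data_touched: list              # what data it touches
    decisions_influenced: list      # what decisions it may influence
    existing_controls: list = field(default_factory=list)  # controls, if any

# Hypothetical example of a single disclosure
disclosure = ShadowAIDisclosure(
    tool_name="Public GenAI chatbot",
    owner="Marketing analyst",
    business_area="Marketing",
    data_touched=["draft copy", "campaign metrics"],
    decisions_influenced=["customer-facing messaging"],
)
print(disclosure.tool_name)
```

A record this small keeps the barrier to disclosure low while still giving compliance enough context to triage.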

That approach is fully consistent with good governance practice. The NIST AI Risk Management Framework emphasizes understanding context, mapping use cases, and establishing governance for the actual use of AI. ISO/IEC 42001 similarly reflects the principle that effective AI management begins with a defined scope, documented processes, and clear responsibility. You cannot apply either framework if you do not know what systems or uses exist in the first place. The amnesty, then, is not a side exercise. It is the front door to a credible AI governance program.

Triage Is Where Governance Becomes Real

Once disclosures start coming in, the company must shift from intake to triage. This is where design and control become critical. Not every disclosed use of AI presents the same level of risk. Some uses may be low-risk productivity aids. Others may influence hiring, investigations, financial reporting, customer-facing communications, or core operational decisions. The compliance function needs a disciplined way to distinguish between them.

A risk-based triage model should ask a few straightforward questions. Does the AI influence a decision that affects employees, customers, or regulated outcomes? Does it involve sensitive or confidential data? Is there human review, or is the output used automatically? Is the use visible externally? Is it part of a business-critical workflow? What controls exist today?

These are compliance questions. They are also ECCP questions because they go directly to risk assessment, resource allocation, and whether controls are tailored to the realities of the business. This is also where culture and control begin to work together. A company that invites disclosure but fails to triage intelligently will lose credibility. Employees need to see that reporting leads to measured, thoughtful governance, not chaos. The point is not to shut everything down. The point is to classify, prioritize, and respond appropriately.
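The triage questions above can be sketched as a simple scoring function. The weights and tier cutoffs below are illustrative assumptions, not a standard; any real model would be calibrated to the company's own risk appetite.

```python
def triage_tier(affects_regulated_outcomes: bool,
                sensitive_data: bool,
                human_review: bool,
                externally_visible: bool,
                business_critical: bool) -> str:
    """Classify a disclosed AI use case into a risk tier.

    Weights and cutoffs are illustrative only.
    """
    score = 0
    score += 3 if affects_regulated_outcomes else 0  # employees, customers, regulated outcomes
    score += 2 if sensitive_data else 0              # sensitive or confidential data
    score += 2 if not human_review else 0            # output used automatically raises risk
    score += 1 if externally_visible else 0          # visible outside the company
    score += 1 if business_critical else 0           # part of a business-critical workflow
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# A drafting aid with human review and no sensitive data:
print(triage_tier(False, False, True, False, False))  # low
```

The design choice here is deliberate: a handful of yes/no questions, applied consistently, is more credible than an elaborate model nobody uses.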

Culture as a Control

One of the most important themes in the modern compliance conversation is that culture is not soft. Culture is a control. That is especially true with Shadow-AI. In many organizations, the first people to know that a workflow has drifted outside approved channels are the employees using it every day. The first people to spot unreviewed prompts, risky data inputs, or overreliance on AI-generated outputs are often not senior executives or formal governance committees. They are line employees, managers, analysts, and business operators.

If those people do not believe they can report what they see without retaliation or embarrassment, then the organization loses one of its most effective early warning systems. A Shadow-AI Amnesty sends a powerful signal. It says the company would rather know than remain in the dark. It says that governance begins with honesty. It says that disclosure is part of doing the right thing.

Under the ECCP, that matters. A culture that encourages internal reporting and constructive remediation is a hallmark of an effective compliance program. In the AI context, it may be one of the few ways to surface emerging risks before they become control failures, regulatory issues, or public problems.

From Amnesty to Operating Model

The amnesty itself is only the beginning. Its true value lies in what follows. Once the company has a baseline inventory of disclosed AI uses, it should not let that information sit in a spreadsheet and die. The next step is to convert the amnesty into a long-term governance operating model.

That means maintaining a living registry of AI use cases. It means embedding disclosure and review into normal business processes. It means defining approval pathways for higher-risk uses. It means establishing ongoing monitoring to detect performance changes, data drift, and control effectiveness. It means updating policies, training, and communications based on what the company has actually learned from the amnesty.

This is where the governance frameworks become especially useful. NIST AI RMF helps organizations move from mapping and understanding AI uses to governing, measuring, and managing them. ISO/IEC 42001 provides the management-system discipline needed to assign responsibility, document controls, review performance, and drive continual improvement.

In other words, the amnesty is not the solution by itself. It is the catalyst that allows a real operating model to emerge.

Proof of Governance Under the ECCP

Why does this matter so much from an enforcement perspective? Because the amnesty produces evidence. If regulators ask how the company identified AI uses, there is a process. If they ask how risks were assessed, there is a methodology for it. If they ask what was done with high-risk cases, there are records of triage and remediation. If they ask what role culture played, there is a concrete speak-up initiative tied to internal reporting and governance design.

This is exactly what the ECCP is looking for. Not slogans. Not a glossy AI principles deck. Evidence that the company identified a risk, created a mechanism to surface it, encouraged reporting, evaluated what it found, and built controls that match the risk. That is why the 30-day Shadow-AI Amnesty is so important. It transforms governance from assertion into proof.

The Practical Bottom Line

The compliance function does not need to wait for a perfect enterprise AI strategy before acting. In fact, waiting may be the biggest mistake. Shadow-AI is already there. The question is whether your organization is prepared to see it, hear about it, and govern it.

A 30-day amnesty is one of the most practical tools available because it combines two things strong compliance programs need: better visibility and a stronger culture. It surfaces risk while reinforcing speak-up. It creates documentation while improving control design. It gives the company a starting point for long-term governance without pretending the problem can be solved in one month.

In the end, that is what good compliance has always done. It does not deny business reality. It creates the structure that allows the business to move forward with integrity, accountability, and confidence.

Categories
Blog

Trust Is Not a Control: The Drop-In AI Audit

There is a hard truth at the center of modern AI governance that every compliance professional needs to confront: trust is not a control. For too long, organizations have approached AI oversight with a familiar but outdated mindset. They collect a vendor certification. They review a policy statement. They ask whether a third party is “aligned” with a recognized framework. Then they move on, assuming the governance box has been checked. In today’s enforcement and risk environment, that approach is no longer good enough.

The Department of Justice has repeatedly made this point in its Evaluation of Corporate Compliance Programs. The DOJ does not ask whether a company has a policy on paper. It asks whether the program is well designed, whether it is applied earnestly and in good faith, and, most importantly, whether it works in practice. That final phrase matters. Works in practice. It is the dividing line between performative governance and effective governance.

That is why every compliance program now needs a drop-in AI audit. It is not simply another diligence exercise. It is a mechanism for proving that governance is real. It is a practical third-party risk tool. And it is one of the clearest ways to operationalize the ECCP in the age of artificial intelligence.

The Problem: Third-Party AI Risk Is Moving Faster Than Oversight

Most companies do not build every AI capability internally. They rely on vendors, service providers, cloud platforms, embedded applications, analytics partners, and other third parties whose tools increasingly shape business processes and compliance outcomes. In many organizations, these third parties now influence investigations, due diligence, monitoring, onboarding, reporting, customer interactions, and internal decision-making. That creates a new class of third-party risk.

The problem is not only whether a vendor has responsible AI language in its contract or whether it can point to a certification. The problem is whether your organization can verify that the relevant controls are functioning as represented in the real-world use case affecting your business. That is where too many compliance programs still fall short.

Under the ECCP, the DOJ asks whether a company’s risk assessment is updated and informed by lessons learned. It asks whether the company has a process for managing risks presented by third parties. It asks whether controls have been tested, whether data is available to compliance personnel, and whether the company can demonstrate continuous improvement. These are not abstract questions. They go directly to how you oversee AI-enabled third parties. If your third-party AI governance begins and ends with a questionnaire and a PDF certification, you do not have evidence of governance. You have evidence of intake.

What a Drop-In Audit Really Does

A drop-in AI audit changes the question from “What does the third party say?” to “What can the third party prove?” That is a profound shift.

The value of the drop-in audit is that it brings compliance discipline directly into third-party AI oversight. Instead of accepting broad claims about safety, control, and alignment, you examine operational evidence. Instead of relying solely on design statements, you test for performance in practice. Instead of treating governance as a one-time approval event, you treat it as a repeatable audit process. In that sense, the drop-in audit becomes proof of governance.

It also becomes a far more mature third-party risk tool. You are no longer merely assessing whether a vendor appears sophisticated. You are assessing whether a third party can withstand scrutiny on the questions that matter most: scope, controls, traceability, escalation, and evidence.

And from an ECCP perspective, that is precisely the point. The DOJ has emphasized that compliance programs must move beyond paper design into operational reality. A drop-in audit is one of the few mechanisms that let you do that in a disciplined, documentable way.

From Vendor Oversight to Third-Party Governance

This discipline should not be limited only to classic vendors. The better view is to expand the concept across all third parties that provide, influence, host, or materially shape AI-enabled services. That includes software providers, outsourced service partners, embedded AI functionality in enterprise tools, cloud-based analytics environments, compliance technology vendors, and any external party whose systems affect business-critical decisions or regulated processes.

Risk does not care about the label on the contract. If the third party’s AI affects your organization’s screening, monitoring, investigations, decision support, or disclosures, the compliance risk is real. Your governance process must be equally real. This is why “trust but verify” is no longer just a slogan. It is a design principle for third-party oversight of AI.

The Core Elements of the Drop-In Audit

A strong drop-in audit has three features: sampling, contradiction testing, and escalation.

1. Sampling: Evidence of Operation, Not Merely Design

Sampling is where governance becomes tangible. A company requests specific artifacts tied to actual use cases and actual control operations. This may include scope documents, Statements of Applicability, system documentation, training data summaries, access controls, incident records, runtime logs, or evidence of human review. The point is simple: operational evidence is what matters.

This is where a compliance function moves from hearing about controls to seeing them in action. It is also where internal audit can add real value by testing whether the evidence supports the stated control environment.

2. Contradiction Testing: Where Real Risk Emerges

This is one of the most important and underused concepts in third-party AI oversight. Inconsistencies between claims and reality are where governance failures emerge. If a third party says its certification covers a given service, does the scope document confirm it? If it claims strong incident response, does the record back it up? If it represents strong human oversight, do the runtime traces show meaningful intervention or only theoretical review points?

Contradiction testing is powerful because it goes to credibility. It tests whether the governance narrative matches the operating reality. Under the ECCP, that is exactly the kind of inquiry prosecutors and regulators will care about. It speaks to effectiveness, honesty, and control discipline.
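In its simplest form, contradiction testing compares what the third party asserts against what the sampled evidence supports. The sketch below is a minimal illustration under assumed claim names; real evidence gathering is obviously far richer than boolean flags.

```python
def contradiction_check(claims: dict, evidence: dict) -> list:
    """Return the asserted claims that the sampled evidence does not support.

    Keys and values are illustrative; in practice 'evidence' would be
    the conclusion of reviewing scope documents, logs, and records.
    """
    return [claim for claim, asserted in claims.items()
            if asserted and not evidence.get(claim, False)]

# Hypothetical vendor assertions vs. what the audit evidence showed
claims = {"certification_covers_service": True,
          "human_oversight_in_runtime": True}
evidence = {"certification_covers_service": True,
            "human_oversight_in_runtime": False}

print(contradiction_check(claims, evidence))  # ['human_oversight_in_runtime']
```

Each item the function returns is exactly the kind of claim-versus-reality gap the audit should document and escalate.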

3. Escalation: Governance in Action

Governance without consequences is not governance. A drop-in audit must include clear escalation triggers. Missing evidence, mismatched certification scope, unexplained gaps, unresolved incidents, or inconsistent remediation should not be noted in isolation. They should trigger action.

That action may include enhanced diligence, contractual remediation, independent validation, temporary use restrictions, or deeper audit review. The important point is that the program responds. This is where the drop-in audit becomes operationalizing the ECCP. It demonstrates that the company not only identifies risk but also acts on it.
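One way to make those triggers concrete is a simple mapping from audit findings to responses. The finding names and responses below are assumptions for illustration; any real program would tailor both to its own risk framework.

```python
# Illustrative mapping of audit findings to escalation responses.
ESCALATION_RULES = {
    "missing_evidence": "enhanced diligence",
    "certification_scope_mismatch": "independent validation",
    "unexplained_gap": "deeper audit review",
    "unresolved_incident": "temporary use restriction",
    "inconsistent_remediation": "contractual remediation",
}

def escalate(findings: list) -> list:
    """Return the responses triggered by a set of audit findings."""
    return [ESCALATION_RULES[f] for f in findings if f in ESCALATION_RULES]

actions = escalate(["missing_evidence", "unresolved_incident"])
print(actions)  # ['enhanced diligence', 'temporary use restriction']
```

The point of writing the rules down, even this simply, is that escalation becomes a documented program response rather than an ad hoc decision.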

How the Drop-In Audit Maps to the ECCP

The drop-in audit aligns tightly with the DOJ’s framework for an effective compliance program. Risk assessment is addressed because the audit focuses attention on where AI-enabled third parties create actual operational and control exposure. Policies and procedures are tested because the company does not merely accept them at face value. It assesses whether the stated controls are supported by evidence. Third-party management is strengthened by making oversight continuous, risk-based, and verifiable. Testing and continuous improvement are built into the audit process, which identifies gaps, contradictions, and corrective actions. Investigation and remediation principles are reinforced by documenting, escalating, and using findings to improve the control environment.

Most importantly, the audit answers the ECCP’s central practical question: Does the program work in practice?

How the Drop-In Audit Maps to NIST AI RMF

The NIST AI Risk Management Framework provides a highly useful structure for the drop-in audit, especially through its Govern, Map, Measure, and Manage functions.

  1. Govern is reflected in defined ownership, accountability, and escalation when issues are identified.
  2. Map is reflected in understanding the third party’s actual AI use case, scope, dependencies, and business impact.
  3. Measure is reflected in the use of evidence, runtime observations, contradiction testing, and performance assessment.
  4. Manage is reflected in remediation, ongoing oversight, and updates to controls based on audit findings.

In this way, the drop-in audit becomes a practical tool for taking the NIST AI RMF from concept to execution.

How the Drop-In Audit Maps to ISO/IEC 42001

ISO/IEC 42001 adds the management-system discipline that compliance programs need. Its value lies in documented scope, role clarity, control applicability, monitoring, corrective action, and continual improvement. A drop-in audit fits naturally into that structure because it tests whether those elements are visible in operation, not merely stated in documentation.

The Statement of Applicability becomes meaningful when the company verifies that the controls identified there actually correspond to the deployed service. Monitoring becomes meaningful when evidence is examined. Corrective action becomes meaningful when gaps trigger follow-up. Continual improvement becomes meaningful when findings are fed back into governance. That is why the documentation you generate should serve your board, regulators, and internal stakeholders without additional work. Producing evidence that travels is one of the most strategic benefits of this approach.

Why Every Compliance Program Needs This Now

The strategic payoff is straightforward. Strong AI governance is not a drag on innovation. It is what allows innovation to scale with trust. A drop-in audit gives compliance and internal audit a mechanism to test what matters, document their findings, and create evidence that withstands scrutiny. It moves governance from assertion to proof. It transforms third-party diligence into a repeatable, auditable process. It helps ensure that when regulators, boards, or business leaders ask how the company knows its third-party AI governance is working, there is a real answer.

Because, in the end, evidence of governance matters. Not narratives. Not slide decks. Evidence. President Reagan was right in the 1980s, and he is still right today: “Trust but verify.”

Categories
AI Today in 5

AI Today in 5: April 21, 2026, The 7 Questions You Should Ask Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5. All from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. 7 questions to ask about AI and compliance. (The News Tribune)
  2. Compliance can outsource tools to AI but not judgment. (FinTech Global)
  3. Data Authenticity and Accountability for AI. (CCI)
  4. Do AI chatbots make you stupider? (BBC)
  5. ICU nurses get AI help. (Healthcare IT News)

Interested in attending Compliance Week 2026? Click here for information and Registration. Listeners to this podcast receive a 20% discount on the event. Use the Registration Code TOMFOX20

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics, and Investigations, on Amazon.com.

Categories
Blog

AI Disclosures, Controls, and D&O Coverage: Closing the Governance Gap Around Artificial Intelligence

A new governance gap is emerging around artificial intelligence, and it is one that Chief Compliance Officers, compliance professionals, and boards need to confront now. It sits at the intersection of three areas that too many companies still treat separately: public disclosures, internal controls, and insurance coverage. That siloed approach is no longer sustainable.

As companies speak more confidently about their AI strategies, insurers are becoming more cautious about the risks those strategies create. That tension matters. It signals that the market is beginning to see something many organizations have not yet fully addressed: when a company’s statements about AI outpace its actual governance, the exposure is not merely operational or reputational. It can become a disclosure issue, a board oversight issue, and ultimately a proof-of-governance issue under the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP).

For the compliance professional, this is not simply an insurance story. It is a compliance integration story. The question is whether the company can align its statements about AI, the controls it has in place, and the protections it believes it has in place if something goes wrong.

The New Governance Gap

Many organizations are eager to describe AI as a source of innovation, efficiency, better decision-making, or competitive advantage. Those messages increasingly appear in earnings calls, investor decks, public filings, marketing materials, and board presentations. Yet the underlying governance structures often remain immature. That disconnect is the governance gap.

It appears when management speaks broadly about responsible AI but has not built a complete inventory of AI use cases. It appears when companies discuss oversight but cannot show testing, documentation, or monitoring. It appears when boards assume that insurance will respond to AI-related claims without understanding how new policy language may narrow coverage.

This is where D&O coverage becomes so important. It is not the center of the story, but it is a revealing signal. If insurers are revisiting policy language and introducing exclusions or limitations tied to AI-related conduct, it suggests the market sees governance risk. In other words, the insurance market is sending a message: AI-related claims are no longer hypothetical, and companies that cannot demonstrate disciplined oversight may find that risk transfer is less available than they assumed.

Why the ECCP Should Be the Primary Lens

The DOJ’s ECCP remains the most useful framework for analyzing this issue because it asks exactly the right questions.

Has the company conducted a risk assessment that accounts for emerging risks? Are policies and procedures aligned with actual business practice? Are controls working in practice? Is there proper oversight, accountability, and continuous improvement? Can the company demonstrate all of this with evidence? Those are compliance questions, but they are also the right AI governance questions.

If a company makes public statements about AI capability, oversight, or reliability, the ECCP lens requires more than aspiration. It requires substantiation. Can the company show who owns the AI risk? Can it demonstrate how models or systems are tested? Can it show escalation procedures when problems arise? Can it document how AI-related decisions are monitored, reviewed, and improved over time?

If the answer is no, then the issue is not simply that the company may have overpromised. The issue is that its compliance program may not be adequately addressing a material emerging risk. That is why CCOs should view AI as a cross-functional challenge requiring integration across legal, compliance, technology, risk, audit, investor relations, and the board.

AI Disclosure Must Be Evidence-Based

One of the most practical steps a compliance function can take is to push for an evidence-based disclosure process around AI. This means that public statements about AI should not be driven solely by enthusiasm, market pressure, or executive optimism. They should be grounded in underlying documentation. If the company says it uses AI responsibly, where is the governance framework? If it claims AI improves decision-making, what testing supports that assertion? If it says it has safeguards, where are the control descriptions, monitoring results, and escalation records?

This is not about suppressing innovation. It is about ensuring that disclosure discipline keeps pace with technological ambition. For boards, this means asking harder questions before approving or relying on public AI narratives. For compliance officers, it means helping management build the evidentiary record that turns broad statements into defensible representations.

Controls Must Catch Up to Strategy

This is where the “how-to” work begins. Compliance professionals should begin by creating a structured inventory of AI use cases across the enterprise. That inventory should identify where AI is being used, what decisions it informs, what data it relies on, who owns it, and what risks it entails.

Once that inventory exists, risk tiering should follow. Not every AI use case carries the same compliance significance. A low-risk productivity tool does not need the same oversight as a system that affects investigations, third-party due diligence, customer interactions, financial reporting, or core operational decisions.

From there, the company can design controls proportionate to risk. High-impact uses of AI should have documented governance, human review where appropriate, testing protocols, escalation triggers, and monitoring requirements. The compliance team should be able to answer a simple question: where are the controls, and how do we know they work? That is the heart of the ECCP inquiry.
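The inventory-tier-controls sequence described above can be sketched as a simple tier-to-controls mapping. The control names below are illustrative assumptions; the actual control set would come from the company's own governance design.

```python
# Illustrative mapping from risk tier to minimum required controls.
CONTROLS_BY_TIER = {
    "high": ["documented governance", "human review",
             "testing protocol", "escalation triggers", "monitoring"],
    "medium": ["documented owner", "periodic review", "usage logging"],
    "low": ["registered in inventory"],
}

def required_controls(tier: str) -> list:
    """Return the minimum controls for a given risk tier.

    Unknown tiers default to inventory registration, so nothing
    disclosed ever falls entirely outside the program.
    """
    return CONTROLS_BY_TIER.get(tier, ["registered in inventory"])

print(required_controls("high"))
```

A mapping like this is also how the compliance team answers the ECCP-style question directly: for any use case, it can show which controls apply and why.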

Where NIST AI RMF and ISO/IEC 42001 Fit

This is also where the NIST AI Risk Management Framework and ISO/IEC 42001 become highly practical tools. NIST AI RMF helps organizations govern, map, measure, and manage AI risks. For compliance professionals, this provides a disciplined structure for identifying AI use cases, understanding impacts, assessing reliability, and managing response. It is especially useful in linking abstract AI risk to operational decision-making.

ISO/IEC 42001 brings management system discipline to AI governance. It focuses on defined roles, documented processes, control implementation, monitoring, internal review, and continual improvement. That makes it an excellent bridge between policy and execution. Together, these frameworks help operationalize the ECCP. The ECCP tells you what an effective compliance program should be able to demonstrate. NIST AI RMF helps structure the risk analysis. ISO 42001 helps embed those requirements into a repeatable governance process.

For CCOs, the practical lesson is clear: use these frameworks not as academic overlays, but as working tools to build ownership, documentation, testing, and accountability.

Insurance Is a Governance Input

Companies also need to stop treating insurance as an afterthought. D&O coverage should be considered a governance input, not merely a downstream purchase. If policy language is narrowing around AI-related claims, boards and compliance leaders need to understand what that means. What scenarios might raise disclosure-related allegations? Where is ambiguity in coverage? What assumptions has management made about protection that may no longer hold?

Compliance does not need to become an insurance specialist. But it does need to ensure that disclosure, governance, and risk transfer are aligned. If the company is making strong public claims about AI while carrying unexamined governance weaknesses and uncertain coverage, that is precisely the kind of mismatch that can trigger a crisis.

Closing the Gap Before It Becomes a Failure

The larger lesson is straightforward. AI governance is not simply about technology controls. It is about integration. It is about ensuring that what the company says, what it does, and what it can prove all line up. That is why the governance gap matters so much. It is the space where strategy outruns structure, where disclosure outruns evidence, and where confidence outruns control. For boards and compliance professionals, the task is to close that gap before it becomes a failure.

The companies that do this well will not necessarily be the ones moving the fastest. They will be the ones building documented, tested, monitored, and governed AI programs that stand up to regulatory scrutiny, investor pressure, and real-world disruption. That is not bureaucracy. That is the price of sustainable innovation.

Categories
Blog

AI as a Force Multiplier for Compliance: From Efficiency Tool to Program Effectiveness

There is a temptation in every wave of new technology to focus first on speed. How much faster can we do the work? How many hours can we save? How many tasks can we automate? Yet for the compliance professional, those are not the right first questions. The right first question is always: does this make our compliance program more effective?

That is why the recent Moody’s discussion of GenAI is so interesting when viewed through a compliance lens. The article describes AI not simply as a productivity engine, but as a tool that changes how professionals interact with information, generate insights, and support decision-making. It emphasizes workflow transformation, role-based support, auditability, data quality, and the need for governance and human oversight. For compliance officers, that is the real story. AI can indeed make work faster. But its true promise is that it can make compliance more targeted, more consistent, more responsive, and more operationally embedded.

The Department of Justice has been telling us for years, through the Evaluation of Corporate Compliance Programs (ECCP), that effectiveness is the standard. The questions are not whether a company has a policy on the shelf or a training module in the system. The questions are whether the company has access to data, whether it uses that data, whether controls are tested, whether issues are triaged appropriately, whether lessons learned are fed back into the program, and whether the program evolves as risks change. AI, properly governed, can help answer yes to each of those questions.

AI and the Compliance Program of the Future

The Moody’s paper notes that GenAI is moving from passive, knowledge-based support toward more action-oriented solutions that can assist with complex, multi-step workflows. That observation should resonate with every Chief Compliance Officer. The future is not an AI toy that drafts emails. The future is an AI-enabled compliance architecture that helps the function move from reactive to proactive.

Consider third-party due diligence. Most compliance teams still struggle with volume, fragmentation, and prioritization. Information sits in onboarding questionnaires, sanctions screens, beneficial ownership reports, payment histories, audit findings, hotline allegations, and open-source media. The challenge is not merely gathering that information. The challenge is turning it into risk-based action. AI can help synthesize disparate information sources, surface red flags, identify missing documentation, and create a more coherent risk picture. Under the ECCP, that supports a more thoughtful, risk-based approach to third-party management.

Take investigations triage. Every mature speak-up program faces the same problem: how to distinguish between the urgent, the important, and the routine. AI can help sort allegations by subject matter, geography, potential legal exposure, prior related issues, implicated business units, and urgency indicators. That does not mean AI decides guilt, materiality, or discipline. It means AI helps compliance direct scarce investigative resources where they matter most. In ECCP terms, it strengthens case handling, responsiveness, consistency, and root-cause readiness.

Now think about risk assessment. The best compliance risk assessments are dynamic, not annual rituals. AI can assist in identifying patterns across reports, controls failures, investigation outcomes, gifts and entertainment data, third-party activity, and regulatory developments. It can help compliance professionals see concentrations of risk earlier and with greater context. In a program built around continuous improvement, that is a force multiplier.

Effectiveness, Not Mere Automation

One of the most important lessons from the Moody’s article is that the value of AI lies in supporting higher-value analytical work, not just reducing routine effort. That is exactly how compliance leaders should approach deployment.

Transaction monitoring is a good example. Many organizations already use rules-based systems, but these often produce high volumes of noise. AI can support better prioritization, pattern recognition, and anomaly detection. It can help identify clusters of conduct that might otherwise remain hidden across vendors, employees, geographies, or payment channels. But the point is not simply to clear alerts faster. The point is to make the monitoring program smarter, more risk-based, and more defensible.
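The prioritization idea above can be sketched with a toy example. The following z-score ranking is purely illustrative; no vendor's monitoring system works this simply, and the `threshold` parameter is an assumption chosen for the sketch. It shows the underlying concept: ranking alerts by how unusual they are, rather than working them in arrival order.

```python
import statistics

def prioritize_alerts(amounts, threshold=2.0):
    """Toy anomaly scoring: flag payments whose z-score exceeds a threshold.

    Real transaction-monitoring models are far richer than this; the function
    only illustrates ranking alerts by how anomalous they are. `threshold`
    is an illustrative tuning parameter, not an industry standard.
    """
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # nothing stands out in a uniform population
    scored = [(abs(a - mean) / stdev, a) for a in amounts]
    # Return the most anomalous payments first
    return sorted(
        [(round(z, 2), a) for z, a in scored if z > threshold],
        reverse=True,
    )

# Twenty routine $100 payments and one $10,000 outlier: only the
# outlier should surface for investigator attention.
flagged = prioritize_alerts([100] * 20 + [10000])
```

The design point, consistent with the paragraph above, is that clearing alerts faster is not the goal; surfacing the right alerts is.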

The same is true in training and communications. Too much compliance training remains generic, static, and detached from actual risk. AI opens the door to role-based, scenario-based, and even timing-based communications. A sales team in a high-risk market should not receive the same examples as procurement professionals dealing with third parties. A manager with hotline escalation responsibilities should not receive the same training as a new hire. AI can help tailor content, refresh scenarios, and improve accessibility. Under the ECCP, that supports effectiveness in training design, communications, and accessibility of guidance.

Speak-up and case management also stand to benefit. AI can help identify repeat issue patterns, detect retaliation indicators, cluster similar allegations, and flag unresolved themes across regions or functions. Done correctly, it can help compliance move from case closure to issue intelligence. That is where a hotline becomes not just a reporting channel but an early warning system.

Governance Is the Price of Admission

Here is where the compliance professional earns his or her stripes. The Moody’s piece is explicit that none of this works without robust governance, trustworthy data, transparency, documentation, validation, and human expertise remaining central to critical decisions. That is the bridge to both the NIST AI Risk Management Framework (NIST AI RMF) and ISO/IEC 42001.

NIST AI RMF gives compliance teams a practical way to think about governance, mapping, measurement, and management. ISO/IEC 42001 provides a management-system structure for implementing AI governance in an enterprise setting. Together with the ECCP, they provide a powerful architecture. The ECCP asks whether your compliance program works. NIST AI RMF helps define and manage AI risk. ISO/IEC 42001 helps operationalize governance and accountability.

What does that mean on the ground for your compliance regime?

It means every AI use case in compliance should have a defined business purpose, an identified owner, approved data sources, documented limitations, escalation criteria, testing protocols, and monitoring for drift or unintended consequences. It means AI outputs should be reviewable. It means prompt logs, source provenance, and validation results should be retained where appropriate. It means employees should know when they are permitted to rely on AI and when human review is mandatory. It means there must be clear boundaries around privacy, privilege, confidentiality, bias, and record retention.
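Those requirements can be made concrete. As a sketch only (the field names below are illustrative assumptions, not a regulatory or industry schema), an AI use-case inventory entry might capture the elements listed above in a form that can be reviewed and tested:

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """Illustrative governance record for one compliance AI use case.

    Field names are assumptions for sketch purposes only, not a standard.
    """
    name: str
    business_purpose: str
    owner: str                      # accountable individual or function
    approved_data_sources: list
    documented_limitations: list
    escalation_criteria: str        # when output must go to human review
    human_review_required: bool
    last_tested: str                # date of most recent validation
    monitoring_notes: list = field(default_factory=list)

    def is_governed(self) -> bool:
        """Crude completeness check: every core governance field is populated."""
        return all([
            self.business_purpose,
            self.owner,
            self.approved_data_sources,
            self.documented_limitations,
            self.escalation_criteria,
            self.last_tested,
        ])

# A hypothetical entry, to show what a populated record looks like.
triage_bot = AIUseCase(
    name="Hotline triage assistant",
    business_purpose="Prioritize incoming allegations for assignment",
    owner="Deputy CCO, Investigations",
    approved_data_sources=["hotline intake system"],
    documented_limitations=["no disciplinary recommendations"],
    escalation_criteria="Any allegation involving senior management",
    human_review_required=True,
    last_tested="2026-03-31",
)
```

An inventory of such records gives the compliance team something auditable: a use case missing an owner or an escalation path fails the completeness check and can be flagged before deployment, not after an incident.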

Most of all, it means compliance should resist the easy sales pitch that AI is a substitute for professional judgment. It is not. It is a force multiplier for judgment.

The Board and Senior Management Imperative

Boards and senior leaders should be asking a straightforward question: are we using AI to make compliance more effective, or are we simply using it to do old tasks faster? Those are not the same thing. A mature answer would include at least five elements. First, a risk-based inventory of compliance AI use cases. Second, governance over data quality and model performance. Third, defined human-review thresholds for consequential decisions. Fourth, ongoing monitoring and periodic validation. Fifth, a feedback loop so lessons from investigations, audits, and operations improve the system over time.

That is very much in line with both the ECCP and the Moody’s article’s emphasis on verifiable data, decision auditability, and governance at scale.

Five Lessons Learned

  1. Start with effectiveness, not efficiency. If AI only helps you do low-value tasks faster, you have not transformed compliance. Use it where it improves risk identification, triage, analysis, and action.
  2. Build around the ECCP. The DOJ already gave compliance professionals the framework. Use AI to strengthen risk assessment, third-party management, investigations, training, and continuous improvement.
  3. Govern the data before you celebrate the tool. Bad data, undocumented prompts, or unvalidated outputs will undermine trust. Governance over data provenance and output review is essential.
  4. Keep humans in the loop where it matters. AI can assist with pattern recognition, drafting, prioritization, and synthesis. It should not replace judgment on materiality, discipline, escalation, privilege, or remediation.
  5. Treat AI as part of your compliance operating model. This is not an innovation side project. It should be documented, tested, monitored, and improved like any other core compliance process.

The bottom line is this: AI offers compliance functions a genuine opportunity to become more effective, more focused, and more business relevant. But that opportunity only becomes real when it is grounded in governance, disciplined by the ECCP, and supported by frameworks like NIST AI RMF and ISO/IEC 42001. Done right, AI will not diminish the role of the compliance professional. It will elevate it.

Categories
Blog

Corporate Value(s), Corporate Risk, and the Board’s Oversight Challenge

There was a time when many executives could treat corporate values as a branding exercise, a recruiting line, or a paragraph on the company website. That time is over. Today, corporate values are operational. They shape customer loyalty, employee engagement, regulatory attention, shareholder expectations, and public trust. Most importantly for boards and compliance professionals, they shape risk.

That is the central lesson of Corporate Value(s) by Jill Fisch and Jeff Schwartz. Their insight is both practical and profound: managers should select the corporate values that maximize long-term economic value, and to do that, they need reliable information about what stakeholders actually care about. The paper does not argue that corporations should become moral philosophers. It argues for something more useful for the compliance function. Corporate values are part of the long-term value equation, and management ignores them at its peril.

Why This Matters to Compliance

For a corporate compliance audience, this is not an abstract governance debate. It is a board oversight issue. It is a cultural issue. It is an internal controls issue. And it is a warning that values misalignment can become a business crisis long before it shows up in a formal investigation or on a quarterly earnings call.

The paper is particularly strong in rejecting two simplistic views. First, it rejects the notion that companies can operate as if values do not matter. Second, it rejects the idea that companies should chase social legitimacy untethered from business reality. Instead, the authors land where sophisticated boards and chief compliance officers should land: values matter because they affect value, and management needs disciplined ways to understand that connection.

Culture as a Control

That is where compliance comes in. Too often, companies treat culture as a soft concept and values as a public relations topic. Yet every experienced compliance professional knows that culture is a control. It influences decision-making when policy manuals are silent, when incentives are misaligned, and when leaders face pressure. Corporate values, when operationalized correctly, help define that culture. They tell employees, managers, and third parties what the company stands for when the choice is not easy, the answer is not obvious, and money is on the line.

The paper notes that values-based concerns now influence a broad range of business decisions, from product design and sourcing to employment policies and public positioning. It also emphasizes that employees, customers, governments, and shareholders all communicate their values and preferences in different ways, and that management must stay attuned to those preferences, as misalignment can carry real economic consequences. That is precisely the language of risk management.

A Governance Issue for the Board

For boards, this means values cannot be siloed in human resources, investor relations, or communications. Values belong in governance. Boards need to ask not only what the company says its values are, but how those values are translated into operations, incentives, escalation, and response. If culture is a control, then values are part of the control environment.

This is also why corporate values should be viewed as a business risk issue. A values mismatch can trigger employee walkouts, consumer backlash, shareholder agitation, government retaliation, or a reputational spiral amplified through social media. The paper offers multiple examples showing how value-related decisions can carry material economic consequences. For the modern board, that means values are no longer a side conversation. They are part of enterprise risk management.

The paper offers another insight that compliance professionals should take seriously. Management often lacks perfect information about stakeholder values, and shareholders face structural impediments in communicating their views clearly. The authors argue that shareholder input can help management better understand public sentiment, reputational risk, and the tradeoffs between values and value. Whether one agrees with every detail of their governance analysis, the broader compliance lesson is straightforward: management needs listening mechanisms before a crisis hits.

Compliance as an Information System

That point should resonate deeply with compliance professionals. A mature compliance program is, at its core, an information system. It is supposed to tell management what it needs to know before misconduct metastasizes. The same is true for values-based risk. If the only time leadership learns that employees, customers, or investors believe the company is out of step is when a boycott begins, or a viral post explodes, the company’s information channels have already failed.

What Boards Should Do

  1. Boards should insist that management identify the company’s most material values-sensitive risk areas. These will vary by industry. For one company, it may be product safety. For another, environmental performance. For another, labor standards, DEI, or political engagement. The important point is that these issues should be mapped as risk categories, not simply discussed as messaging challenges.
  2. Boards should ask whether the company has credible mechanisms to hear from stakeholders before controversy becomes a crisis. The paper emphasizes that employees and customers often have clearer channels to express their values and preferences than shareholders do. A compliance-minded board should ask: Are we learning from all of them? Are we capturing concerns through speak-up systems, culture assessments, employee town halls, customer trends, market testing, and investor engagement? Or are we waiting for a public backlash to tell us what we should already know?
  3. Boards should evaluate whether management is treating corporate culture as a control. This means looking beyond tone at the top to the systems beneath it: incentives, middle-management behavior, escalation pathways, decision rights, and accountability. Values that live only in a code of conduct are decorative. Values that influence promotions, discipline, product choices, third-party oversight, and crisis response become operational.
  4. Boards should ensure that compliance has a seat at the table when values-laden business decisions are made. The compliance function should not decide corporate values. That is not its role. But it should help management test assumptions, identify blind spots, assess stakeholder reactions, and determine whether a proposed course is consistent with the company’s culture and risk appetite. In that sense, compliance serves as both translator and challenger.
  5. Boards should resist the temptation to turn every values issue into a political debate. The paper wisely cautions against viewing corporations as moral leaders first and economic institutions second. That is a sound warning. But there is an equal and opposite danger in pretending that values are irrelevant to business. They are not. The board’s job is not to moralize. It is to govern. And governance today requires management to understand how stakeholder values affect long-term value.

Steps for Chief Compliance Officers

For chief compliance officers, there are some clear, practical steps to take.

Begin by incorporating values-sensitive issues into risk assessment and culture reviews. Build a process to identify where stakeholder expectations may materially affect the company’s operations, reputation, and control environment. Make sure that speak-up and escalation systems can capture values-based concerns, not only legal violations. Work with management to develop an early-warning capability around stakeholder sentiment. Bring boards concrete reporting on culture trends, employee concerns, reputational flashpoints, and areas where the company may be drifting away from its stated values. Finally, pressure-test whether the company’s incentives, communications, and business decisions align with the culture it claims to have.

The Bottom Line

The bottom line is this: corporate values are not soft. They are not ornamental. They are not outside the compliance function’s field of vision. They are part of how companies create value, lose trust, and invite risk. The real challenge for boards and CCOs is not to choose values in the abstract. It is to build the governance and information systems that help management understand stakeholder values before a crisis hits. That is not politics. That is good governance.

Categories
Great Women in Compliance

Great Women in Compliance: Clarity, Confidence, Results: Women Over 50 at Work

In this episode, Sarah Hadden and Caveni Wong explore the unique strengths women over 50 bring to today’s workplace—and why those strengths are often overlooked.

Drawing on a career that spans consulting, sales, and ethics & compliance leadership, Caveni reflects on the power of experience, the value of judgment and relationship-building, and the kind of leadership that doesn’t rely on title or authority. They talk candidly about nonlinear career paths and what it means to reach a stage where you can choose what’s next with clarity and confidence.

Along the way, they find an unexpected metaphor in sourdough bread—patient, resilient, and built over time—much like the careers and capabilities we develop across decades.

Categories
AI Today in 5

AI Today in 5: April 14, 2026, The AI Tastes Like Twinkies Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Kara Swisher says AI ‘tastes like a Twinkie.’ (Fortune)
  2. AI must move beyond name matching in sanctions. (FinTechGlobal)
  3. Healthcare needs to prepare for enforcement around AI use. (HealthcareITNews)
  4. Getting AI insurance. (CCI)
  5. Balancing AI innovation with compliance for RIAs. (FinTechGlobal)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot-What Sherlock Holmes Teaches About Risk, Ethics and Investigations on Amazon.com.

Categories
AI Today in 5

AI Today in 5: April 13, 2026, The AI Governance Framework Edition


Top AI stories include:

  1. Oracle brings storytelling to the heart of compliance with AI. (Yahoo!Finance)
  2. AI is bringing compliance to BioPharma. (PharmTech)
  3. Oracle brings AI agents to financial crime and compliance. (Financial IT)
  4. Building out your AI governance framework. (Bloomberg Law)
  5. AI developments finance pros should be tracking. (MIT)
