
AI Disclosures, Controls, and D&O Coverage: Closing the Governance Gap Around Artificial Intelligence

A new governance gap is emerging around artificial intelligence, and it is one that Chief Compliance Officers, compliance professionals, and boards need to confront now. It sits at the intersection of three areas that too many companies still treat separately: public disclosures, internal controls, and insurance coverage. That siloed approach is no longer sustainable.

As companies speak more confidently about their AI strategies, insurers are becoming more cautious about the risks those strategies create. That tension matters. It signals that the market is beginning to see something many organizations have not yet fully addressed: when a company’s statements about AI outpace its actual governance, the exposure is not merely operational or reputational. It can become a disclosure issue, a board oversight issue, and ultimately a proof-of-governance issue under the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP).

For the compliance professional, this is not simply an insurance story. It is a compliance integration story. The question is whether the company can align its statements about AI, the controls it has in place, and the protections it believes it has in place if something goes wrong.

The New Governance Gap

Many organizations are eager to describe AI as a source of innovation, efficiency, better decision-making, or competitive advantage. Those messages increasingly appear in earnings calls, investor decks, public filings, marketing materials, and board presentations. Yet the underlying governance structures often remain immature. That disconnect is the governance gap.

It appears when management speaks broadly about responsible AI but has not built a complete inventory of AI use cases. It appears when companies discuss oversight but cannot show testing, documentation, or monitoring. It appears when boards assume that insurance will respond to AI-related claims without understanding how new policy language may narrow coverage.

This is where D&O coverage becomes so important. It is not the center of the story, but it is a revealing signal. If insurers are revisiting policy language and introducing exclusions or limitations tied to AI-related conduct, it suggests the market sees governance risk. In other words, the insurance market is sending a message: AI-related claims are no longer hypothetical, and companies that cannot demonstrate disciplined oversight may find that risk transfer is less available than they assumed.

Why the ECCP Should Be the Primary Lens

The DOJ’s ECCP remains the most useful framework for analyzing this issue because it asks exactly the right questions.

Has the company conducted a risk assessment that accounts for emerging risks? Are policies and procedures aligned with actual business practice? Are controls working in practice? Is there proper oversight, accountability, and continuous improvement? Can the company demonstrate all of this with evidence? Those are compliance questions, but they are also the right AI governance questions.

If a company makes public statements about AI capability, oversight, or reliability, the ECCP lens requires more than aspiration. It requires substantiation. Can the company show who owns the AI risk? Can it demonstrate how models or systems are tested? Can it show escalation procedures when problems arise? Can it document how AI-related decisions are monitored, reviewed, and improved over time?

If the answer is no, then the issue is not simply that the company may have overpromised. The issue is that its compliance program may not be adequately addressing a material emerging risk. That is why CCOs should view AI as a cross-functional challenge requiring integration across legal, compliance, technology, risk, audit, investor relations, and the board.

AI Disclosure Must Be Evidence-Based

One of the most practical steps a compliance function can take is to push for an evidence-based disclosure process around AI. This means that public statements about AI should not be driven solely by enthusiasm, market pressure, or executive optimism. They should be grounded in underlying documentation. If the company says it uses AI responsibly, where is the governance framework? If it claims AI improves decision-making, what testing supports that assertion? If it says it has safeguards, where are the control descriptions, monitoring results, and escalation records?

This is not about suppressing innovation. It is about ensuring that disclosure discipline keeps pace with technological ambition. For boards, this means asking harder questions before approving or relying on public AI narratives. For compliance officers, it means helping management build the evidentiary record that turns broad statements into defensible representations.

Controls Must Catch Up to Strategy

This is where the “how-to” work begins. Compliance professionals should start by creating a structured inventory of AI use cases across the enterprise. That inventory should identify where AI is being used, what decisions it informs, what data it relies on, who owns it, and what risks it entails.
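To make the inventory concrete, here is a minimal sketch of what a single inventory record might capture. The field names and sample values are illustrative assumptions, not a prescribed schema; the point is that each use case gets a structured, queryable entry rather than a paragraph in a memo.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One record in an enterprise AI use-case inventory (illustrative fields only)."""
    name: str                       # what the tool or system is
    business_unit: str              # where AI is being used
    decisions_informed: list[str]   # what decisions it informs
    data_sources: list[str]         # what data it relies on
    owner: str                      # who owns it
    risks: list[str] = field(default_factory=list)  # what risks it entails

# Example entry; values are invented for illustration.
inventory = [
    AIUseCase(
        name="Hotline triage assistant",
        business_unit="Ethics & Compliance",
        decisions_informed=["investigation prioritization"],
        data_sources=["hotline reports", "prior case history"],
        owner="Director, Investigations",
        risks=["bias", "over-reliance", "confidentiality"],
    )
]
print(len(inventory), "use case(s) inventoried")
```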

Once that inventory exists, risk tiering should follow. Not every AI use case carries the same compliance significance. A low-risk productivity tool does not need the same oversight as a system that affects investigations, third-party due diligence, customer interactions, financial reporting, or core operational decisions.

From there, the company can design controls proportionate to risk. High-impact uses of AI should have documented governance, human review where appropriate, testing protocols, escalation triggers, and monitoring requirements. The compliance team should be able to answer a simple question: where are the controls, and how do we know they work? That is the heart of the ECCP inquiry.
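A simple sketch of how risk tiering can translate into proportionate controls follows. The screening questions, tier names, and control sets are assumptions for illustration only; a real program would calibrate them to its own risk assessment.

```python
# Illustrative only: the screening questions, tier names, and control sets are
# assumptions, not a standard; calibrate them to your own risk assessment.
CONTROLS_BY_TIER = {
    "low": ["acceptable-use policy attestation"],
    "medium": ["documented owner", "periodic output sampling"],
    "high": [
        "documented governance and testing protocol",
        "human review of consequential outputs",
        "escalation triggers and monitoring requirements",
    ],
}

def required_controls(affects_high_stakes_decisions: bool, uses_sensitive_data: bool) -> list[str]:
    """Map a simple tiering judgment to a proportionate control set."""
    if affects_high_stakes_decisions:
        tier = "high"
    elif uses_sensitive_data:
        tier = "medium"
    else:
        tier = "low"
    return CONTROLS_BY_TIER[tier]

# A third-party due diligence tool touching sensitive data and real decisions:
print(required_controls(affects_high_stakes_decisions=True, uses_sensitive_data=True))
```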

Where NIST AI RMF and ISO/IEC 42001 Fit

This is also where the NIST AI Risk Management Framework and ISO/IEC 42001 become highly practical tools. NIST AI RMF helps organizations govern, map, measure, and manage AI risks. For compliance professionals, this provides a disciplined structure for identifying AI use cases, understanding impacts, assessing reliability, and managing response. It is especially useful in linking abstract AI risk to operational decision-making.

ISO/IEC 42001 brings management system discipline to AI governance. It focuses on defined roles, documented processes, control implementation, monitoring, internal review, and continual improvement. That makes it an excellent bridge between policy and execution. Together, these frameworks help operationalize the ECCP. The ECCP tells you what an effective compliance program should be able to demonstrate. NIST AI RMF helps structure the risk analysis. ISO 42001 helps embed those requirements into a repeatable governance process.

For CCOs, the practical lesson is clear: use these frameworks not as academic overlays, but as working tools to build ownership, documentation, testing, and accountability.

Insurance Is a Governance Input

Companies also need to stop treating insurance as an afterthought. D&O coverage should be considered a governance input, not merely a downstream purchase. If policy language is narrowing around AI-related claims, boards and compliance leaders need to understand what that means. What scenarios might raise disclosure-related allegations? Where is ambiguity in coverage? What assumptions has management made about protection that may no longer hold?

Compliance does not need to become an insurance specialist. But it does need to ensure that disclosure, governance, and risk transfer are aligned. If the company is making strong public claims about AI while carrying unexamined governance weaknesses and uncertain coverage, that is precisely the kind of mismatch that can trigger a crisis.

Closing the Gap Before It Becomes a Failure

The larger lesson is straightforward. AI governance is not simply about technology controls. It is about integration. It is about ensuring that what the company says, what it does, and what it can prove all line up. That is why the governance gap matters so much. It is the space where strategy outruns structure, where disclosure outruns evidence, and where confidence outruns control. For boards and compliance professionals, the task is to close that gap before it becomes a failure.

The companies that do this well will not necessarily be the ones moving the fastest. They will be the ones building documented, tested, monitored, and governed AI programs that stand up to regulatory scrutiny, investor pressure, and real-world disruption. That is not bureaucracy. That is the price of sustainable innovation.


AI as a Force Multiplier for Compliance: From Efficiency Tool to Program Effectiveness

There is a temptation in every wave of new technology to focus first on speed. How much faster can we do the work? How many hours can we save? How many tasks can we automate? Yet for the compliance professional, those are not the right first questions. The right first question is always: does this make our compliance program more effective?

That is why the recent Moody’s discussion of GenAI is so interesting when viewed through a compliance lens. The article describes AI not simply as a productivity engine, but as a tool that changes how professionals interact with information, generate insights, and support decision-making. It emphasizes workflow transformation, role-based support, auditability, data quality, and the need for governance and human oversight. For compliance officers, that is the real story. AI can indeed make work faster. But its true promise is that it can make compliance more targeted, more consistent, more responsive, and more operationally embedded.

The Department of Justice has been telling us for years, through the Evaluation of Corporate Compliance Programs (ECCP), that effectiveness is the standard. The questions are not whether a company has a policy on the shelf or a training module in the system. The questions are whether the company has access to data, whether it uses that data, whether controls are tested, whether issues are triaged appropriately, whether lessons learned are fed back into the program, and whether the program evolves as risks change. AI, properly governed, can help answer yes to each of those questions.

AI and the Compliance Program of the Future

The Moody’s paper notes that GenAI is moving from passive, knowledge-based support toward more action-oriented solutions that can assist with complex, multi-step workflows. That observation should resonate with every Chief Compliance Officer. The future is not an AI toy that drafts emails. The future is an AI-enabled compliance architecture that helps the function move from reactive to proactive.

Consider third-party due diligence. Most compliance teams still struggle with volume, fragmentation, and prioritization. Information sits in onboarding questionnaires, sanctions screens, beneficial ownership reports, payment histories, audit findings, hotline allegations, and open-source media. The challenge is not merely gathering that information. The challenge is turning it into risk-based action. AI can help synthesize disparate information sources, surface red flags, identify missing documentation, and create a more coherent risk picture. Under the ECCP, that supports a more thoughtful, risk-based approach to third-party management.

Take investigations triage. Every mature speak-up program faces the same problem: how to distinguish between the urgent, the important, and the routine. AI can help sort allegations by subject matter, geography, potential legal exposure, prior related issues, implicated business units, and urgency indicators. That does not mean AI decides guilt, materiality, or discipline. It means AI helps compliance direct scarce investigative resources where they matter most. In ECCP terms, it strengthens case handling, responsiveness, consistency, and root-cause readiness.
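As a purely illustrative sketch of that triage support, a prioritization pass might look like the following. The factors and weights are assumptions, and no scoring model should substitute for investigator judgment; the output is a work queue, nothing more.

```python
# Factors and weights are illustrative assumptions; the output is a work queue,
# not a judgment about guilt, materiality, or discipline.
TRIAGE_WEIGHTS = {
    "potential_legal_exposure": 5,
    "urgent_safety_or_retaliation_indicator": 5,
    "senior_leader_implicated": 4,
    "prior_related_issues": 3,
    "high_risk_geography": 2,
}

def triage_score(allegation: dict) -> int:
    """Sum the weights of the risk factors flagged on an allegation."""
    return sum(weight for factor, weight in TRIAGE_WEIGHTS.items() if allegation.get(factor))

allegations = [
    {"id": "C-101", "potential_legal_exposure": True},
    {"id": "C-102", "urgent_safety_or_retaliation_indicator": True, "prior_related_issues": True},
]
queue = sorted(allegations, key=triage_score, reverse=True)
print([a["id"] for a in queue])  # highest-priority allegations first -> ['C-102', 'C-101']
```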

Now think about risk assessment. The best compliance risk assessments are dynamic, not annual rituals. AI can assist in identifying patterns across reports, controls failures, investigation outcomes, gifts and entertainment data, third-party activity, and regulatory developments. It can help compliance professionals see concentrations of risk earlier and with greater context. In a program built around continuous improvement, that is a force multiplier.

Effectiveness, Not Mere Automation

One of the most important lessons from the Moody’s article is that the value of AI lies in supporting higher-value analytical work, not just reducing routine effort. That is exactly how compliance leaders should approach deployment.

Transaction monitoring is a good example. Many organizations already use rules-based systems, but these often produce high volumes of noise. AI can support better prioritization, pattern recognition, and anomaly detection. It can help identify clusters of conduct that might otherwise remain hidden across vendors, employees, geographies, or payment channels. But the point is not simply to clear alerts faster. The point is to make the monitoring program smarter, more risk-based, and more defensible.
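A deliberately simple illustration of the idea follows, using a basic statistical outlier test as a stand-in for the richer pattern recognition described above. The data and threshold are made up for the example; a production program would use validated models and far richer features.

```python
from statistics import mean, stdev

def flag_outlier_payments(amounts: list[float], threshold: float = 3.0) -> list[int]:
    """Return indexes of payments that are extreme outliers by a simple z-score.

    Real monitoring would use richer features (vendor, geography, timing,
    approver) and validated models; this only illustrates the prioritization idea.
    """
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [i for i, a in enumerate(amounts) if abs(a - mu) / sigma > threshold]

payments = [1200, 980, 1100, 1050, 25000, 990, 1010]
print(flag_outlier_payments(payments, threshold=2.0))  # -> [4], the 25,000 payment
```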

The same is true in training and communications. Too much compliance training remains generic, static, and detached from actual risk. AI opens the door to role-based, scenario-based, and even timing-based communications. A sales team in a high-risk market should not receive the same examples as procurement professionals dealing with third parties. A manager with hotline escalation responsibilities should not receive the same training as a new hire. AI can help tailor content, refresh scenarios, and improve accessibility. Under the ECCP, that supports effectiveness in training design, communications, and accessibility of guidance.

Speak-up and case management also stand to benefit. AI can help identify repeat issue patterns, detect retaliation indicators, cluster similar allegations, and flag unresolved themes across regions or functions. Done correctly, it can help compliance move from case closure to issue intelligence. That is where a hotline becomes not just a reporting channel but an early warning system.

Governance Is the Price of Admission

Here is where the compliance professional earns his or her stripes. The Moody’s piece is explicit that none of this works without robust governance, trustworthy data, transparency, documentation, validation, and human expertise remaining central to critical decisions. That is the bridge to both the NIST AI Risk Management Framework (NIST AI RMF) and ISO/IEC 42001.

NIST AI RMF gives compliance teams a practical way to think about governance, mapping, measurement, and management. ISO/IEC 42001 provides a management-system structure for implementing AI governance in an enterprise setting. Together with the ECCP, they provide a powerful architecture. The ECCP asks whether your compliance program works. NIST AI RMF helps define and manage AI risk. ISO/IEC 42001 helps operationalize governance and accountability.

What does that mean on the ground for your compliance regime?

It means every AI use case in compliance should have a defined business purpose, an identified owner, approved data sources, documented limitations, escalation criteria, testing protocols, and monitoring for drift or unintended consequences. It means AI outputs should be reviewable. It means prompt logs, source provenance, and validation results should be retained where appropriate. It means employees should know when they are permitted to rely on AI and when human review is mandatory. It means there must be clear boundaries around privacy, privilege, confidentiality, bias, and record retention.
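One way to turn those expectations into a repeatable check is a completeness test run before any use case goes live or is relied upon. The required field names below mirror the elements listed above; they are assumptions for the sketch, not a mandated schema.

```python
# Field names mirror the elements described above; they are assumptions for the
# sketch, not a mandated schema.
REQUIRED_GOVERNANCE_FIELDS = [
    "business_purpose", "owner", "approved_data_sources", "documented_limitations",
    "escalation_criteria", "testing_protocol", "monitoring_plan",
]

def governance_gaps(use_case: dict) -> list[str]:
    """Return the governance elements that are missing or empty for a use case."""
    return [f for f in REQUIRED_GOVERNANCE_FIELDS if not use_case.get(f)]

draft = {"business_purpose": "Summarize due diligence files", "owner": "VP, Compliance"}
print(governance_gaps(draft))  # everything not yet documented
```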

Most of all, it means compliance should resist the easy sales pitch that AI is a substitute for professional judgment. It is not. It is a force multiplier for judgment.

The Board and Senior Management Imperative

Boards and senior leaders should be asking a straightforward question: are we using AI to make compliance more effective, or are we simply using it to do old tasks faster? Those are not the same thing. A mature answer would include at least five elements. First, a risk-based inventory of compliance AI use cases. Second, governance over data quality and model performance. Third, defined human-review thresholds for consequential decisions. Fourth, ongoing monitoring and periodic validation. Fifth, a feedback loop so lessons from investigations, audits, and operations improve the system over time.

That is very much in line with both the ECCP and the Moody’s article’s emphasis on verifiable data, decision auditability, and governance at scale.

Five Lessons Learned

  1. Start with effectiveness, not efficiency. If AI only helps you do low-value tasks faster, you have not transformed compliance. Use it where it improves risk identification, triage, analysis, and action.
  2. Build around the ECCP. The DOJ already gave compliance professionals the framework. Use AI to strengthen risk assessment, third-party management, investigations, training, and continuous improvement.
  3. Govern the data before you celebrate the tool. Bad data, undocumented prompts, or unvalidated outputs will undermine trust. Governance over data provenance and output review is essential.
  4. Keep humans in the loop where it matters. AI can assist with pattern recognition, drafting, prioritization, and synthesis. It should not replace judgment on materiality, discipline, escalation, privilege, or remediation.
  5. Treat AI as part of your compliance operating model. This is not an innovation side project. It should be documented, tested, monitored, and improved like any other core compliance process.

The bottom line is this: AI offers compliance functions a genuine opportunity to become more effective, more focused, and more business relevant. But that opportunity only becomes real when it is grounded in governance, disciplined by the ECCP, and supported by frameworks like NIST AI RMF and ISO/IEC 42001. Done right, AI will not diminish the role of the compliance professional. It will elevate it.


Culture, Speak-Up, and Human Judgment: The Human Side of AI Governance

Artificial intelligence may be built on data, models, and code, but governance ultimately rests on people. For boards and Chief Compliance Officers, one of the most important questions is not only whether the organization has responsibly approved AI tools, but also whether employees are prepared to challenge them, report concerns, and apply human judgment when something does not look right. In many organizations, the earliest warning system for AI failure is not a dashboard. It is the workforce.

Over the course of this series, I have explored four critical governance challenges in AI: board oversight and accountability, strategy outrunning governance, data governance and privacy, and ongoing monitoring. This final blog post turns to the fifth and most underappreciated challenge of all: culture, speak-up, and human judgment.

Underappreciated because organizations often begin AI governance with structure in mind. They build committees, draft policies, classify risks, and establish approval gates. All of that is necessary. But structure alone is not sufficient. If the human beings closest to the work do not understand their role in AI governance, do not feel empowered to raise concerns, or begin to defer too readily to machine-generated outputs, the governance framework will be weaker than it appears on paper.

This is the point many companies miss. AI governance is not only about the technology. It is about whether the organization’s culture supports the responsible use of technology.

Employees Will See AI Failures First

In many companies, the first person to notice an AI problem will not be a board member, a Chief Executive Officer, or even a member of the governance committee. It will be an employee interacting with the tool in daily operations. It may be the customer service representative who sees the system generating inaccurate responses. It may be the HR professional who notices troubling patterns from an AI-supported screening tool. It may be the sales employee who sees a generative tool overstating product claims. It may be the finance professional who questions an automated summary that does not match underlying records. It may be the compliance analyst who sees a tool being used for an unapproved purpose.

That matters because early visibility is one of the most valuable protections a company can have. But visibility only becomes a control if employees know what to do with what they see. That is why culture is a governance issue. A workforce may spot the problem, but if employees do not understand that AI-related concerns are reportable, are unsure where to raise them, or believe management will ignore them, the warning system fails.

For boards and CCOs, that means AI governance cannot stop at policy creation. It must extend into behavior, reporting norms, and organizational trust.

Speak-Up Culture Is an AI Governance Control

Compliance professionals have long known that a speak-up culture is a control. It is often the first way a company learns of misconduct, process breakdowns, weak supervision, retaliation, harassment, fraud, or control evasion. The same principle now applies with equal force to AI.

Employees may observe biased outputs, inaccurate recommendations, privacy concerns, unexplained model behavior, misuse of tools, inappropriate reliance on machine-generated content, or efforts to bypass required human review. If they do not report those concerns, management may have no timely way to know what is happening.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) remains highly instructive. The ECCP places substantial emphasis on whether employees are comfortable raising concerns, whether the company investigates them appropriately, and whether retaliation is prohibited in practice. Those same questions should now be asked in the context of AI. Does the company’s reporting framework explicitly include AI-related concerns? Are managers trained to recognize and escalate those concerns? Are reports investigated with the same seriousness as other compliance issues? Are employees protected if they raise uncomfortable questions about a tool the business wants to use?

If the answer is no, the company may have AI procedures, but it has not yet embedded AI governance in its culture.

Human Judgment Cannot Be Optional

One of the most significant risks in AI governance is not simply that a model will be wrong. It is that people will stop questioning it. AI systems can produce outputs quickly, fluently, and with apparent confidence. That creates a powerful temptation for users to over-trust the tool. When a system sounds polished, appears efficient, and reduces workload, people may assume that its conclusions deserve deference. This is precisely where governance needs the corrective force of human judgment.

Human judgment cannot be treated as a ceremonial step or a paper requirement. It must be meaningful. That means the people reviewing AI outputs must have the authority, time, training, and confidence to challenge those outputs when needed. A human review requirement that exists only on paper is not much of a safeguard. If reviewers are overloaded, insufficiently trained, or culturally discouraged from slowing the process, the control may be largely illusory.

Boards should care about this because one of the easiest mistakes management can make is to describe human oversight in governance documents without testing whether it is functioning in practice. CCOs should care because this is a classic compliance problem. A control may be designed elegantly but fail in daily operations because the supporting culture is too weak to sustain it.

Training Must Change with AI

A company cannot expect good judgment around AI if it has not trained people on what good judgment looks like. That means AI training should go beyond technical usage instructions. Employees need to understand what risks may arise, what concerns are reportable, what approved use looks like, what prohibited use looks like, and why human challenge matters. Managers need additional training because they are often the first informal escalation point when an employee raises a concern. If managers dismiss AI concerns as overreactions, inconveniences, or resistance to innovation, the speak-up system will quickly lose credibility.

Training should also be role-based. The risks faced by a customer-facing team may differ from those faced by teams in HR, legal, procurement, marketing, finance, or internal audit. A generic AI training module may create awareness, but it will not create the operational judgment needed in high-risk areas.

This is where the NIST AI Risk Management Framework provides practical value. NIST’s emphasis on governance is not limited to formal structures. It contemplates culture, accountability, and the need for organizations to support informed decision-making across the enterprise. ISO/IEC 42001 similarly reinforces the importance of organizational competence, awareness, and defined responsibilities. Both frameworks point to a critical truth: responsible AI use depends not only on controls over the technology, but also on the capabilities of the people who use and oversee it.

Managers Matter More Than Companies Often Realize

If culture is the operating environment of governance, managers are often its most important local translators. An employee may not begin by filing a formal report. More often, an employee may raise a concern informally with a supervisor or colleague. “This output does not seem right.” “I do not think we should be using it this way.” “This seems to be pulling in sensitive information.” “This recommendation may be biased.” “The human review is not really happening anymore.”

The manager’s response in that moment matters enormously. Does the manager take the concern seriously? Does the manager know it should be escalated? Does the manager see it as a governance issue or as resistance to efficiency? Does the manager understand the difference between a minor usability complaint and a potentially significant compliance concern?

This is why boards and CCOs should not think about speak-up solely in hotline terms. AI governance depends on the broader management culture. If supervisors are not equipped to receive and escalate AI concerns appropriately, many issues will die in the middle of the organization before they ever reach a formal channel.

Anti-Retaliation Must Be Real in the AI Context

There is another dimension that cannot be overlooked: the risk of retaliation. In some organizations, employees may hesitate to raise AI concerns because they fear being labeled anti-innovation, obstructionist, or not commercially minded. That creates a subtle but serious governance risk. If the corporate atmosphere celebrates rapid AI adoption without equally celebrating responsible challenge, then employees may conclude that silence is safer than candor.

This is why anti-retaliation messaging must be explicit in the AI context. The company should make clear that raising concerns about inaccurate outputs, misuse, privacy risks, unfairness, or control breakdowns is part of responsible business conduct. It is not a failure to embrace innovation. It is a contribution to the effective governance of innovation.

The CCO should ensure that AI-related concerns are incorporated into existing anti-retaliation frameworks, investigations protocols, and communications. Boards should ask whether employee sentiment data, hotline trends, and internal investigations provide any signal that people are reluctant to question AI initiatives. If the organization is moving aggressively on AI, it should be equally serious about protecting those who raise governance concerns about it.

Documentation and Escalation Still Matter

As with every other aspect of AI governance, culture and judgment must be integrated into the process. A company should document how AI-related concerns can be reported, how they are triaged, who reviews them, what escalation triggers apply, and how resolutions are tracked. Concerns about AI should not be dismissed as vague general complaints. They should be reviewable and analyzable over time.

This is essential not only for accountability but for learning. Patterns in employee concerns may reveal weaknesses in training, design, vendor management, access controls, or oversight. A single report may be an isolated event. Repeated concerns within a single function may point to a systemic governance problem. That is why speak-up is not just about receiving reports. It is about turning those reports into organizational intelligence.

The ECCP again offers a useful framework. It asks whether investigations are timely, whether root causes are examined, and whether lessons learned are fed back into the compliance program. AI governance should work the same way. A reported concern should not end with a narrow answer to the immediate complaint. It should prompt management to ask what the issue reveals about the broader governance environment.

Boards Must Model the Right Tone

This final point may be the most important. Culture is shaped by what leadership rewards, tolerates, and asks about. If the board only asks about AI efficiency, adoption, and speed, management will take the signal. If the board asks whether employees are raising concerns, whether human oversight is meaningful, whether managers are trained, and whether retaliation protections are working, management will take that signal as well.

For CCOs, this is a vital opportunity. The compliance function can help boards understand that governance is not only about structure and controls, but also about whether the organization has preserved the human capacity to question, escalate, and correct. In the AI context, that may be the most important governance capability of all.

Because in the end, even the most advanced system will not govern itself. An enterprise must govern it. That requires culture. It requires trust. It requires the courage to speak up. And it requires strong human judgment to look at an impressive output and still ask, “Is this right?”

The Human Side of Governance Is the Decisive Side

This final article brings the series back to a simple truth. AI governance is not only about what the company builds. It is about how the company behaves.

Boards may establish oversight. Management may create structures. Compliance may build controls. But if employees are not prepared to report concerns or exercise judgment, the organization will remain vulnerable. A strong AI governance program does not merely control the system. It empowers the people around the system to challenge it responsibly.

That is the human side of governance, and in many ways it is the decisive side. 


Ongoing Monitoring: Why AI Governance Begins After Launch

In this blog post, we turn to the fourth major governance challenge in AI: ongoing monitoring. This is one of the most persistent weaknesses in AI governance. Organizations may build an intake process. They may create an approval committee. They may conduct risk reviews, privacy assessments, and validation testing before launch. All of that is important. But it is not enough.

AI risk does not freeze at the moment of approval. It changes over time. Use cases evolve. Employees adapt tools in unexpected ways. Vendors modify models. Controls weaken in practice. Regulatory expectations shift. What looked reasonable at launch may become inadequate six weeks later.

That is why ongoing monitoring is not an optional enhancement to AI governance. It is a core governance requirement. For boards and CCOs, the central question is not simply whether the company approved AI responsibly. It is whether the company has the discipline to govern it continuously once it is in the wild.

Approval Is Not Governance

One of the great temptations in AI governance is to confuse approval with control. A business unit proposes a use case, a committee reviews it, guardrails are listed, and the tool goes live. At that point, many organizations behave as though the governance work is largely complete. It is not.

Approval is a moment. Governance is a process. The problem is that companies often put their best people, clearest thinking, and highest scrutiny into the approval stage, then shift immediately into operational mode without building the same discipline around post-launch oversight. That leaves management blind to how the system actually performs under real-world conditions.

The Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) is especially instructive here. The ECCP does not ask merely whether a company has policies on paper. It asks whether the program works in practice, whether controls are tested, whether issues are investigated, and whether lessons learned are incorporated back into the compliance framework. AI governance should be viewed through the same lens. The question is not whether a control was described at launch. The question is whether that control continues to function and whether management would know if it stopped.

Why AI Risks Change After Launch

Post-deployment risk in AI does not arise because management failed to care on Implementation Day. It arises because AI systems operate in dynamic environments. A model may begin to drift as conditions change. A tool approved for one limited purpose may gradually be used for broader or higher-risk decisions. Employees may find workarounds that bypass the intended controls. Human reviewers may begin by scrutinizing outputs closely but, over time, may become overconfident, overloaded, or simply too reliant on the system. Vendors may update underlying functionality without the company fully appreciating the consequences. New regulations or regulatory interpretations may alter the risk landscape. Inputs may change. Outputs may become less reliable. Bias may surface in ways not identified in initial testing.

In other words, AI governance risk is not static. It is operational. That is why boards and CCOs must resist the notion that initial approval is the hardest part. In many respects, ongoing monitoring is harder because it requires sustained attention, clear metrics, escalation discipline, and the willingness to revisit prior assumptions.

The Governance Question

After implementation, the governance question changes. It is no longer simply, “Was this use case approved?” It becomes, “Is the use case still operating as expected, within risk tolerance, and under effective control?” That sounds simple, but it requires a much more mature oversight model than many companies currently have. It requires management to define what should be monitored, how frequently, by whom, and what changes or anomalies trigger escalation. It requires a reporting structure that does not simply celebrate adoption or efficiency gains, but surfaces incidents, deviations, near misses, and control fatigue.

For the board, the challenge is to insist on post-launch visibility. Board reporting on AI should not end with inventories and implementation updates. It should include information about ongoing performance, exception trends, complaints, incidents, validation results, vendor changes, policy breaches, and remediation efforts. A board that hears only that AI adoption is accelerating may not hear that AI governance is working.

For the CCO, the challenge is even more immediate. Compliance must ask whether the organization is gathering evidence that controls continue to function in practice. If it is not, then the governance program is still immature, no matter how polished its approval process may appear.

Monitoring What Matters

It all begins by identifying the right things to monitor. This cannot be a generic exercise. Monitoring should be tied to the specific use case, its risk classification, and its control environment. But there are some recurring categories that boards and CCOs should expect to see.

  1. Performance should be monitored. Is the tool still delivering outputs that are accurate, reliable, and appropriate for the intended purpose? Have error rates changed? Are there signs of drift or degraded quality?
  2. Control effectiveness should be monitored. Are human review requirements actually being followed? Are approval restrictions, access controls, or usage limitations still operating as designed? Is there evidence that employees are bypassing or weakening controls?
  3. Incidents and complaints should be monitored. Has the tool produced problematic results? Have customers, employees, or managers raised concerns? Have there been internal reports about bias, inaccuracy, misuse, or confidentiality risks?
  4. Changes in scope should be monitored. Is the tool still being used for the original purpose, or has it drifted into new contexts? Scope creep is one of the oldest compliance problems in business, and AI is no exception.
  5. External change should be monitored. Has a vendor updated the model? Have relevant laws, guidance, or industry expectations changed? Has a new regulatory concern emerged that requires reevaluation?

This is where the NIST AI Risk Management Framework is especially useful. NIST emphasizes that organizations must govern, measure, and manage AI risk over time, not simply identify it once. ISO/IEC 42001 reaches the same conclusion from a management systems perspective by requiring continual improvement, internal review, and adaptive controls. Both frameworks point to the same truth: effective AI governance is iterative, not episodic.

The CCO’s Role in Governance

For compliance professionals, ongoing monitoring is where the AI governance conversation becomes most familiar. This is where the CCO brings real institutional value. Compliance understands that controls weaken over time. Training decays. Workarounds emerge. Policies lose operational traction. Reporting channels capture issues others do not see. Root cause analysis matters. Corrective action must be tracked to closure. These are not new lessons. They are the daily work of compliance. AI gives them a new domain.

The CCO should insist that AI use cases have documented post-launch monitoring plans. These should identify the responsible owner, the metrics to be reviewed, the review frequency, the escalation triggers, and the process for documenting findings and remediation. High-risk use cases should not be left to passive observation. They should be actively governed.
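A minimal sketch of what such a documented monitoring plan might look like in machine-readable form, with a simple threshold check, appears below. The metrics, thresholds, and owner are placeholders to show the shape of the artifact, not recommended values.

```python
# Placeholders throughout; the point is that the plan is a documented artifact
# with an owner, metrics, a cadence, and explicit escalation triggers.
monitoring_plan = {
    "use_case": "Third-party screening assistant",
    "owner": "Director, Third-Party Risk",
    "review_frequency": "monthly",
    "metrics": {
        # metric name -> (comparison, threshold)
        "false_negative_rate": ("max", 0.02),
        "human_review_completion_rate": ("min", 0.95),
        "out_of_scope_uses_detected": ("max", 0),
    },
    "escalation": "Notify CCO and pause the use case if any threshold is breached",
    "findings_log": "GRC system, record type AI-MON",
}

def breached_metrics(observed: dict, plan: dict) -> list[str]:
    """Compare observed values against the plan and list any breaches or gaps."""
    issues = []
    for name, (kind, threshold) in plan["metrics"].items():
        value = observed.get(name)
        if value is None:
            issues.append(f"{name}: not measured")
        elif kind == "max" and value > threshold:
            issues.append(f"{name}: {value} exceeds {threshold}")
        elif kind == "min" and value < threshold:
            issues.append(f"{name}: {value} below {threshold}")
    return issues

print(breached_metrics({"false_negative_rate": 0.03, "human_review_completion_rate": 0.91},
                       monitoring_plan))
```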

The CCO should also ensure that AI monitoring is connected to the broader compliance ecosystem. Employee concerns raised through speak-up channels may reveal issues with the model. Internal investigations may expose misuse. Third-party due diligence may uncover changes to vendors. Training gaps may explain repeated incidents. AI governance should not be isolated from these functions. It should be integrated with them.

This is also where the CCO can most effectively help the board. Rather than presenting AI as a series of isolated technical matters, the CCO can frame post-launch governance in familiar compliance terms: monitoring, testing, escalation, remediation, and lessons learned.

Board Practice: Ask for More Than Adoption Metrics

One of the most important disciplines boards can develop is to stop mistaking usage information for governance information.

Management may report that AI adoption is growing, that productivity gains are material, or that pilot programs are expanding. Those data points may be relevant, but they are not a form of governance assurance. A board should want to know whether controls are operating, whether incidents are increasing, whether certain business units generate more exceptions, whether human review remains meaningful, and whether management has paused or modified any use cases based on real-world experience.

This is where board oversight becomes genuinely valuable. When the board asks for evidence of ongoing monitoring, it changes management behavior. It signals that AI success will not be measured solely by speed or efficiency, but also by discipline and resilience.

Boards should also ensure that high-risk use cases receive enhanced visibility. Not every AI tool merits the same level of board attention. But where AI affects regulated interactions, employment decisions, sensitive data, financial reporting, significant customer outcomes, or reputationally sensitive functions, ongoing board-level reporting should be expected.

Escalation and Remediation Must Be Built In

Monitoring matters only if it leads to action. There must be clear escalation and remediation protocols. When a material issue emerges, who gets notified? Can the use case be paused? Who determines whether the problem is technical, operational, legal, or cultural? How are facts gathered? How are corrective actions assigned? When is the board informed? How is the lesson fed back into policy, training, vendor management, or approval standards?
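Because these protocols should be decided in advance, some organizations express them as a simple severity-to-response table. The sketch below is illustrative only; the severity labels, recipients, and actions are assumptions, and the value lies in agreeing on them before an incident, not improvising during one.

```python
# Severity labels, recipients, and actions are assumptions; the value is deciding
# them in advance rather than improvising during an incident.
ESCALATION_PLAYBOOK = {
    "low":      {"notify": ["use-case owner"], "pause_use_case": False, "inform_board": False},
    "material": {"notify": ["use-case owner", "CCO", "Legal"], "pause_use_case": True, "inform_board": False},
    "critical": {"notify": ["CCO", "Legal", "CISO", "CEO"], "pause_use_case": True, "inform_board": True},
}

def escalate(severity: str) -> dict:
    """Return the pre-agreed response for an AI incident of a given severity."""
    if severity not in ESCALATION_PLAYBOOK:
        raise ValueError(f"Unknown severity {severity!r}; triage the incident before escalating")
    return ESCALATION_PLAYBOOK[severity]

print(escalate("material"))
```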

These processes should not be improvised. They should be documented. The organization should know in advance which incidents require escalation, which temporary controls may be imposed, and how remediation is tracked.

This is another place where the ECCP provides a useful governance model. DOJ expects companies not only to identify misconduct but also to investigate it, understand its root causes, and implement improvements that reduce the risk of recurrence. AI governance should work the same way. If a model fails or a control weakens, management should not merely fix the immediate problem. It should ask what the failure reveals about the program itself.

Documentation Is the Proof

As with every other element of effective governance, documentation is what turns intention into evidence. Post-launch AI governance should generate records that demonstrate monitoring occurred, issues were surfaced, escalations were handled, and remediation was completed. That may include performance reviews, validation updates, incident logs, committee minutes, complaint summaries, control testing records, vendor change notices, and corrective action trackers.

Without such documentation, management may believe it is effectively monitoring AI, but it will struggle to prove it to internal audit, regulators, or the board. More importantly, it will struggle to learn from experience in a disciplined way. A company that documents ongoing monitoring creates institutional memory. It can compare use cases, detect patterns, and refine its oversight model over time. That is how governance matures.

AI Governance Starts After Launch

The hardest truth in AI governance may be this: launching the tool is often the easiest part. The real challenge begins afterward. That is when optimism meets operational reality. That is when human reviewers become tired. That is when vendors update products. That is when regulators begin asking harder questions. That is when small problems become visible, or invisible, depending on whether the company has built a monitoring system capable of finding them.

For boards and CCOs, this is where governance earns its name. If the organization can monitor, escalate, remediate, and improve, then AI oversight has substance. If it cannot, then the company has not really governed AI at all; it has only approved it.

In the next and final blog post in this series, I will turn to the fifth governance challenge: culture, speak-up, and human judgment, because in many organizations, the first people to see an AI problem will not be the board, the CCO, or the governance committee. It will be the employee closest to the work.


AI Compliance as a Competitive Advantage: Turning Governance Into ROI

In too many organizations, “AI compliance” is treated like a speed bump. Something to route around, manage after launch, or outsource to a vendor deck and a policy that nobody reads. That mindset is not only outdated but also expensive. In 2026, mature AI governance is becoming a commercial differentiator because customers, regulators, employees, and business partners increasingly ask the same question: Can you prove your system is trustworthy?

The most underappreciated truth is that AI risk is not “an AI team problem.” It is a business-process problem, expressed through data, decisions, third parties, and change control. The Department of Justice Evaluation of Corporate Compliance Programs (ECCP) has never been about perfect paperwork; it has been about whether a program is designed, implemented, resourced, tested, and improved. If you can translate that posture into AI, you can convert “compliance cost” into “credibility capital.”

A cautionary backdrop shows why. The EEOC’s 2023 settlement with iTutorGroup illustrates the point: automated hiring screening that disadvantages older workers can lead to legal exposure, remediation costs, and reputational damage. The details matter less than the pattern: when algorithmic decisions are not governed, the business eventually pays the bill. The compliance professional should see the pivot clearly: governance is the mechanism that lets you move fast without becoming reckless.

From a build-from-scratch, low-to-medium maturity posture, the win is not sophistication. The win is repeatability. If you build an AI governance framework aligned to NIST AI RMF (govern, map, measure, manage), structured through ISO/IEC 42001’s management-system discipline, and cognizant of EU AI Act risk tiering, you get something the business loves: a predictable path from idea to deployment. Today, I will explore five ways mature AI compliance can become a competitive advantage, each with a practical view of how a compliance-focused GenAI assistant can support business processes.

1) Sales and Customer Trust

Trust is a sales feature now, even when marketing refuses to call it that. Customers increasingly ask about data use, model behavior, security controls, and human oversight, and they are doing it in procurement questionnaires and contract negotiations. A mature governance framework lets you answer quickly, consistently, and with evidence, thereby shortening sales cycles and reducing late-stage deal friction. A compliance GenAI can support this by drafting standardized responses from approved trust artifacts such as policies, model cards, DPIAs, and audit summaries; flagging gaps; and routing exceptions to Legal and Compliance before the business overpromises.

For compliance professionals, this lesson is even more stark, as the ‘customers’ of a corporate compliance program are your employees. Some key KPIs you can track are average time to complete AI security and compliance questionnaires; percentage of deals requiring AI-related contractual concessions; number of customer-facing AI disclosures issued with approved templates; and percentage of AI systems with current model documentation and ownership attestations.
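As a small illustration of how the first of those KPIs might be computed from routine records, consider the sketch below. The record layout and field names are assumptions; the data would normally come from a deal desk or GRC system.

```python
from datetime import date

# The record layout is an assumption for the sketch; the data would normally come
# from a deal desk or GRC system.
questionnaires = [
    {"deal": "ACME-2026-014", "received": date(2026, 1, 5), "completed": date(2026, 1, 9)},
    {"deal": "ZENITH-2026-031", "received": date(2026, 1, 12), "completed": date(2026, 1, 20)},
]

def avg_turnaround_days(items: list[dict]) -> float:
    """Average days from receipt to completion for finished questionnaires."""
    durations = [(q["completed"] - q["received"]).days for q in items if q.get("completed")]
    return sum(durations) / len(durations) if durations else 0.0

print(avg_turnaround_days(questionnaires))  # -> 6.0
```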

2) Regulatory Credibility

Regulators are not impressed by ambition; controls persuade them. NIST AI RMF provides a common language to demonstrate that you mapped use cases, measured risks, and managed them over time, while ISO/IEC 42001 imposes discipline on accountability, documentation, and continual improvement. The EU AI Act’s risk-based approach adds an organizing principle: classify systems, apply controls proportionate to risk, and prove that you did it. A compliance GenAI can help by maintaining a living inventory, prompting owners to complete quarterly attestations, drafting control narratives aligned with the frameworks, and assembling regulator-ready “evidence packs” that demonstrate governance in operation rather than on paper.

For compliance professionals, this lesson is about gap analysis: if you have not yet mapped your existing internal controls to GenAI and broader AI governance requirements, you should do so. Some key KPIs you can track are percentage of AI systems risk-tiered and documented; time to produce an evidence pack for a high-impact system; number of material control exceptions and time-to-remediation; and frequency of risk reviews for high-impact systems.

3) Faster Product Approvals and Safer Deployment

Speed comes from clarity, not from cutting corners. When decision rights, review thresholds, and required artifacts are defined up front, product teams stop guessing what Compliance will require at the end. That is the management-system advantage: ISO/IEC 42001 treats AI governance like a repeatable operational process with gates, owners, and records, rather than a series of one-off debates. A compliance GenAI can support the workflow by pre-screening new use-case intake forms, recommending the correct risk tier under EU AI Act concepts, suggesting required testing (bias, privacy, safety), and generating the first draft of a launch checklist that the product team can execute.

For compliance professionals, this lesson is that you must run compliance at the speed of your business operations. Some key KPIs you can track are: cycle time from AI intake to approval; percent of launches that pass on first review; number of post-launch “surprise” issues tied to missing pre-launch controls; and percentage of models with human-in-the-loop controls when required.

4) Talent, Recruiting, and Internal Confidence

Top performers do not want to work in a company that treats AI like a toy and compliance like a nuisance. Mature governance creates psychological safety inside the organization: employees know what is permitted, what is prohibited, and how to raise concerns. It also improves recruiting because candidates, especially in technical roles, ask about responsible AI practices, data governance, and ethical guardrails. A compliance GenAI can support internal confidence by serving as the first-line “policy concierge,” answering questions with approved guidance, directing employees to the correct procedures, and logging common questions so Compliance can improve training and communications.

For compliance professionals, this fits squarely within the DOJ mandate for compliance to lead efforts in institutional justice and fairness. Some key KPIs you can track include training completion and comprehension metrics for AI use; the number of AI-related helpline inquiries and their resolution times; employee survey results on comfort raising AI concerns; and the percentage of AI use cases with documented business-owner accountability.

5) Lower Cost of Incidents and More Resilient Operations

AI incidents are rarely just “bad outputs.” They are process failures: poor data lineage, uncontrolled model changes, vendor opacity, missing logs, weak access controls, or no escalation path when harm appears. NIST AI RMF’s “measure” and “manage” functions emphasize monitoring, drift detection, incident response, and continuous improvement, which is precisely how you reduce the frequency and severity of failures. A compliance GenAI can support incident resilience by guiding teams through an AI incident response playbook, helping triage severity, ensuring evidence is preserved (audit logs, prompts, outputs, approvals), and generating lessons-learned reports that connect root cause to control enhancements.

For compliance professionals, this lesson is about resilience: treat AI incidents as process failures to be investigated and remediated, not as one-off technical glitches. Some key KPIs you can track include the number of AI incidents by severity tier; mean time to detect and mean time to remediate; the percentage of high-impact models with drift-monitoring and alert thresholds; and the percentage of third-party AI providers subject to change-control notification requirements.

What “Mature Governance” Looks Like When You Are Building From Scratch

Do not start with a 60-page policy. Start with a few non-negotiables that scale:

  • Inventory and classification: Create a single inventory of GenAI assistants, ML models, and automated decision systems. Classify them by impact using EU AI Act concepts (high-impact versus low-impact) and your own business context.
  • Accountability and decision rights: Assign an owner for each system and require periodic attestations for the highest-risk categories.
  • Standard artifacts: Use lightweight model documentation, data lineage notes, and disclosure templates. If it is not documented, it does not exist for governance.
  • Human oversight and logging: Define when human-in-the-loop is mandatory and ensure logs capture who approved what, when, and why.
  • Third-party AI controls: Contract for transparency, audit support, change notification, and security requirements. Vendor opacity is not a strategy.
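The human oversight and logging non-negotiable above lends itself to very light tooling. Here is a minimal sketch of an approval log capturing who approved what, when, and why; the file name and fields are illustrative assumptions rather than a prescribed format.

```python
import json
from datetime import datetime, timezone

def log_approval(system: str, decision: str, approver: str, rationale: str,
                 path: str = "ai_approval_log.jsonl") -> dict:
    """Append a who/what/when/why record for a human-in-the-loop approval."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "decision": decision,
        "approver": approver,
        "rationale": rationale,
    }
    with open(path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")
    return record

log_approval(system="Resume screening assistant",
             decision="override model recommendation",
             approver="HR Manager, EMEA",
             rationale="Relevant experience not captured in structured data")
```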

This is where ECCP thinking helps. The question is not whether you have a policy. The question is whether the policy is operationalized, tested, and improved. That is the bridge from compliance to competitive advantage.

If you want AI compliance to be a competitive advantage, treat it like a management system that produces evidence, not like a policy library that produces comfort. When governance becomes repeatable, the business can move faster, regulators become more confident, and customers see the difference. That is not a cost center. That is credibility you can take to the bank.


Innovation in Compliance: Navigating Cybersecurity Compliance: From Physical Audits to AI Frameworks with Lori Crooks

Innovation is present in many areas, and compliance professionals must not only be prepared for it but also actively embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox visits with Lori Crooks, a seasoned professional in the field of cybersecurity and audit assessments, to discuss the evolution of auditing practices from physical infrastructure to cloud and AI.

Lori shares insights from her extensive career, highlighting key federal compliance frameworks like NIST 800-53, FedRAMP, and NIST 800-171. Lori stresses the importance of proactive compliance strategies and scalable GRC programs. As AI integration accelerates, she also addresses the challenges of adapting compliance frameworks to keep pace with technological advancements and the need to foster collaboration within organizations to effectively meet regulatory requirements.

Key highlights:

  • Federal Auditing Frameworks
  • Proactive Compliance Strategies
  • Scalable GRC Programs
  • AI and Compliance Landscape
  • Future of Auditing in the Age of AI

Resources:

Lori Crooks on LinkedIn

Cadra

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Check out my latest book, Upping Your Game-How Compliance and Risk Management Move to 2023 and Beyond, available from Amazon.com.

Innovation in Compliance was recently honored as the number 4 podcast in Risk Management by 1,000,000 Podcasts.


Daily Compliance News: August 31, 2023 – The Switzerland AML Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world: compliance, ethics, risk management, leadership, or general interest for the compliance professional.

  • Goldman sanctioned for ephemeral messaging compliance failures. (WSJ)
  • NIST framework and AI. (Bloomberg Law)
  • China crackdown rips through healthcare industry corruption. (FT)
  • Switzerland unveils money-laundering crackdown. (FT)


Innovation in Compliance – Travis Howerton on Automating Security & Compliance

In this episode, Tom welcomes back Travis Howerton, and they explore the importance of NIST 800-53 Rev. 5, the latest version of the National Institute of Standards and Technology’s security guidance for organizations. With new controls to address privacy and a heightened focus on supply chain and third-party risk, this version of the NIST standard is essential for organizations seeking access to government contracts and revenue, and it is increasingly important in protecting organizations from cyberattacks. Automation is also becoming increasingly necessary to help organizations meet these standards, highlighting the need for continuous improvement of security measures. This episode goes in-depth on NIST 800-53 Rev. 5, making it a must-listen for organizations looking to stay secure and compliant.

The US government is increasingly turning to automation and AI to meet its security and compliance standards. With the transition of FedRAMP from guidance to law, companies are now required to use it and meet certain cybersecurity standards to do business with the US government. NIST 800-53 Rev. 5 addresses regulatory change around privacy, driven by GDPR and similar regulations, and includes new control families and changes to existing ones.

As the government continues to revise its standards, the need for automation is becoming increasingly important. The National Institute of Standards and Technology (NIST), a standards body within the federal government, works closely with the Open Security Controls Assessment Language (OSCAL) team to develop standards, creating an open-source repo on GitHub and building communities of interest. NIST also works with other government agencies, tool providers, and industry to develop standards.

FedRAMP provides clarity of goal for vendors and customers, but it is expensive and time-consuming to achieve. Cybersecurity is no longer a cost center; it is a requirement for doing business with the US government. The Department of Defense requires companies to meet certain cybersecurity standards before it will do business with them, and other agencies are taking similar stances. Companies are now expected to have a compliance program in place to win and keep that business. Cybersecurity is now seen as one of the top risks to businesses, creating legal risk, revenue loss, and embarrassment.

Key Highlights

  • NIST 800-53 Rev. 5
  • NIST and FedRAMP
  • Cybersecurity Requirements
  • Cybersecurity Regulations
  • Continuous Improvement of Standards

 Resources

 Travis Howerton on LinkedIn

RegScale

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn


ChatGPT for the Compliance Professional

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to more fully explore a subject. In this episode, Matt and I take a deep dive into ChatGPT, a natural language processing tool trained on vast amounts of written content from the Internet. We discuss the impact of the Biden administration’s proposals for AI, NIST’s voluntary AI framework, and the utility of ChatGPT in the workplace. What should your organization consider when incorporating AI into both its shipping decisions and mission-critical processes? If you’re interested in efficient and advanced AI technology, you don’t want to miss this episode.

Key Highlights Include

  • Impact of ChatGPT on Jobs
  • The Quality of ChatGPT for Non-English Speakers
  • The Biden Administration’s Nonbinding Guidelines for Artificial Intelligence.
  • The Benefits of Adopting a Voluntary AI Framework by NIST for Defense Contractors
  • The Impact of Artificial Intelligence on Shipping and Work Processes

 Notable Quotes

  1. “ChatGPT can answer pretty much anything. It won’t necessarily tell you where it is getting this information. It will just give you information, pretty much like the way, Tom, I am answering your question right now. Just imagine a text-based bot answering those questions in the same way. That’s what it is.”
  2. “Will it make your job easier? Probably, for a lot of people who struggle to come up with written content. Yes, it could. But specifically then for compliance officers, and let’s bring it back to what matters for our audience: will ChatGPT as used by others make my job harder? Compliance officers, now I think, actually, you have a lot to worry about there, and we could get into that.”
  3. “But I just view this as a huge boon to anyone who is interested in research, anyone who is interested in learning. It can’t replace the weekly and business journalist, Matt. So you’re good to go at Radical Compliance.”
  4. “But you have identified really, I think, the heart of the problem that compliance officers need to think about now. Because to me, it’s just one more tool.”