AI Today in 5: April 10, 2026, The Missing Signals Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. Biggest defense against AI: trust. (FT)
  2. Missing signals in AI compliance. (FinTech Global)
  3. Why AI-first compliance programs fail. (Wolters Kluwer)
  4. The risks of AI-driven hiring. (Staffing Industry Analysts)
  5. AI as a competitive necessity. (Healthcare IT News)

For more information on the use of AI in compliance programs, my new book, Upping Your Game, is available. You can purchase a copy on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Ongoing Monitoring: Why AI Governance Begins After Launch

In this blog post, we turn to the fourth major governance challenge in AI: ongoing monitoring. This is one of the most persistent weaknesses in AI governance. Organizations may build an intake process. They may create an approval committee. They may conduct risk reviews, privacy assessments, and validation testing before launch. All of that is important. But it is not enough.

AI risk does not freeze at the moment of approval. It changes over time. Use cases evolve. Employees adapt tools in unexpected ways. Vendors modify models. Controls weaken in practice. Regulatory expectations shift. What looked reasonable at launch may become inadequate six weeks later.

That is why ongoing monitoring is not an optional enhancement to AI governance. It is a core governance requirement. For boards and CCOs, the central question is not simply whether the company approved AI responsibly. It is whether the company has the discipline to govern it continuously once it is in the wild.

Approval Is Not Governance

One of the great temptations in AI governance is to confuse approval with control. A business unit proposes a use case, a committee reviews it, guardrails are listed, and the tool goes live. At that point, many organizations behave as though the governance work is largely complete. It is not.

Approval is a moment. Governance is a process. The problem is that companies often put their best people, clearest thinking, and highest scrutiny into the approval stage, then shift immediately into operational mode without building the same discipline around post-launch oversight. That leaves management blind to how the system actually performs under real-world conditions.

The Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) is especially instructive here. The ECCP does not ask merely whether a company has policies on paper. It asks whether the program works in practice, whether controls are tested, whether issues are investigated, and whether lessons learned are incorporated back into the compliance framework. AI governance should be viewed through the same lens. The question is not whether a control was described at launch. The question is whether that control continues to function and whether management would know if it stopped.

Why AI Risks Change After Launch

Post-deployment risk in AI does not arise because management failed to care on Implementation Day. It arises because AI systems operate in dynamic environments. A model may begin to drift as conditions change. A tool approved for one limited purpose may gradually be used for broader or higher-risk decisions. Employees may find workarounds that bypass the intended controls. Human reviewers may begin by scrutinizing outputs closely but, over time, may become overconfident, overloaded, or simply too reliant on the system. Vendors may update underlying functionality without the company fully appreciating the consequences. New regulations or regulatory interpretations may alter the risk landscape. Inputs may change. Outputs may become less reliable. Bias may surface in ways not identified in initial testing.

In other words, AI governance risk is not static. It is operational. That is why boards and CCOs must resist the notion that initial approval is the hardest part. In many respects, ongoing monitoring is harder because it requires sustained attention, clear metrics, escalation discipline, and the willingness to revisit prior assumptions.

The Governance Question

After implementation, the governance question changes. It is no longer simply, “Was this use case approved?” It becomes, “Is the use case still operating as expected, within risk tolerance, and under effective control?” That sounds simple, but it requires a much more mature oversight model than many companies currently have. It requires management to define what should be monitored, how frequently, by whom, and what changes or anomalies trigger escalation. It requires a reporting structure that does not simply celebrate adoption or efficiency gains, but surfaces incidents, deviations, near misses, and control fatigue.

For the board, the challenge is to insist on post-launch visibility. Board reporting on AI should not end with inventories and implementation updates. It should include information about ongoing performance, exception trends, complaints, incidents, validation results, vendor changes, policy breaches, and remediation efforts. A board that hears only that AI adoption is accelerating may never learn whether AI governance is working.

For the CCO, the challenge is even more immediate. Compliance must ask whether the organization is gathering evidence that controls continue to function in practice. If it is not, then the governance program is still immature, no matter how polished its approval process may appear.

Monitoring What Matters

It all begins by identifying the right things to monitor. This cannot be a generic exercise. Monitoring should be tied to the specific use case, its risk classification, and its control environment. But there are some recurring categories that boards and CCOs should expect to see.

  1. Performance should be monitored. Is the tool still delivering outputs that are accurate, reliable, and appropriate for the intended purpose? Have error rates changed? Are there signs of drift or degraded quality?
  2. Control effectiveness should be monitored. Are human review requirements actually being followed? Are approval restrictions, access controls, or usage limitations still operating as designed? Is there evidence that employees are bypassing or weakening controls?
  3. Incidents and complaints should be monitored. Has the tool produced problematic results? Have customers, employees, or managers raised concerns? Have there been internal reports about bias, inaccuracy, misuse, or confidentiality risks?
  4. Changes in scope should be monitored. Is the tool still being used for the original purpose, or has it drifted into new contexts? Scope creep is one of the oldest compliance problems in business, and AI is no exception.
  5. External change should be monitored. Has a vendor updated the model? Have relevant laws, guidance, or industry expectations changed? Has a new regulatory concern emerged that requires reevaluation?

This is where the NIST AI Risk Management Framework is especially useful. NIST emphasizes that organizations must govern, measure, and manage AI risk over time, not simply identify it once. ISO/IEC 42001 reaches the same conclusion from a management systems perspective by requiring continual improvement, internal review, and adaptive controls. Both frameworks point to the same truth: effective AI governance is iterative, not episodic.

The CCO’s Role in Governance

For compliance professionals, ongoing monitoring is where the AI governance conversation becomes most familiar. This is where the CCO brings real institutional value. Compliance understands that controls weaken over time. Training decays. Workarounds emerge. Policies lose operational traction. Reporting channels capture issues others do not see. Root cause analysis matters. Corrective action must be tracked to closure. These are not new lessons. They are the daily work of compliance. AI gives them a new domain.

The CCO should insist that AI use cases have documented post-launch monitoring plans. These should identify the responsible owner, the metrics to be reviewed, the review frequency, the escalation triggers, and the process for documenting findings and remediation. High-risk use cases should not be left to passive observation. They should be actively governed.
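
To make that expectation concrete, here is a minimal sketch of what a documented post-launch monitoring plan might look like if captured as structured data rather than prose. It is an illustration only; the field names, metrics, and thresholds are hypothetical assumptions, not drawn from any framework. The point is that the owner, metrics, cadence, and escalation triggers are defined in advance, so a breach can be detected mechanically rather than noticed by accident.

```python
# Illustrative sketch only: field names and thresholds are hypothetical,
# not a prescribed standard. The point is that a monitoring plan names
# an owner, metrics, a review cadence, and escalation triggers up front.
from dataclasses import dataclass, field


@dataclass
class MonitoringPlan:
    use_case: str
    owner: str                      # accountable individual, not a team
    review_frequency_days: int      # how often results are formally reviewed
    # Each metric maps to the maximum acceptable value before escalation.
    escalation_thresholds: dict[str, float] = field(default_factory=dict)

    def breaches(self, observed: dict[str, float]) -> list[str]:
        """Return the metrics whose observed values exceed their thresholds."""
        return [
            metric
            for metric, limit in self.escalation_thresholds.items()
            if observed.get(metric, 0.0) > limit
        ]


# Hypothetical example: an AI tool that screens third-party invoices.
plan = MonitoringPlan(
    use_case="invoice-screening",
    owner="J. Rivera, Compliance Ops",
    review_frequency_days=30,
    escalation_thresholds={
        "error_rate": 0.05,        # output accuracy degrading
        "override_rate": 0.20,     # humans reversing the tool too often
        "out_of_scope_uses": 0.0,  # any scope creep escalates immediately
    },
)

observed = {"error_rate": 0.08, "override_rate": 0.12, "out_of_scope_uses": 0.0}
for metric in plan.breaches(observed):
    # In practice this would route to the documented escalation protocol.
    print(f"Escalate: {metric} exceeded threshold for {plan.use_case}")
```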

The CCO should also ensure that AI monitoring is connected to the broader compliance ecosystem. Employee concerns raised through speak-up channels may reveal issues with the model. Internal investigations may expose misuse. Third-party due diligence may uncover changes to vendors. Training gaps may explain repeated incidents. AI governance should not be isolated from these functions. It should be integrated with them.

This is also where the CCO can most effectively help the board. Rather than presenting AI as a series of isolated technical matters, the CCO can frame post-launch governance in familiar compliance terms: monitoring, testing, escalation, remediation, and lessons learned.

Board Practice: Ask for More Than Adoption Metrics

One of the most important disciplines boards can develop is to stop mistaking usage information for governance information.

Management may report that AI adoption is growing, that productivity gains are material, or that pilot programs are expanding. Those data points may be relevant, but they are not a form of governance assurance. A board should want to know whether controls are operating, whether incidents are increasing, whether certain business units generate more exceptions, whether human review remains meaningful, and whether management has paused or modified any use cases based on real-world experience.

This is where board oversight becomes genuinely valuable. When the board asks for evidence of ongoing monitoring, it changes management behavior. It signals that AI success will not be measured solely by speed or efficiency, but also by discipline and resilience.

Boards should also ensure that high-risk use cases receive enhanced visibility. Not every AI tool merits the same level of board attention. But where AI affects regulated interactions, employment decisions, sensitive data, financial reporting, significant customer outcomes, or reputationally sensitive functions, ongoing board-level reporting should be expected.

Escalation and Remediation Must Be Built In

Monitoring matters only if it leads to action. There must be clear escalation and remediation protocols. When a material issue emerges, who gets notified? Can the use case be paused? Who determines whether the problem is technical, operational, legal, or cultural? How are facts gathered? How are corrective actions assigned? When is the board informed? How is the lesson fed back into policy, training, vendor management, or approval standards?

These processes should not be improvised. They should be documented. The organization should know in advance which incidents require escalation, which temporary controls may be imposed, and how remediation is tracked.
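
As a purely illustrative sketch of what such documentation might look like, the routing table below maps incident severity to notification, pause, and board-reporting decisions. The severity levels and routing rules are hypothetical assumptions; the discipline lies in writing them down before an incident occurs.

```python
# Illustrative sketch only: severity levels and routing rules are
# hypothetical assumptions. The point is that escalation paths are
# documented in advance, not improvised when an incident occurs.
ESCALATION_PROTOCOL = {
    # severity: (who is notified, pause the use case?, inform the board?)
    "low":      ("use-case owner",                  False, False),
    "material": ("CCO and AI governance committee", True,  False),
    "critical": ("CCO, general counsel, and board", True,  True),
}


def escalate(incident: str, severity: str) -> None:
    """Route an incident according to the documented protocol."""
    notify, pause, inform_board = ESCALATION_PROTOCOL[severity]
    print(f"Incident: {incident}")
    print(f"  Notify: {notify}")
    if pause:
        print("  Action: pause the use case pending review")
    if inform_board:
        print("  Action: add to the next board report")
    # Remediation would then be tracked to closure and fed back into
    # policy, training, vendor management, or approval standards.


escalate("model outputs show bias in loan pre-screening", "critical")
```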

This is another place where the ECCP provides a useful governance model. DOJ expects companies not only to identify misconduct but also to investigate it, understand its root causes, and implement improvements that reduce the risk of recurrence. AI governance should work the same way. If a model fails or a control weakens, management should not merely fix the immediate problem. It should ask what the failure reveals about the program itself.

Documentation Is the Proof

As with every other element of effective governance, documentation is what turns intention into evidence. Post-launch AI governance should generate records that demonstrate monitoring occurred, issues were surfaced, escalations were handled, and remediation was completed. That may include performance reviews, validation updates, incident logs, committee minutes, complaint summaries, control testing records, vendor change notices, and corrective action trackers.

Without such documentation, management may believe it is effectively monitoring AI, but it will struggle to prove it to internal audit, regulators, or the board. More importantly, it will struggle to learn from experience in a disciplined way. A company that documents ongoing monitoring creates institutional memory. It can compare use cases, detect patterns, and refine its oversight model over time. That is how governance matures.

AI Governance Starts After Launch

The hardest truth in AI governance may be this: launching the tool is often the easiest part. The real challenge begins afterward. That is when optimism meets operational reality. That is when human reviewers become tired. That is when vendors update products. That is when regulators begin asking harder questions. That is when small problems become visible, or invisible, depending on whether the company has built a monitoring system capable of finding them.

For boards and CCOs, this is where governance earns its name. If the organization can monitor, escalate, remediate, and improve, then AI oversight has substance. If it cannot, then the company has not really governed AI at all. It has only approved it.

In the next and final blog post in this series, I will turn to the fifth governance challenge: culture, speak-up, and human judgment, because in many organizations, the first person to see an AI problem will not be the board, the CCO, or the governance committee. It will be the employee closest to the work.

Pod and Port: Podcasting, Social Media and Yacht Rock – AI, Authenticity, Instagram, and Christopher Cross

In the debut episode of Pod & Port: Podcasting, Social Media and Yacht Rock, Tom Fox and Jeff Dwoskin dive into one of the biggest questions facing creators, marketers, podcasters, and business owners today: how do you use AI and social media tools effectively without losing authenticity?

Tom and Jeff discuss Instagram, creator monetization, transparency, algorithmic control, and the dangers of relying on AI for generic content. Their central message is clear: AI can be a powerful tool, but it should enhance your creativity, not replace it. The conversation offers practical insights into how creators can think about content, voice, originality, and audience trust.

Then the show shifts into Yacht Rock mode, with Jeff leading a spotlight on Christopher Cross, one of the genre’s defining voices. From “Ride Like the Wind” to “Sailing” and beyond, Tom and Jeff reflect on Cross’s impact, his remarkable success, and why his music still resonates. If you care about smarter content creation and smooth musical memories, Episode 1 has you covered.

Key takeaways:

  • AI works best when it enhances your ideas rather than replacing your creativity.
  • Authenticity still matters, and audiences can often sense when content feels overly automated.
  • Social media platforms may offer more tools, but creators still need to stay grounded in their own voice.
  • Transparency and trust remain critical for audience engagement.
  • Christopher Cross remains one of the essential artists in any Yacht Rock conversation.

Resources:

Jeff

  • Jeff Dwoskin on LinkedIn
  • Stampede Social Website
  • Christopher Cross on Spotify

Tom

  • Instagram
  • Facebook
  • YouTube
  • Twitter
  • LinkedIn

Daily Compliance News: April 9, 2026, The FCPA Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you four stories to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network.

Top stories include:

  • Federal judge to dismiss FCPA conviction. (National Today)
  • Smartmatic FCPA prosecution. (Lawfare Media)
  • Top 10 International ABC developments from March. (MOFO)
  • AI goes on charm offensive. (WSJ)

AI Today in 5: April 9, 2026, The Mythos Edition

Top AI stories include:

  1. Human in the loop as the ultimate moat. (FastCompany)
  2. AI washing in compliance. (FinTechGlobal)
  3. AI is accelerating cyber attacks. (BankInfoSecurity)
  4. AI and virtual care in eye healthcare. (UM)
  5. Is Anthropic’s Mythos dangerous? (The Economist)

AI Today in 5: April 8, 2026, The AI in Professional Services Edition

Top AI stories include:

  1. AI is increasing social engineering scams. (FT)
  2. Advancing compliance efficiency with AI. (Yahoo!Finance)
  3. AI governance really matters. (HR Brew)
  4. Privacy and AI. (BlufftonToday)
  5. AI to automate professional services. (FinTechGlobal)

Daily Compliance News: April 7, 2026, The Corporate Retreat from Hell Edition

Top stories include:

  • AI in auditing. (FT)
  • Trump to cut 9,400 TSA positions. (Reuters)
  • Germany uncovers €300 payments scandal. (Bloomberg)
  • When a corporate retreat goes wrong, very wrong. (WSJ)

Five Corporate Governance Challenges in AI: A Roadmap for CCOs and Boards

AI is not simply a technology deployment question. It is a corporate governance challenge that requires board attention, compliance discipline, and operational oversight. For Chief Compliance Officers and board members, the task is not merely to encourage innovation, but to ensure that innovation is governed, monitored, and aligned with business values and risk tolerance.

Artificial intelligence has moved from pilot projects and innovation labs into the bloodstream of the modern corporation. It now touches customer service, finance, procurement, HR, sales, third-party management, internal reporting, and strategic decision-making. That expansion is why AI can no longer be treated as a narrow IT issue. It is a governance issue. More particularly, it is a governance issue with compliance implications at every lifecycle stage.

For compliance professionals, that means AI is not simply about whether a model works. It is about whether the organization has built the structures, accountability, and culture to use AI responsibly. For boards, it means AI oversight can no longer be delegated away with a cursory quarterly update. The board must understand not only where AI is being used, but whether the company’s governance architecture is fit for purpose.

This is the first post in a series examining the five most important corporate governance issues around AI. They are not exotic or theoretical. They are the same types of governance challenges compliance professionals have seen before in other contexts: ownership, control design, data integrity, monitoring, and culture. AI raises the stakes and accelerates the timeline.

1. Board Oversight and Accountability

The first challenge is the most fundamental: who is actually in charge?

One of the great failures in governance is diffuse accountability. When everyone has some responsibility, no one has real responsibility. AI governance suffers from this problem in many organizations. Legal is concerned about liability. IT is focused on systems. Security is focused on cyber risk. Privacy is focused on data usage. Compliance is focused on controls and conduct. Business leaders are focused on speed and competitive advantage. The board hears fragments from all of them, but may not receive a coherent picture.

That is a dangerous place to be. AI governance begins with clear ownership. The board should know who is accountable for enterprise AI governance, how decisions are escalated, and how high-risk use cases are reviewed. A company does not need bureaucracy for its own sake, but it does need clarity.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs remains instructive, even if AI is not its exclusive focus. The ECCP repeatedly asks whether compliance is well designed, adequately resourced, empowered to function effectively, and tested in practice. Those same questions apply directly to AI governance. If accountability for AI is vague, if compliance is not in the room, or if oversight is not documented, governance will be performative rather than operational.

2. Strategy Outrunning Governance

The second challenge is one many companies know all too well: innovation is sprinting ahead while governance is still tying its shoes.

Business teams are under enormous pressure to deploy AI quickly. Senior leadership hears daily that AI can deliver efficiency, productivity, growth, and competitive advantage. Vendors promise transformation. Employees experiment informally. In that environment, governance can be cast as friction.

But good governance is not the enemy of innovation. It is what keeps innovation from becoming unmanaged exposure.

The central question here is simple: has the company defined the rules of the road before putting AI into production? In practical terms, has it determined which use cases are permissible, which require enhanced review, which are prohibited, and which must go to the board or a designated committee? Has it established approval criteria, documentation standards, and stop/go decision points?
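
As a purely illustrative sketch, those rules of the road can be expressed as a simple classification table that routes each proposed use case to a stop/go decision point. The tiers, example use cases, and approvers below are hypothetical assumptions, not a prescribed taxonomy; the discipline lies in defining them before anything goes into production.

```python
# Illustrative sketch only: tiers, use cases, and review requirements
# are hypothetical assumptions. The point is that the rules of the
# road are written down before any AI use case goes live.
REVIEW_RULES = {
    "prohibited":      {"deploy": False, "approver": None},
    "board_level":     {"deploy": True,  "approver": "board or designated committee"},
    "enhanced_review": {"deploy": True,  "approver": "AI governance committee"},
    "standard":        {"deploy": True,  "approver": "business unit plus compliance"},
}

# Hypothetical tier assignments a company might make.
USE_CASE_TIERS = {
    "drafting internal meeting notes": "standard",
    "screening job applicants":        "enhanced_review",
    "credit or pricing decisions":     "board_level",
    "covert employee monitoring":      "prohibited",
}


def route(use_case: str) -> str:
    """Return the stop/go decision point for a proposed use case."""
    tier = USE_CASE_TIERS.get(use_case)
    if tier is None:
        return "No tier assigned: route to intake for risk classification."
    rule = REVIEW_RULES[tier]
    if not rule["deploy"]:
        return f"Stop: '{use_case}' is a prohibited use case."
    return f"Go, pending approval by {rule['approver']}."


print(route("screening job applicants"))
print(route("covert employee monitoring"))
```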

The NIST AI Risk Management Framework is especially helpful on this point because it treats AI governance as an ongoing management discipline rather than a one-time sign-off. Its emphasis on Govern, Map, Measure, and Manage is a powerful reminder that strategy and governance must move together. ISO/IEC 42001 brings similar discipline by framing AI management systems around structure, accountability, controls, and continual improvement.

The lesson for compliance professionals is clear: if the business has a faster process for buying or launching AI than for reviewing risks and governance, it has already fallen behind.

3. Data Governance, Privacy, and Model Integrity

The third challenge is the quality and integrity of what goes into, and comes out of, AI systems.

AI does not operate in a vacuum. It depends on data, assumptions, training inputs, prompts, workflows, and human interaction. That means weaknesses in data governance are not side issues. They are central governance risks. Poor data lineage, unvalidated data sources, confidentiality breaches, inadequate access controls, and bias in training data can all create downstream failures that become legal, reputational, regulatory, and operational events.

For boards, the temptation is to hear “AI” and think about futuristic questions. But the more immediate concern is often much more familiar. Does management know where the data came from? Does the company understand whether sensitive or proprietary information is being exposed? Are outputs accurate enough for the intended use? Are the controls around data usage consistent with privacy obligations and internal policy?

This is where AI governance intersects with traditional compliance disciplines in a very real way. Privacy, information governance, records management, cybersecurity, and internal controls all converge here. A system that produces impressive outputs but relies on flawed or unauthorized data is not a governance success. It is a governance failure waiting to be discovered.

ISO/IEC 42001 is particularly useful because it forces organizations to think in systems terms. It is not merely about the model itself; it is about the management environment surrounding it. That is exactly how boards and CCOs should think about model integrity.

4. Ongoing Monitoring and the “Day Two” Problem

The fourth challenge is the one that too many organizations underestimate: governance after deployment. A great many companies put substantial effort into approving an AI use case, but far less into monitoring it once it is live. Yet this is where some of the greatest risks emerge. Models drift. Employees use tools for new purposes. Controls that looked solid on paper weaken in practice. Reviewers become overloaded. Risk profiles change. Regulators evolve their expectations. The use case expands far beyond its original design.

That is why AI governance must address what I call the “Day Two” problem: what happens after launch? This is once again a place where the ECCP offers a useful lens. The DOJ does not ask merely whether a policy exists. It asks whether it works in practice, whether it is tested, and whether lessons learned are incorporated back into the program. AI governance should be held to the same standard. If the company has no way to monitor performance, investigate anomalies, log incidents, revalidate assumptions, or update controls, then it lacks effective AI governance. It has an approval memo.

The board should be asking for reporting that goes beyond usage metrics or efficiency gains. It should want to know about incidents, exception trends, control failures, validation results, and remediation efforts. In other words, governance must be dynamic because AI risk is dynamic.

5. Culture, Speak-Up, and Human Judgment

The fifth challenge may be the most overlooked, yet it is often the earliest warning system a company has: culture. Employees will usually see AI failures before leadership does. They will spot the odd output, the customer complaint, the biased result, the misuse of a tool, the shortcut around a control, or the inaccurate summary that could trigger a bad decision. The question is whether they will say something.

This is why AI governance is not solely about structure and policy. It is also about whether the organization has a culture that encourages people to raise concerns. Do employees understand that AI-related problems are reportable? Do they know where to raise them? Are managers trained to respond properly? Are anti-retaliation protections reinforced in this context?

Human judgment also matters because AI does not eliminate accountability. If anything, it heightens the need for judgment. A machine-generated output can create a false sense of confidence, especially when it arrives quickly and sounds authoritative. Boards and CCOs must resist that temptation. Human oversight is not a ceremonial step. It is an essential governance control.

The strongest AI governance programs will be the ones that connect structure with culture. They will not merely create committees and frameworks. They will create an environment where people trust the system enough to challenge it.

The Governance Road Ahead

For CCOs and boards, the governance challenge around AI is not mysterious. It is demanding, but it is not mysterious. The questions are recognizable. Who owns it? What are the rules? Can we trust the data? Are we monitoring the system over time? Will people speak up when something goes wrong?

These five issues form the roadmap for the series ahead. In the coming posts, I will take up each one in turn and explore what it means in practice for modern compliance programs and board oversight. Because if there is one lesson here, it is this: AI governance is not about admiring the technology. It is about governing the enterprise that uses it.

Join us tomorrow, when we review board oversight and accountability, because that is where every effective AI governance program either starts strong or starts to fail.

AI Today in 5: April 6, 2026, The AI in Healthcare Edition

Top AI stories include:

  1. AI risks for auto lenders. (AutoNews)
  2. Moving beyond AI pilots. (Boston University)
  3. AI readiness and legal compliance. (ITPro)
  4. Banks must test AI beyond legal thresholds. (QAFinancial)
  5. AI in healthcare. (FoxNews)

AI Today in 5: April 3, 2026, The Good Friday Edition

Top AI stories include:

  1. AI-driven identity and compliance. (ComputerWeekly)
  2. AI and compliance. (ChannelPro)
  3. The Enterprise AI readiness gap. (PYMNTS)
  4. AI’s healthcare test. (Inc42)
  5. BoA is replacing meetings with AI. (FinTechMagazine)
