Categories
AI in Healthcare

AI in Healthcare: Five Healthcare AI Stories You Need to Know This Week – April 17, 2026

Welcome to AI in Healthcare in 5 Stories. This podcast is a Weekly Briefing of the five most important AI developments shaping healthcare, medicine, and life sciences. Each week, Tom Fox breaks down the latest stories on clinical innovation, regulation, privacy, compliance, patient safety, and operational transformation through a practical, business-focused lens. Designed for healthcare compliance professionals, executives, legal teams, clinicians, and industry leaders, the podcast moves beyond headlines to explain what each development means in the real world.

The top five stories for the week ending April 17, 2026, include:

  1. Why are so many Americans using AI in healthcare? (PBS News)
  2. AI requires a rethinking of healthcare architecture. (Stat News)
  3. Study finds AI misdiagnoses up to 80% of early cases. (FT)
  4. In AI, where is your PII stored? (HealthcareFinance)
  5. Increasing enforcement around AI in healthcare. (HealthcareITNews)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories
Blog

AI as a Force Multiplier for Compliance: From Efficiency Tool to Program Effectiveness

There is a temptation in every wave of new technology to focus first on speed. How much faster can we do the work? How many hours can we save? How many tasks can we automate? Yet for the compliance professional, those are not the right first questions. The right first question is always: does this make our compliance program more effective?

That is why the recent Moody’s discussion of GenAI is so interesting when viewed through a compliance lens. The article describes AI not simply as a productivity engine, but as a tool that changes how professionals interact with information, generate insights, and support decision-making. It emphasizes workflow transformation, role-based support, auditability, data quality, and the need for governance and human oversight. For compliance officers, that is the real story. AI can indeed make work faster. But its true promise is that it can make compliance more targeted, more consistent, more responsive, and more operationally embedded.

The Department of Justice has been telling us for years, through the Evaluation of Corporate Compliance Programs (ECCP), that effectiveness is the standard. The questions are not whether a company has a policy on the shelf or a training module in the system. The questions are whether the company has access to data, whether it uses that data, whether controls are tested, whether issues are triaged appropriately, whether lessons learned are fed back into the program, and whether the program evolves as risks change. AI, properly governed, can help answer yes to each of those questions.

AI and the Compliance Program of the Future

The Moody’s paper notes that GenAI is moving from passive, knowledge-based support toward more action-oriented solutions that can assist with complex, multi-step workflows. That observation should resonate with every Chief Compliance Officer. The future is not an AI toy that drafts emails. The future is an AI-enabled compliance architecture that helps the function move from reactive to proactive.

Consider third-party due diligence. Most compliance teams still struggle with volume, fragmentation, and prioritization. Information sits in onboarding questionnaires, sanctions screens, beneficial ownership reports, payment histories, audit findings, hotline allegations, and open-source media. The challenge is not merely gathering that information. The challenge is turning it into risk-based action. AI can help synthesize disparate information sources, surface red flags, identify missing documentation, and create a more coherent risk picture. Under the ECCP, that supports a more thoughtful, risk-based approach to third-party management.
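To make the idea concrete, here is a minimal sketch of consolidating third-party risk signals from separate systems into a single risk picture. The source names, required-document set, and flag rules are hypothetical placeholders; a real program would map these to its own onboarding and screening systems.

```python
# Illustrative only: names of sources and required documents are assumptions.
REQUIRED_DOCS = {"questionnaire", "sanctions_screen", "ownership_report"}

def consolidate(third_party: str, records: dict) -> dict:
    """Merge findings from disparate sources for one third party.

    `records` maps a source name (e.g. "sanctions_screen") to a list of
    findings from that source. Returns a single risk picture: all red
    flags, any missing documentation, and a simple escalation signal.
    """
    findings = [f for hits in records.values() for f in hits]
    missing = sorted(REQUIRED_DOCS - records.keys())
    return {
        "third_party": third_party,
        "red_flags": findings,
        "missing_documentation": missing,
        "needs_enhanced_diligence": bool(findings) or bool(missing),
    }
```

The design point is the output shape, not the inputs: a reviewer sees one record per third party with the red flags, the gaps, and a risk-based routing signal, rather than five separate system exports.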

Take investigations triage. Every mature speak-up program faces the same problem: how to distinguish between the urgent, the important, and the routine. AI can help sort allegations by subject matter, geography, potential legal exposure, prior related issues, implicated business units, and urgency indicators. That does not mean AI decides guilt, materiality, or discipline. It means AI helps compliance direct scarce investigative resources where they matter most. In ECCP terms, it strengthens case handling, responsiveness, consistency, and root-cause readiness.
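The triage logic described above can be sketched as a simple scoring function. The subject categories, region list, and weights below are hypothetical; a real program would calibrate them against its own risk taxonomy and case history. Note the boundary the text draws: the function orders the queue, it does not decide guilt, materiality, or discipline.

```python
from dataclasses import dataclass

# Assumed weights and regions for illustration only.
RISK_WEIGHTS = {"bribery": 5, "fraud": 4, "retaliation": 4, "policy": 1}
HIGH_RISK_REGIONS = {"Region A", "Region B"}

@dataclass
class Allegation:
    subject: str              # e.g. "bribery"
    region: str
    prior_related_issues: int # count of related prior matters
    urgency_flag: bool        # e.g. ongoing harm or regulatory deadline

def triage_score(a: Allegation) -> int:
    """Rank an allegation so investigators see the riskiest first."""
    score = RISK_WEIGHTS.get(a.subject, 1)
    if a.region in HIGH_RISK_REGIONS:
        score += 2
    score += min(a.prior_related_issues, 3)  # cap the repeat-issue boost
    if a.urgency_flag:
        score += 3
    return score

def prioritize(queue: list) -> list:
    """Order the intake queue by descending triage score."""
    return sorted(queue, key=triage_score, reverse=True)
```

A human investigator still reviews every matter; the score only directs scarce resources to the front of the queue, which is the ECCP point about responsiveness and consistency.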

Now think about risk assessment. The best compliance risk assessments are dynamic, not annual rituals. AI can assist in identifying patterns across reports, controls failures, investigation outcomes, gifts and entertainment data, third-party activity, and regulatory developments. It can help compliance professionals see concentrations of risk earlier and with greater context. In a program built around continuous improvement, that is a force multiplier.

Effectiveness, Not Mere Automation

One of the most important lessons from the Moody’s article is that the value of AI lies in supporting higher-value analytical work, not just reducing routine effort. That is exactly how compliance leaders should approach deployment.

Transaction monitoring is a good example. Many organizations already use rules-based systems, but these often produce high volumes of noise. AI can support better prioritization, pattern recognition, and anomaly detection. It can help identify clusters of conduct that might otherwise remain hidden across vendors, employees, geographies, or payment channels. But the point is not simply to clear alerts faster. The point is to make the monitoring program smarter, more risk-based, and more defensible.
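As a toy illustration of moving beyond rigid rules toward anomaly detection, here is a z-score screen over a vendor's payment history. This is a deliberately minimal sketch: production monitoring would layer peer-group comparisons, seasonality, and model validation on top of anything this simple.

```python
from statistics import mean, stdev

def flag_anomalies(payments, z_threshold=3.0):
    """Flag payments far outside this vendor's historical pattern.

    Returns (index, amount) pairs whose z-score exceeds the threshold.
    Too little history, or zero variance, yields no flags rather than noise.
    """
    if len(payments) < 3:
        return []
    mu, sigma = mean(payments), stdev(payments)
    if sigma == 0:
        return []
    return [(i, amt) for i, amt in enumerate(payments)
            if abs(amt - mu) / sigma > z_threshold]
```

The contrast with a fixed-amount rule is the point: a $10,000 payment is unremarkable for one vendor and a glaring outlier for another, and the statistical baseline adapts per vendor instead of generating uniform alert noise.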

The same is true in training and communications. Too much compliance training remains generic, static, and detached from actual risk. AI opens the door to role-based, scenario-based, and even timing-based communications. A sales team in a high-risk market should not receive the same examples as procurement professionals dealing with third parties. A manager with hotline escalation responsibilities should not receive the same training as a new hire. AI can help tailor content, refresh scenarios, and improve accessibility. Under the ECCP, that supports effectiveness in training design, communications, and accessibility of guidance.

Speak-up and case management also stand to benefit. AI can help identify repeat issue patterns, detect retaliation indicators, cluster similar allegations, and flag unresolved themes across regions or functions. Done correctly, it can help compliance move from case closure to issue intelligence. That is where a hotline becomes not just a reporting channel but an early warning system.

Governance Is the Price of Admission

Here is where the compliance professional earns his or her stripes. The Moody’s piece is explicit that none of this works without robust governance, trustworthy data, transparency, documentation, validation, and human expertise remaining central to critical decisions. That is the bridge to both the NIST AI Risk Management Framework (NIST AI RMF) and ISO/IEC 42001.

NIST AI RMF gives compliance teams a practical way to think about governance, mapping, measurement, and management. ISO/IEC 42001 provides a management-system structure for implementing AI governance in an enterprise setting. Together with the ECCP, they provide a powerful architecture. The ECCP asks whether your compliance program works. NIST AI RMF helps define and manage AI risk. ISO/IEC 42001 helps operationalize governance and accountability.

What does that mean on the ground for your compliance regime?

It means every AI use case in compliance should have a defined business purpose, an identified owner, approved data sources, documented limitations, escalation criteria, testing protocols, and monitoring for drift or unintended consequences. It means AI outputs should be reviewable. It means prompt logs, source provenance, and validation results should be retained where appropriate. It means employees should know when they are permitted to rely on AI and when human review is mandatory. It means there must be clear boundaries around privacy, privilege, confidentiality, bias, and record retention.

Most of all, it means compliance should resist the easy sales pitch that AI is a substitute for professional judgment. It is not. It is a force multiplier for judgment.
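The register of use cases described above can be sketched as a simple record type plus an approval gate. The field names are illustrative, not a standard; map them to your own governance framework (for example, the NIST AI RMF Govern and Map functions).

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One entry in a hypothetical compliance AI use-case register."""
    name: str
    business_purpose: str
    owner: str
    approved_data_sources: list   # systems the model may draw on
    documented_limitations: list  # known failure modes and exclusions
    escalation_criteria: str      # when output must go to a human
    human_review_required: bool
    last_validated: str           # ISO date of last testing / drift review

REQUIRED_NONEMPTY = ("business_purpose", "owner", "escalation_criteria")

def registration_gaps(uc: AIUseCase) -> list:
    """Return the governance fields still missing before approval."""
    gaps = [f for f in REQUIRED_NONEMPTY if not getattr(uc, f).strip()]
    if not uc.approved_data_sources:
        gaps.append("approved_data_sources")
    if not uc.last_validated:
        gaps.append("last_validated")
    return gaps
```

The gate is the governance lesson in miniature: a use case with no owner, no escalation criteria, or no validation date simply does not get approved, and the register itself becomes the documentation a regulator would ask to see.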

The Board and Senior Management Imperative

Boards and senior leaders should be asking a straightforward question: are we using AI to make compliance more effective, or are we simply using it to do old tasks faster? Those are not the same thing. A mature answer would include at least five elements. First, a risk-based inventory of compliance AI use cases. Second, governance over data quality and model performance. Third, defined human-review thresholds for consequential decisions. Fourth, ongoing monitoring and periodic validation. Fifth, a feedback loop so lessons from investigations, audits, and operations improve the system over time.

That is very much in line with both the ECCP and the Moody’s article’s emphasis on verifiable data, decision auditability, and governance at scale.

Five Lessons Learned

  1. Start with effectiveness, not efficiency. If AI only helps you do low-value tasks faster, you have not transformed compliance. Use it where it improves risk identification, triage, analysis, and action.
  2. Build around the ECCP. The DOJ already gave compliance professionals the framework. Use AI to strengthen risk assessment, third-party management, investigations, training, and continuous improvement.
  3. Govern the data before you celebrate the tool. Bad data, undocumented prompts, or unvalidated outputs will undermine trust. Governance over data provenance and output review is essential.
  4. Keep humans in the loop where it matters. AI can assist with pattern recognition, drafting, prioritization, and synthesis. It should not replace judgment on materiality, discipline, escalation, privilege, or remediation.
  5. Treat AI as part of your compliance operating model. This is not an innovation side project. It should be documented, tested, monitored, and improved like any other core compliance process.

The bottom line is this: AI offers compliance functions a genuine opportunity to become more effective, more focused, and more business relevant. But that opportunity only becomes real when it is grounded in governance, disciplined by the ECCP, and supported by frameworks like NIST AI RMF and ISO/IEC 42001. Done right, AI will not diminish the role of the compliance professional. It will elevate it.

Categories
AI Today in 5

AI Today in 5: April 16, 2026, The AI Attack Chains Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Building real-world healthcare with AI. (Crunchbase News)
  2. AI and regulatory intelligence. (FinTechGlobal)
  3. Space Force touts AI. (Cyberscoop)
  4. AI attack chains. (PYMNTS)
  5. For lawyers: your chatbots will be used against you. (Reuters)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories
AI Today in 5

AI Today in 5: April 15, 2026, The Tax Day Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. What does the US National AI framework mean for healthcare? (JD Supra)
  2. AI for investigative impact. (FinTechGlobal)
  3. FinTech bets big on AI agents. (IBS Intelligence)
  4. Oracle debuts AI agents for banking. (PYMNTS)
  5. BOE urges regulators to assess cyber AI risks. (Bloomberg)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories
Blog

When AI Becomes Evidence of Bad Governance: What CCOs and Boards Can Learn from Fortis Advisors

The Delaware Court of Chancery has handed compliance leaders and boards a timely lesson: generative AI is not a substitute for judgment, legal discipline, or governance. When leaders use AI to validate a predetermined objective, the technology does not reduce risk. It can become powerful evidence of intent, bad faith, and control failure.

A Cautionary Tale for Corporate Leaders

The recent Delaware Court of Chancery decision in Fortis Advisors, LLC v. Krafton, Inc. should be read by every Chief Compliance Officer (CCO), board member, general counsel, and corporate deal professional. The article describing the decision recounts a dispute in which a buyer, apparently unhappy with a substantial earnout obligation, turned to ChatGPT for advice on how to escape the economic consequences of the deal. According to the court’s account, the buyer then executed an AI-generated strategy designed to renegotiate the arrangement or take control from the seller management team. The court ultimately found that the buyer had wrongfully terminated key employees and improperly seized operational control. As a remedy, it reinstated the seller’s CEO and extended the earnout window to restore a genuine opportunity to achieve the payout.

The Real Compliance Lesson

For compliance professionals, the most important lesson is not that AI is dangerous. The lesson is that leadership can use AI in dangerous ways when governance is absent. That is a far more important point.

Too many organizations still approach AI governance as a technology problem. They focus on model performance, cybersecurity, or procurement review. Those are important issues, but this case reminds us that AI governance begins with human purpose. What question was asked? What objective was embedded in the prompt? What controls existed before action was taken? Who challenged the proposed course of conduct? Who documented the legal and ethical analysis? Those are compliance questions. Those are board questions.

Viewing the Case Through the DOJ ECCP Lens

This is also where the DOJ’s Evaluation of Corporate Compliance Programs (ECCP) provides a useful lens. The ECCP asks whether a company’s program is well designed, adequately resourced, empowered to function effectively, and actually works in practice. Put that framework over this fact pattern, and the governance gaps become painfully clear. Was there a control around the use of generative AI in strategic or legal decision-making? Was there escalation to legal, compliance, or the board when a significant earnout exposure was at stake? Was there any meaningful challenge function, or did leadership use AI as a convenient amplifier for a business objective it had already chosen?

The case suggests the latter. That should concern every board. Generative AI can be useful in brainstorming, summarizing, and scenario testing. But when executives use it to reinforce a desired outcome, particularly one touching contractual obligations, employment decisions, or post-closing governance rights, the tool can become a mechanism for rationalizing misconduct.

When AI Chats Become Discoverable Evidence

Worse, it creates a record. The Court notes that the AI chats were not privileged, were discoverable, and vividly underscored the buyer’s efforts to avoid its legal obligations. That point alone should stop corporate leaders in their tracks.

Many executives still treat AI chats as an informal thinking space, almost like talking to themselves. That is a serious mistake. Prompt histories, outputs, internal forwarding, and downstream use can all become evidence. If employees use public or enterprise AI tools to explore termination strategies, dispute positions, or ways around contractual commitments, they may be creating exactly the documentary record that plaintiffs, regulators, and judges will later find most compelling. In other words, the issue is not simply data leakage. It is discoverability, privilege erosion, and self-generated evidence of intent.

That is why CCOs and boards need to move beyond generic AI-use policies and build governance around high-risk use cases. The question should not be, “Do we allow ChatGPT?” The question should be, “Under what circumstances can generative AI be used in decisions involving legal rights, employee discipline, regulatory exposure, strategic transactions, or board-level matters?” If the answer is unclear, the company has work to do.

The M&A and Earnout Governance Lesson

The dealmaking lesson here is equally important. Earnouts are already fertile ground for post-closing disputes because they sit at the intersection of incentives, control, and timing. Buyers often want flexibility. Sellers want protection from interference. This case illustrates what can happen when a buyer attempts to manipulate operations in a way that affects the achievement of the earnout. The court not only found wrongful interference but also equitably extended the earnout period by 258 days and preserved a further contractual right to extend, thereby materially altering the deal’s economic landscape.

That is a governance lesson hiding inside an M&A lesson. Once a company acquires a business with earnout rights and operational covenants, post-closing conduct is no longer just integration management. It is compliance management. Interference with operational control, pretextual terminations, or actions designed to suppress performance metrics can lead to litigation, destroy value, and trigger judicial remedies that boards did not expect. CCOs should therefore insist that M&A integration playbooks include compliance review of earnout governance, decision rights, escalation protocols, and documentation standards.

Five Lessons for Boards and CCOs

What should boards and compliance officers do now? Here are five lessons.

  1. Govern the objective before you govern the tool. AI is only as sound as the purpose for which it is deployed. If leadership starts with a bad objective, AI can scale the problem. Boards should require management to define prohibited uses of AI in areas such as contract avoidance, pretextual employee actions, retaliation, and legal strategy without oversight by counsel.
  2. Treat high-risk AI prompts and outputs as governed business records. If a prompt relates to litigation, terminations, regulatory response, deal rights, or board matters, it should fall within clear policies on retention, review, and escalation. Employees need to understand that AI interactions may be discoverable and may not be privileged.
  3. Embed legal and compliance into consequential AI use cases. The ECCP emphasizes whether compliance has stature, access, and authority. That principle applies directly here. Strategic uses of AI that touch contractual rights, employment decisions, or fiduciary issues should not proceed without legal and compliance review.
  4. Build AI governance into M&A and post-closing integration. Earnout structures, operational covenants, and seller management rights are precisely the areas where incentives can distort behavior. Boards should ask whether integration teams have controls preventing actions that could be viewed as interference, manipulation, or bad-faith conduct.
  5. Document challenge, not just action. Good governance is not proved by a single final decision but by the process surrounding it. Was there dissent? Was there an analysis? Was there an escalation memo? Was there a documented rationale grounded in law, contract, and fiduciary duty? If not, the company may be left with a record that tells the wrong story.

Governance Must Come Before AI

In the end, this case is not really about a video game company. It is about a governance failure dressed in modern technology. Leaders appear to have used AI not to improve judgment, but to reinforce a course of conduct they already wanted to pursue. That is the compliance lesson. AI does not remove the need for fiduciary discipline, legal oversight, or ethical restraint. It makes those requirements more urgent.

For boards and CCOs, the mandate is clear. Governance must come first. Because when AI is used without guardrails, it does not merely create risk. It creates a record, and that record can become the evidence.

Categories
AI Today in 5

AI Today in 5: April 14, 2026, The AI Tastes Like Twinkies Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Kara Swisher says AI: ‘It tastes like a Twinkie.’ (Fortune)
  2. AI must move beyond name matching in sanctions. (FinTechGlobal)
  3. Healthcare needs to prepare for enforcement around AI use. (HealthcareITNews)
  4. Getting AI insurance. (CCI)
  5. Balancing AI innovation with compliance for RIAs. (FinTechGlobal)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories
AI Today in 5

AI Today in 5: April 13, 2026, The AI Governance Framework Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Oracle brings storytelling to the heart of compliance with AI. (Yahoo!Finance)
  2. AI is bringing compliance to BioPharma. (PharmTech)
  3. Oracle brings AI agents to financial crime and compliance. (Financial IT)
  4. Building out your AI governance framework. (Bloomberg Law)
  5. AI developments finance pros should be tracking. (MIT)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories
Daily Compliance News

Daily Compliance News: April 10, 2026, The AI & Trust Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Biggest defense against AI–trust. (FT)
  • No wonder he attacked Beirut. (Reuters)
  • Applying the law will get you fired in the Trump Administration. (NYT)
  • Rooney Rule, anyone? (WSJ)

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories
AI in Financial Services in 5 Stories

AI in Financial Services in 5 Stories – Week Ending April 10, 2026

Welcome to AI in Financial Services in 5 Stories, a practical weekly roundup of the five most important AI developments affecting banking, insurance, payments, asset management, and fintech. Each Friday, Tom Fox breaks down the top stories that matter most through the lenses of compliance, risk management, governance, and business strategy. Designed for compliance professionals, executives, legal teams, and financial services leaders, it goes beyond headlines to explain why each development matters in a highly regulated industry. The result is a concise weekly briefing that helps listeners stay current on AI innovation while asking sharper questions about oversight, accountability, and trust.

This week’s stories include:

  1. AI is the top data security concern. (FintechNews)
  2. The perils of one-click ambition. (bobsguide)
  3. To fight financial crime, AI needs context. (FinTechMagazine)
  4. AI-driven pKYC. (FinTechGlobal)
  5. 6 AI truths from Amazon CEO. (Amazon News)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
AI in Healthcare

AI in Healthcare: Five Healthcare AI Stories You Need to Know This Week – April 10, 2026

Welcome to AI in Healthcare in 5 Stories. This podcast is a Weekly Briefing of the five most important AI developments shaping healthcare, medicine, and life sciences. Each week, Tom Fox breaks down the latest stories on clinical innovation, regulation, privacy, compliance, patient safety, and operational transformation through a practical, business-focused lens. Designed for healthcare compliance professionals, executives, legal teams, clinicians, and industry leaders, the podcast moves beyond headlines to explain what each development means in the real world.

The top five stories for the week ending April 10, 2026, include:

  1. How much can AI streamline healthcare? (Fox17)
  2. AI as a personal healthcare concierge. (Healthcare Finance)
  3. Using AI to rewire healthcare at the Cleveland Clinic. (Forbes)
  4. Risks of Shadow AI in healthcare. (Fierce Healthcare)
  5. AI as a competition imperative. (HealthcareITNews)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.