The Ethics Experts

Episode 252 – Mike Halbach

In this episode of The Ethics Experts, Nick Gallo welcomes Mike Halbach.

Mike P. Halbach is a Compliance & Risk Management Advisor and Trading Expert with over 30 years of in-house experience across US-based and global industrial and commodity trading firms. His career has focused on helping trading organizations navigate complex regulatory environments while supporting commercial teams with robust, practical, and business-enabling compliance solutions.

Mike has worked extensively across physical and financial markets, including agriculture, energy, metals, and emissions, with hands-on experience in major trading venues such as CME/CBOT, MATIF, ICE (globally), BMD, SGX, and the LME.

Today, he is a Partner at Sybius Consulting, a Geneva-based boutique consultancy advising more than 30 clients worldwide — including listed producers, commodity trading firms, and quantitative hedge funds — and delivering tailored compliance frameworks and regulatory insight grounded in deep market expertise.

Connect with Mike on LinkedIn

FCPA Compliance Report

FCPA Compliance Report: Report from Compliance Week 2026 on AI Sessions

In this episode, Tom Fox takes a solo turn behind the mic to report on the AI tracks from the recently concluded Compliance Week 2026 conference.

He highlights two AI tracks. The first covered practical “creative” uses, including live demonstrations by Hemma Lomax creating PowerPoint content and Roxanne Petraeus creating video content. The second took a more critical compliance focus on AI governance, oversight, and accountability amid limited federal direction and a growing patchwork of state laws, with the EU AI Act positioned as a global benchmark. Tom emphasizes applying standard compliance risk management to AI (identify, manage, train, implement, monitor, improve), addressing shadow AI and internal, external, and vendor risks, and building AI “in” rather than bolting it on. He notes scaling challenges, ROI questions, auditor expectations, risk registers, fraudsters’ use of AI, and ongoing discussions with Matt Kelly.

Key highlights:

  • AI Everywhere at CW
  • Creative AI Demos
  • AI Risk Framework
  • Shadow AI and Risks
  • ROI and Use Cases
  • Scaling and Oversight
  • Governance Takeaways

Resources:

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

For more information on the use of AI in compliance programs, Tom Fox’s new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com: https://a.co/d/00XNoelh.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out Tom’s latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com: https://a.co/d/05NTW4zz.

Daily Compliance News

Daily Compliance News: May 11, 2026, The Tainted by Corruption or Collusion Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Judge says Musk owes $2.1bn for Twitter; SEC says $1.5MM. (Reuters)
  • China hands suspended death sentences to former Defense Ministers. (WSJ)
  • Sri Lankan Airlines’ chief, embroiled in Airbus corruption scandal, found dead. (SCMP)
  • AI notetakers are making lawyers nervous. (NYT)

For more information on the use of AI in compliance programs, Tom Fox’s new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out Tom’s latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

AI Today in 5

AI Today in 5: May 11, 2026, The AI Notetakers Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  • 7 steps for AI compliance in hiring. (JD Supra)
  • AI and real-time risk visibility in insurance. (FinTech Global)
  • Make more strategic bets on AI in healthcare. (Fierce Healthcare)
  • AI is not taking jobs; it is much more nuanced than that. (CNN)
  • AI notetakers are making lawyers nervous. (NYT)

For more information on the use of AI in compliance programs, Tom Fox’s new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out Tom’s latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Blog

Compliance Week 2026: AI Governance Highlights

The 21st Annual Compliance Week Conference made one point unmistakably clear: AI is no longer a technology issue sitting outside the compliance function. It is now a governance, risk, controls, culture, and accountability issue. Across the conference, AI appeared in nearly every discussion, from practical tools for compliance teams to regulatory uncertainty, shadow AI, third-party risk, and board oversight. The central message for compliance professionals was clear: AI must be governed with the same discipline, documentation, monitoring, and continuous improvement as any other enterprise risk.

That should not surprise any Chief Compliance Officer. The DOJ’s Evaluation of Corporate Compliance Programs (2024 ECCP) has long asked whether a compliance program is well-designed, adequately resourced, empowered to function effectively, and working in practice. Those same questions now apply to AI. The issue is not whether an organization is using AI. It almost certainly is. The issue is whether the company knows where AI is being used, who approved it, the risks it creates, the controls that apply, and whether those controls are being monitored.

AI Is Now a Compliance Governance Issue

The first major theme from Compliance Week 2026 was governance. AI may be exciting, efficient, and creative, but without governance, it can quickly become a source of unmanaged enterprise risk. That governance challenge begins with oversight. Who owns AI risk? Who approves AI use cases? Who determines whether a tool is appropriate for use with company data? Who has the authority to stop an AI project that is not meeting its stated purpose? These are not theoretical questions. They are the basic operating questions of an effective compliance program.

A company should not treat AI as a series of disconnected experiments. It should treat AI as part of the enterprise control environment. That means clear governance structures, documented approvals, defined risk owners, escalation protocols, monitoring, testing, and board reporting. The board does not need to become a group of AI engineers. But directors do need to understand whether management has created a defensible AI governance framework. They should ask how AI risks are identified, how high-risk use cases are reviewed, how third-party AI vendors are assessed, and how the company detects unauthorized AI use.

Shadow AI Is the Risk Hiding in Plain Sight

One of the strongest compliance lessons from the conference was the danger of shadow AI. Employees are already using AI tools, often because they are efficient, accessible, and easy to deploy. The problem is that ease of use can defeat governance. If employees are using ChatGPT, Claude, Gemini, Copilot, or other tools without authorization, training, or data restrictions, the company has a control gap. Confidential business information, financial data, personal information, customer information, or regulated data can move into systems the company does not control. That creates legal, privacy, cybersecurity, contractual, and reputational risk.

The answer is not simply to prohibit AI. That approach is unlikely to work. The better answer is to identify the tools being used, classify them by risk, authorize appropriate use cases, train employees, monitor usage, and make clear what data can and cannot be entered into an AI system. A strong AI governance program should include an AI use register. It should identify approved tools, owners, business purposes, data categories, risk ratings, controls, monitoring obligations, and renewal or reassessment dates. Without that inventory, a company cannot credibly claim to govern AI risk.
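To make the idea concrete, an AI use register of the kind described above can be as simple as a structured record per tool. The following is a minimal sketch in Python; the field names, the three-tier RiskRating, and the reassessment helper are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskRating(Enum):
    """Illustrative three-tier risk classification for approved AI tools."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIUseRegisterEntry:
    """One approved AI tool, mirroring the register fields named in the text:
    tool, owner, business purpose, data categories, risk rating, controls,
    monitoring obligations, and a reassessment date."""
    tool: str
    owner: str
    business_purpose: str
    data_categories: list
    risk_rating: RiskRating
    controls: list
    monitoring_obligations: list
    reassessment_date: date


def due_for_reassessment(register, as_of):
    """Return register entries whose scheduled reassessment date has passed."""
    return [e for e in register if e.reassessment_date <= as_of]
```

Even a spreadsheet with these columns satisfies the same purpose; the point is that each tool has a named owner, a defined purpose, and a date on which someone must look at it again.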

The Compliance Risk Management Model Already Works

One of the most important insights from the conference was that compliance professionals already have the right risk management framework. AI risk does not require abandoning the compliance discipline. It requires applying it.

The framework is familiar. Identify the risk. Develop a risk management strategy. Train employees. Implement the strategy. Monitor performance. Use data to improve your strategy continuously. That is the compliance operating model. It is also the right model for AI governance.

The 2024 ECCP emphasized risk-based compliance, data access, continuous improvement, and the effectiveness of controls in practice. Those expectations fit naturally into AI governance. A company should ask whether its AI controls are designed around actual risks, whether compliance has access to AI-related data, whether employees understand acceptable use, and whether the company can prove that its controls operate effectively. The lesson is straightforward. Do not build AI governance as a technology policy alone. Build it as a compliance program.

AI Risk Has Three Core Dimensions

The conference also highlighted the need to separate AI risk into practical categories. For compliance officers, three risk areas deserve immediate attention.

First, internal risk. This includes employee use of AI, shadow AI, unauthorized tools, misuse of confidential information, lack of training, and gaps in approval processes.

Second, external risk. This involves AI systems that affect customers, patients, consumers, investors, or other external stakeholders. These tools may raise issues involving fairness, privacy, transparency, discrimination, consumer protection, and regulatory obligations.

Third, third-party risk. Vendors, consultants, service providers, and sales agents may introduce AI into the company’s operations. A third-party vendor using AI in screening, analytics, customer service, data processing, or decision support can pose a risk to the company, even when the company did not build the tool.

This is where compliance must bring discipline. Third-party AI risk should be part of due diligence, contracting, audit rights, monitoring, and renewal. Companies should ask vendors what AI tools they use, what data those tools process, whether subcontractors are involved, how outputs are validated, and whether the company has audit rights over AI-related controls.

ROI Must Begin With the Business Purpose

AI projects should begin with a simple question: what problem are we trying to solve? Too many AI initiatives begin with pressure to “use AI” rather than a clear business case. That is not governance. That is technology enthusiasm without control or discipline. A compliance-minded AI review should ask whether the proposed tool has a defined use case, measurable business value, appropriate controls, and a clear owner. It should also ask whether the project is drifting from its original purpose. Mission creep is a real AI risk. A tool approved for one purpose can quickly be used for another. That creates new risks and may invalidate the original approval.

The more regulated the use case, the more important this analysis becomes. AI used in healthcare, employment, finance, consumer decisions, investigations, sanctions screening, or third-party risk management demands heightened scrutiny. ROI may not always appear as a direct financial return. Sometimes the business value is avoiding regulatory exposure, improving consistency, strengthening documentation, or reducing unmanaged risk.

Training Is No Longer Optional

AI training must move beyond general awareness. Employees need practical, role-based instruction. They need to know which tools are approved. They need to know what data is prohibited. They need to understand when human review is required. They need to know how to report AI concerns, errors, bias, hallucinations, or misuse. They also need to understand that AI output is not a substitute for professional judgment.

For compliance teams, training should include investigators, auditors, third-party managers, procurement, legal, finance, HR, IT, and business leaders. The message should be clear: AI can support the work, but it does not remove accountability.

Build AI In, Do Not Bolt It On

One of the most practical insights from the conference was that AI should be built into business processes, not bolted on afterward. That distinction matters. Bolted-on AI becomes a tool without governance. Built-in AI becomes part of the control environment.

For example, in third-party risk management, AI can help analyze due diligence responses, identify red flags, monitor adverse media, track contract obligations, and support ongoing risk scoring. But it must be embedded into a process with human oversight, escalation protocols, audit trails, and testing. The same applies to investigations, hotline analytics, policy management, training, and monitoring. AI should strengthen compliance processes, not bypass them.

The CCO Must Have a Seat at the AI Table

The compliance function should not wait to be invited into AI governance. It should claim its role. The CCO brings the language of risk, controls, accountability, documentation, monitoring, and culture. Those are precisely the disciplines AI governance requires. Compliance should help design AI approval workflows, risk assessments, training, third-party reviews, monitoring plans, and board reporting.

This does not mean compliance owns every AI decision. It means compliance must be part of the governance architecture. AI governance should be cross-functional, with legal, compliance, IT, privacy, cybersecurity, internal audit, procurement, HR, and the business working together. But compliance must ensure that the program is not simply innovative. It must be defensible.

Practical Takeaways for Compliance Professionals

  1. Create an AI inventory. Know what tools are being used, by whom, for what purpose, and with what data.
  2. Establish an AI governance committee. Include compliance, legal, IT, privacy, cybersecurity, internal audit, procurement, and business leadership.
  3. Build a risk-based approval process. High-risk AI use cases should require enhanced review, documentation, testing, and escalation.
  4. Address shadow AI directly. Do not assume employees are waiting for policy guidance. Identify actual use and bring it into governance.
  5. Train by role and risk. General AI awareness is not enough. Employees need practical rules for approved tools, prohibited data, human review, and reporting.
  6. Extend third-party risk management to AI. Vendor diligence, contracts, audit rights, monitoring, and renewal reviews should include AI-specific questions.
  7. Monitor and improve. AI governance is not a one-time policy exercise. It requires testing, metrics, incident review, and continuous improvement.
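The risk-based approval process in takeaway 3 can be sketched as a simple tiering function: the riskier the attributes of a use case, the more review steps it triggers. The attribute names and review steps below are hypothetical illustrations, not a prescribed workflow.

```python
def required_review_steps(external_impact: bool,
                          sensitive_data: bool,
                          automated_decision: bool) -> list:
    """Map illustrative risk attributes of an AI use case to review steps.

    Every use case gets a baseline; higher-risk attributes add enhanced
    review, documentation, testing, and escalation, echoing takeaway 3.
    """
    steps = ["business-owner sign-off", "AI use register entry"]
    if sensitive_data:
        steps += ["privacy review", "data-handling controls"]
    if external_impact or automated_decision:
        steps += ["legal review", "enhanced documentation", "pre-deployment testing"]
    if external_impact and automated_decision:
        steps += ["escalation to AI governance committee"]
    return steps
```

For example, an internal drafting tool touching no sensitive data would clear with the baseline steps, while a customer-facing tool that makes automated decisions would accumulate the full set, including committee escalation.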

Board Questions

  1. Do we have an inventory of AI tools currently used across the enterprise?
  2. Who approves AI use cases, and how are high-risk uses escalated?
  3. How do we detect and manage shadow AI?
  4. What data is prohibited from being entered into AI tools?
  5. How are third-party AI vendors reviewed, contracted, monitored, and audited?
  6. What AI metrics does management provide to the board?
  7. Who has the authority to pause or terminate an AI project that creates unacceptable risk?

CCO Questions

  1. Is compliance involved before AI tools are deployed?
  2. Do our policies distinguish between approved, restricted, and prohibited uses of AI?
  3. Can we prove employees have been trained on AI risks?
  4. Do we have a documented AI risk assessment process?
  5. Are AI controls tested by internal audit or another independent function?
  6. Are AI incidents, errors, and misuse captured through speak-up and escalation systems?
  7. Can we show regulators that our AI governance works in practice?

Conclusion

Compliance Week 2026 confirmed that AI has crossed the threshold from emerging technology to core compliance risk. The companies that succeed will not be those that chase every new tool. They will be the companies that govern AI with discipline. For the modern CCO, this is the moment to step forward. AI governance belongs squarely within the compliance conversation because it involves risk, accountability, culture, controls, third parties, monitoring, and board oversight. Those are the foundations of effective compliance.

AI may change the tools. It does not change the obligation. Governance still matters. Controls still matter. Culture still matters. Accountability still matters. And compliance must help lead the way.

Sunday Book Review

Sunday Book Review: May 10, 2026, The Top Books on AI Governance Edition

In the Sunday Book Review, Tom Fox considers books that would interest compliance professionals, business executives, or anyone curious. It could be books about business, compliance, history, leadership, current events, or anything else that might interest Tom. In this episode, we look at four top books on AI governance.

  • AI Governance: Secure, Privacy-preserving, Ethical Systems by Engin Bozdag & Stefano Bennati (2026) 
  • Governing the Machine: How to Navigate the Risks of AI and Unlock Its True Potential by Ray Eitel-Porter, Paul Dongha, & Miriam Vogel (2025) 
  • A Short & Happy Guide to AI Governance and Regulation by Kashyap Kompella & James Cooper (2025) 
  • Mastering AI Governance: A Guide to Building Trustworthy and Transparent AI Systems by Rajendra Gangavarapu (2025)
Daily Compliance News

Daily Compliance News: May 8, 2026, The Unwinding the Sleaze Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • A Hungarian lesson in unwinding corruption. (Bloomberg)
  • Gambling regulators are investigating Tech QB. (ESPN)
  • The ChatGPT-ification of businesses. (WSJ)
  • A top bank official in Ukraine was suspended for corruption. (KYIV Independent)

For more information on the use of AI in compliance programs, Tom Fox’s new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out Tom’s latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Betting the Game

Betting the Game: Inside Information: The New Edge in the Betting Economy

Betting the Game is a 10-part podcast series exploring how sports gambling reshaped the business, culture, and integrity of athletics across professional and amateur sports. Hosted by Tom Fox and Mike DeBernardis, the series examines the real-world collisions between betting markets, athlete conduct, institutional oversight, and public trust. Each episode examines a different pressure point, from player betting and college sports to prop bets, insider information, and governance failures that can put the credibility of competition at risk. At its core, the series asks a simple but urgent question: as gambling became mainstream in sports, did ethics, compliance, and oversight keep pace?

In episode 3 of Betting the Game, Tom and Mike examine one of the most important and least understood integrity risks in modern sports betting: inside information. The episode explores how injury updates, lineup changes, load management decisions, clubhouse knowledge, and trusted access to athletes can all become market-moving information in a legalized, mobile, real-time betting environment. Using examples from NFL injury-report enforcement, NBA late lineup disclosures, and baseball’s clubhouse ecosystem, including the Ohtani-Mizuhara matter, Tom and Mike explain why sports now face a governance challenge that increasingly resembles insider trading risk. At its core, this episode asks a simple but urgent question: who knows what, when do they know it, and what controls exist to prevent that information from being misused?

Key highlights:

  • Inside information is now an integrity issue, not just competitive intelligence.
  • NFL injury reports function like disclosure controls.
  • NBA load management creates real-time information asymmetry.
  • The risk extends far beyond players.
  • Sports needs a true compliance framework for market-sensitive information.

Resources:

Mike DeBernardis on LinkedIn

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

AI in Healthcare

AI in Healthcare: Five Healthcare AI Stories You Need to Know This Week – May 8, 2026

Welcome to AI in Healthcare in 5 Stories. This podcast is a Weekly Briefing of the five most important AI developments shaping healthcare, medicine, and life sciences. Each week, Tom Fox breaks down the latest stories on clinical innovation, regulation, privacy, compliance, patient safety, and operational transformation through a practical, business-focused lens. Designed for healthcare compliance professionals, executives, legal teams, clinicians, and industry leaders, the podcast moves beyond headlines to explain what each development means in the real world.

The top five stories for the week ending May 8, 2026, include:

  1. The bot impersonates a doctor, and the company is being sued. (HealthExec)
  2. AI and disparities in healthcare. (KFF)
  3. AI literacy in healthcare. (The Times Higher Education)
  4. AI in pharma development. (Contract Development)
  5. Doctors are recording your visits with AI. (WBUR)

For more information on the use of AI in Compliance programs, Tom Fox’s new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out Tom’s latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Pod and Port

Pod and Port: Podcasting, Social Media and Yacht Rock – From Vanity Metrics to Attribution: Creator Marketing Takeaways and a Yacht Rock Spotlight on Toto

In Pod & Port: Podcasting, Social Media and Yacht Rock, Tom Fox and Jeff Dwoskin explore a major shift in how creators, marketers, podcasters, and business owners should think about Instagram: it is no longer just a closed social platform. With stronger Google indexing, Instagram content can now have a much longer life cycle, which means captions, keywords, file names, and value-driven content matter more than ever.

Tom and Jeff discuss Jeff’s takeaways from Affiliate Summit West and Creator Economy Live in Las Vegas, focusing on the industry shift from influencer vanity metrics (likes, reach, impressions) to performance-driven creator marketing grounded in attribution, clicks, conversions, and revenue—an area Jeff ties to Stampede Social’s ability to capture intent signals and provide end-to-end tracking and real-time optimization. They address limits on demographic data from platforms and note that deeper demographics often require trust-based registration over time. Jeff explains that influence builds across multiple touchpoints, enabling analysis of engagement frequency and quality, while TikTok’s “watch, want, buy” reflects a collapsed funnel, enabled by relationships and trust, that can apply beyond consumer contexts. They also explore creators as infrastructure that builds community, illustrated by Fox’s Great Women in Compliance podcast, the resulting LinkedIn community, and the awards. The episode closes with a yacht rock discussion praising Toto’s elite musicianship and live performance.

Key takeaways:

  • From Vanity to Attribution
  • Real-Time Optimization Model
  • Influence Takes Touchpoints
  • Creators as Infrastructure
  • Building Community Example
  • Yacht Rock Spotlight: Toto

Resources:

Jeff

Jeff Dwoskin on LinkedIn

Stampede Social website

Toto on Spotify

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn