Categories
All Things Investigations

ATI In-House Insights: Cultivating a Speak Up Culture: Whistleblower Management Insights with Maria Buccieri and Ashley Smith

Welcome to the Hughes Hubbard Anti-Corruption & Internal Investigations Practice Group’s podcast, All Things Investigations. This is a special series featuring insights from in-house practitioners, hosted by Mike DeBernardis. In this episode, Mike visits with Maria Buccieri and Ashley Smith, Deputy General Counsel at Amtrak, about encouraging and managing whistleblowers.

Ashley and Maria, both compliance and legal leaders from Amtrak, discuss how to encourage and manage whistleblowers as a core element of an effective compliance program, emphasizing that a lack of reports does not indicate a healthy organization. They describe a “speak-up” culture as one where employees feel heard, senior leaders model speaking up, and reporting is accessible across a diverse workforce through multiple channels (phone, email, QR codes, mobile tools, in-person availability) and languages. Key barriers include fear of retaliation (often through subtle workplace ostracism), disappointment when nothing happens, and loss of anonymity. They outline best practices for handling reports consistently with other serious complaints, preserving confidentiality “as much as possible,” training mid-level managers and investigators, and maintaining communication with reporters during lengthy investigations. They also caution against dismissing “serial reporters,” recommending contextual analysis and internal process checks.

Key highlights:

  • Healthy Speak Up Culture
  • Why Employees Stay Silent
  • Handling Reports Fairly
  • Protecting Confidentiality
  • Keeping Reporters Updated
  • Serial Reporters and Sparse Tips

Resources:

Hughes Hubbard & Reed Website

Mike DeBernardis

Maria Buccieri on LinkedIn

Ashley Smith on LinkedIn

Categories
AI Today in 5

AI Today in 5: April 15, 2026, The Tax Day Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. What does the US National AI framework mean for healthcare? (JD Supra)
  2. AI for investigative impact. (FinTechGlobal)
  3. FinTech bets big on AI agents. (IBS Intelligence)
  4. Oracle debuts AI agents for banking. (PYMNTS)
  5. BOE urges regulators to assess cyber AI risks. (Bloomberg)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories
Compliance Into the Weeds

Compliance into the Weeds: Surveying Retaliation Against Compliance Officers

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore it more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly discuss a new anonymous Radical Compliance survey, launched with Case IQ and Compliance Week, to quantify retaliation against compliance officers who raise compliance concerns to senior management.

The survey asks what misconduct was reported, who retaliated, what forms of retaliation occurred (such as firing, demotion, harassment, budget cuts, and blacklisting), and what actions followed. Matt also encourages responses from those who have not experienced retaliation. Tom and Matt have previously discussed the issue anecdotally but have not systematically studied it; they plan to publish their findings and host a webinar later in the spring, likely in June. They also discuss potential structural protections informed by the data, such as disclosure expectations around CCO departures (e.g., 8-K concepts) and contract/regulatory-approval models like those in India’s banking sector, and suggest that the findings could inform DOJ views on compliance autonomy and effective compliance programs.

Key highlights:

  • Survey Launch Explained
  • Retaliation Questions
  • Why This Study Matters
  • Defining Prevalence
  • Using Findings for Change
  • Final Call to Participate

Resources:

Matt on Radical Compliance

Survey on Retaliation Against Compliance Professionals

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred a Davey, a Communicator Award, and a W3 Award, all for podcast excellence.

Categories
Daily Compliance News

Daily Compliance News: April 15, 2026, The Decoupling Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:


Categories
Great Women in Compliance

Great Women in Compliance: Clarity, Confidence, Results: Women Over 50 at Work

In this episode, Sarah Hadden and Caveni Wong explore the unique strengths women over 50 bring to today’s workplace—and why those strengths are often overlooked.

Drawing on a career that spans consulting, sales, and ethics & compliance leadership, Caveni reflects on the power of experience, the value of judgment and relationship-building, and the kind of leadership that doesn’t rely on title or authority. They talk candidly about nonlinear career paths and what it means to reach a stage where you can choose what’s next with clarity and confidence.

Along the way, they find an unexpected metaphor in sourdough bread—patient, resilient, and built over time—much like the careers and capabilities we develop across decades.

Categories
Blog

When AI Becomes Evidence of Bad Governance: What CCOs and Boards Can Learn from Fortis Advisors

The Delaware Court of Chancery has handed compliance leaders and boards a timely lesson: generative AI is not a substitute for judgment, legal discipline, or governance. When leaders use AI to validate a predetermined objective, the technology does not reduce risk. It can become powerful evidence of intent, bad faith, and control failure.

A Cautionary Tale for Corporate Leaders

The recent Delaware Court of Chancery decision in Fortis Advisors, LLC v. Krafton, Inc. should be read by every Chief Compliance Officer (CCO), board member, general counsel, and corporate deal professional. The decision recounts a dispute in which a buyer, apparently unhappy with a substantial earnout obligation, turned to ChatGPT for advice on how to escape the economic consequences of the deal. According to the court’s account, the buyer then executed an AI-generated strategy designed to renegotiate the arrangement or wrest control from the seller’s management team. The court ultimately found that the buyer had wrongfully terminated key employees and improperly seized operational control; as a remedy, it reinstated the seller’s CEO and extended the earnout window to restore a genuine opportunity to achieve the payout.

The Real Compliance Lesson

For compliance professionals, the most important lesson is not that AI is dangerous. The lesson is that leadership can use AI in dangerous ways when governance is absent. That is a far more important point.

Too many organizations still approach AI governance as a technology problem. They focus on model performance, cybersecurity, or procurement review. Those are important issues, but this case reminds us that AI governance begins with human purpose. What question was asked? What objective was embedded in the prompt? What controls existed before action was taken? Who challenged the proposed course of conduct? Who documented the legal and ethical analysis? Those are compliance questions. Those are board questions.

Viewing the Case Through the DOJ ECCP Lens

This is also where the DOJ’s Evaluation of Corporate Compliance Programs (ECCP) provides a useful lens. The ECCP asks whether a company’s program is well designed, adequately resourced, empowered to function effectively, and actually works in practice. Put that framework over this fact pattern, and the governance gaps become painfully clear. Was there a control around the use of generative AI in strategic or legal decision-making? Was there escalation to legal, compliance, or the board when a significant earnout exposure was at stake? Was there any meaningful challenge function, or did leadership use AI as a convenient amplifier for a business objective it had already chosen?

The case suggests the latter. That should concern every board. Generative AI can be useful in brainstorming, summarizing, and scenario testing. But when executives use it to reinforce a desired outcome, particularly one touching contractual obligations, employment decisions, or post-closing governance rights, the tool can become a mechanism for rationalizing misconduct.

When AI Chats Become Discoverable Evidence

Worse, using AI this way creates a record. The court noted that the AI chats were not privileged, were discoverable, and vividly underscored the buyer’s efforts to avoid its legal obligations. That point alone should stop corporate leaders in their tracks.

Many executives still treat AI chats as an informal thinking space, almost like talking to themselves. That is a serious mistake. Prompt histories, outputs, internal forwarding, and downstream use can all become evidence. If employees use public or enterprise AI tools to explore termination strategies, dispute positions, or ways around contractual commitments, they may be creating exactly the documentary record that plaintiffs, regulators, and judges will later find most compelling. In other words, the issue is not simply data leakage. It is discoverability, privilege erosion, and self-generated evidence of intent.

That is why CCOs and boards need to move beyond generic AI-use policies and build governance around high-risk use cases. The question should not be, “Do we allow ChatGPT?” The question should be, “Under what circumstances can generative AI be used in decisions involving legal rights, employee discipline, regulatory exposure, strategic transactions, or board-level matters?” If the answer is unclear, the company has work to do.

The M&A and Earnout Governance Lesson

The dealmaking lesson here is equally important. Earnouts are already fertile ground for post-closing disputes because they sit at the intersection of incentives, control, and timing. Buyers often want flexibility. Sellers want protection from interference. This case illustrates what can happen when a buyer attempts to manipulate operations in a way that affects the achievement of the earnout. The court not only found wrongful interference but also equitably extended the earnout period by 258 days and preserved a further contractual right to extend, thereby materially altering the deal’s economic landscape.

That is a governance lesson hiding inside an M&A lesson. Once a company acquires a business with earnout rights and operational covenants, post-closing conduct is no longer just integration management. It is compliance management. Interference with operational control, pretextual terminations, or actions designed to suppress performance metrics can lead to litigation, destroy value, and trigger judicial remedies that boards did not expect. CCOs should therefore insist that M&A integration playbooks include compliance review of earnout governance, decision rights, escalation protocols, and documentation standards.

Five Lessons for Boards and CCOs

What should boards and compliance officers do now? Here are five lessons.

  1. Govern the objective before you govern the tool. AI is only as sound as the purpose for which it is deployed. If leadership starts with a bad objective, AI can scale the problem. Boards should require management to define prohibited uses of AI, such as contract avoidance, pretextual employee actions, retaliation, and legal strategy conducted without oversight by counsel.
  2. Treat high-risk AI prompts and outputs as governed business records. If a prompt relates to litigation, terminations, regulatory response, deal rights, or board matters, it should fall within clear policies on retention, review, and escalation. Employees need to understand that AI interactions may be discoverable and may not be privileged.
  3. Embed legal and compliance into consequential AI use cases. The ECCP emphasizes whether compliance has stature, access, and authority. That principle applies directly here. Strategic uses of AI that touch contractual rights, employment decisions, or fiduciary issues should not proceed without legal and compliance review.
  4. Build AI governance into M&A and post-closing integration. Earnout structures, operational covenants, and seller management rights are precisely the areas where incentives can distort behavior. Boards should ask whether integration teams have controls preventing actions that could be viewed as interference, manipulation, or bad-faith conduct.
  5. Document challenge, not just action. A single final decision does not prove good governance; the process surrounding it does. Was there dissent? Was there analysis? Was there an escalation memo? Was there a documented rationale grounded in law, contract, and fiduciary duty? If not, the company may be left with a record that tells the wrong story.

Governance Must Come Before AI

In the end, this case is not really about a video game company. It is about a governance failure dressed in modern technology. Leaders appear to have used AI not to improve judgment, but to reinforce a course of conduct they already wanted to pursue. That is the compliance lesson. AI does not remove the need for fiduciary discipline, legal oversight, or ethical restraint. It makes those requirements more urgent.

For boards and CCOs, the mandate is clear. Governance must come first. Because when AI is used without guardrails, it does not merely create risk; it creates a record. It can become the evidence.