Categories
Daily Compliance News

Daily Compliance News: March 16, 2026 the Fighting Corruption ‘Not Worth It’ Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

  • Rapper who fought against corruption set to be PM of Nepal. (CNN)
  • EDNY says fighting appeal of FIFA corruption case not worth the resources. (Reuters)
  • UBS settles long running whistleblower case. (Reuters)
  • Judge questions DOJ dropping Halkbank AML case. (Bloomberg)
FCPA Compliance Report

SDNY’s New Policy on Declinations

In this episode, Tom Fox welcomes back Hughes Hubbard partner Mike DeBernardis to discuss the Southern District of New York’s new corporate enforcement voluntary self-disclosure program for financial crimes. They explore why SDNY leadership, including Jay Clayton, likely issued it: to encourage self-disclosure that saves enforcement resources and supports DOJ’s focus on individual accountability.

They compare the policy to DOJ’s (now former) Corporate Enforcement Policy, highlighting notable distinctions. SDNY’s scope is narrower, covering financial and market-integrity offenses, and its revised approach to aggravating factors excludes common CEP considerations such as seriousness, pervasiveness, and senior management involvement, while carving out categories including foreign bribery and sanctions evasion, potentially reducing forum shopping. They also examine the program’s “conditional declination” available within two to three weeks, its implications for investigation speed and timeliness, and the added pressure from whistleblower programs and compressed internal triage timelines.

Key Highlights

  • Why SDNY Issued It
  • SDNY Significance
  • Aggravating Factors Shift
  • Does It Move the Needle
  • Conditional Declination Speed
  • Whistleblowers and Pressure

Resources

Hughes Hubbard & Reed

Mike DeBernardis on LinkedIn

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

For more information on the use of AI in compliance programs, see my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.

AI Today in 5

AI Today in 5: March 16, 2026 the Who Owns the Decision Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, I will bring you five AI stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

  1. AI boosts brainstorming. (com)
  2. The AI Imperative. (Wolters Kluwer)
  3. Who owns compliance decisions. (FinTechWeekly)
  4. AI opens new front in hospitals v. insurers battle. (Reuters)
  5. Embodied AI for manufacturing. (FinanceMagnates)

For more information on the use of AI in compliance programs, see my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.

Blog

The GenAI Playbook for Compliance

There is a question I continue to hear from compliance professionals, boards, and senior executives alike: “When will generative AI finally be good enough for us to trust it?” As Bharat Anand and Andy Wu argue in their recent Harvard Business Review article, The GenAI Playbook for Organizations, this is the wrong question.

The better question, and the one every Chief Compliance Officer should be asking right now, is this: “Where can we use GenAI effectively today, with the right controls, to make our compliance program more efficient, more resilient, and more business relevant?” This is their core insight: leaders should stop obsessing over whether GenAI is perfect and instead focus on where it can create value now, and on how strategy, not speed alone, wins.

For the compliance profession, that insight lands with particular force. We are not in the business of chasing shiny objects. We are in the business of managing risk, enabling growth, and preserving trust. GenAI is not a parlor trick. It is becoming an operating reality. The question is no longer whether compliance should engage. The question is whether compliance will lead with discipline or lag behind while the business adopts AI without it.

Stop Asking Whether AI Is Smart. Start Asking Where Errors Matter.

One of the most useful contributions from the article is its simple but powerful framework: evaluate GenAI use cases through two lenses. First, what is the cost of error? Second, does the task rely primarily on explicit data or on tacit human judgment? That is gold for compliance.

Too many organizations still evaluate AI in sweeping, binary terms. Either they think it is magical or they think it is too dangerous to touch. Neither position is helpful. Compliance officers need a more operational lens. We need to break work into tasks and then ask where automation is appropriate, where human oversight is essential, and where human judgment must remain firmly in control. In my view, that is exactly how mature compliance programs should approach GenAI. Not with ideology. With risk assessment.
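The two-lens framework lends itself to a simple triage sketch. The following is an illustrative Python function, not from the article; the zone names follow the authors’ framework, but the function and enum names are my own:

```python
from enum import Enum

class Knowledge(Enum):
    EXPLICIT = "explicit"  # codified rules, documents, data
    TACIT = "tacit"        # judgment, context, experience

def triage(cost_of_error_high: bool, knowledge: Knowledge) -> str:
    """Map a task to a GenAI deployment zone using the two lenses:
    cost of error and explicit vs. tacit knowledge."""
    if not cost_of_error_high and knowledge is Knowledge.EXPLICIT:
        return "no-regrets"        # deploy now with light oversight
    if cost_of_error_high and knowledge is Knowledge.EXPLICIT:
        return "quality-control"   # GenAI drafts, humans verify
    if cost_of_error_high and knowledge is Knowledge.TACIT:
        return "human-first"       # AI may support, never decide
    return "case-by-case"          # low stakes, but judgment-heavy

# Policy summarization vs. a disciplinary decision:
print(triage(False, Knowledge.EXPLICIT))  # no-regrets
print(triage(True, Knowledge.TACIT))      # human-first
```

The point of the sketch is the discipline, not the code: every use case gets placed on the grid before anyone decides how much autonomy the tool receives.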

The “No Regrets” Zone for Compliance

The article identifies a “no regrets” zone: low cost of error, explicit knowledge, and high potential for immediate deployment. Examples include summarizing documents, screening resumes, or handling routine inquiries. In compliance, this is where many early wins live.

Think about policy summarization, training-content adaptation, meeting-note extraction, initial hotline trend coding, third-party questionnaire triage, basic control documentation, and first-draft responses to routine business questions. None of these tasks should be delegated blindly. But many can be accelerated responsibly.

For instance, a compliance team buried under requests from procurement, HR, sales, and legal can use GenAI to produce first-pass summaries of policies, draft FAQs, organize issue logs, and identify recurring themes from employee questions. That does not replace the compliance professional. It frees that professional to do what matters more: judgment, influence, escalation, and strategic problem-solving.

This is where I think many compliance teams have been and continue to be too timid. They have waited for perfection in a space where perfection was never the benchmark. The benchmark should be whether the tool improves speed, lowers administrative friction, and allows compliance personnel to move up the value chain.

The “Quality Control” Zone Is the Compliance Sweet Spot

The article also identifies a “quality control” zone, where the knowledge is explicit but the cost of error is high. In those cases, GenAI can do substantial work, but humans must verify, review, and retain accountability. The authors cite legal drafting, software development, and financial due diligence as examples. That is the very heartland of compliance.

Consider sanctions screening narratives, third-party due diligence memos, internal investigation chronologies, risk assessment documentation, compliance testing workpapers, and board reporting drafts. These are exactly the kinds of tasks where GenAI can accelerate the heavy lifting but should never be the final word.

This is also where compliance can bring discipline to the rest of the enterprise. The business may want speed. Compliance must insist on verified speed. A practical model is straightforward: GenAI drafts → humans review → controls document → leaders own.

That is not anti-innovation. That is responsible innovation. It is also consistent with what regulators increasingly expect: not the absence of AI, but governance around its use. Whether one looks to the DOJ’s emphasis on effective controls and continuous improvement in the Evaluation of Corporate Compliance Programs, the NIST AI Risk Management Framework, or the growing global focus on AI governance, the message is the same. If your company uses AI in a consequential process, you had better know where it is being used, who is checking it, what data is feeding it, and how errors are being caught.

The “Human-First” Zone Must Stay Human

The article is particularly strong in its warning about tasks that require tacit knowledge and carry a high cost of error: strategy, sensitive personnel decisions, crisis leadership, and other matters where judgment, ethics, and context are central. In those cases, GenAI may support, but it should not decide. Compliance professionals should print that out and tape it to the wall.

Some activities must remain human-led. Decisions about discipline, executive accountability, remediation after a serious investigation, disclosure strategy, culture assessment, or whether a business relationship “feels wrong” despite facially acceptable paperwork are not suitable for AI-driven decision-making. They require experience, intuition, moral clarity, and often courage.

That does not mean AI has no role. It can assemble facts, surface patterns, propose draft communications, and model possible outcomes. But it cannot own the judgment. In a compliance function, the more consequential the decision, the more important it is that a human being stands behind it. That is not nostalgia. That is governance.

Broad Access Without Chaos

One of the article’s more provocative arguments is that organizations should mandate broad access to GenAI tools, because value creation begins when employees can experiment and discover useful applications. At the same time, the authors warn against bottlenecks that leave innovation trapped behind slow approval processes. I agree with the spirit of that point, but from a compliance perspective there must be an important qualifier: broad access does not mean unmanaged access.

This is where the compliance function can truly be a business enabler. Compliance should not be the department of “no AI.” It should be the department of “safe AI at scale.” That means several things.

  1. Build a risk-based use policy for GenAI. Employees need clear guidance on prohibited uses, approved tools, escalation triggers, and data-handling requirements.
  2. Classify use cases. Not every AI use case deserves the same scrutiny. A tool helping draft a training outline is not the same as a tool helping assess third-party bribery risk.
  3. Establish review protocols. High-risk outputs require human validation, documented sign-off, and in some cases legal or compliance approval.
  4. Train broadly and repeatedly. AI governance cannot live in a PDF on an intranet site. It has to be operationalized through real examples and practical scenarios.
  5. Monitor and improve. If GenAI is being used across the enterprise, compliance should have visibility into where, how, and with what effect.

That is what a mature AI governance program looks like. It is also the same risk management protocol that every compliance professional uses on a daily basis.
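The classification step (point 2 above) could be expressed as a simple use-case register that maps each approved use to a review tier. This is an illustrative sketch; the tier names, use-case names, and the idea of defaulting unknown uses to the strictest tier are my assumptions, not prescriptions from the article:

```python
# Hypothetical risk-based register of GenAI use cases.
# Low-risk uses get light review; consequential ones escalate.
USE_CASE_TIERS = {
    "training_outline_draft":   "self-review",
    "policy_summarization":     "self-review",
    "dd_memo_first_draft":      "compliance-review",
    "third_party_risk_scoring": "legal-signoff",
}

def required_review(use_case: str) -> str:
    """Unknown or unregistered use cases fail closed to the
    strictest tier until someone classifies them."""
    return USE_CASE_TIERS.get(use_case, "legal-signoff")

print(required_review("policy_summarization"))    # self-review
print(required_review("novel_unregistered_use"))  # legal-signoff
```

The register also gives compliance the visibility point 5 calls for: if every use case must be classified to run, the function knows where AI is being used and at what risk level.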

Data Is the Real Compliance Story

Another important insight from the article is that competitive advantage will come not merely from adopting GenAI but from pairing it with proprietary data, redesigned workflows, and complementary organizational assets. The authors emphasize centralizing data, identifying what data is not yet being collected, and redesigning the organization around AI-enabled learning loops. For compliance, this should be a wake-up call.

Most compliance functions are sitting on a treasure trove of underused data: hotline reports, training metrics, policy attestations, third-party files, gifts and entertainment data, investigation outcomes, audit findings, HR trends, distributor analytics, and culture survey results. Yet in many companies, that information remains fragmented across systems and functions.

If compliance wants to be strategic in the AI era, it has to get serious about data architecture. Not simply for reporting, but for insight. The future compliance advantage will go to organizations that can connect signals across functions and convert them into earlier detection, smarter resource allocation, and more tailored interventions. In other words, the future of compliance is not just controls. It is controls plus intelligence.

Three Questions Every CCO Should Ask This Week

So where does this leave the compliance officer trying to lead in real time? I would suggest three immediate questions. First, which compliance tasks are in the “no regrets” zone and should be piloted now? Second, which tasks sit in the “quality control” zone and require a formal human-in-the-loop process? Third, which decisions are so consequential, contextual, or values-laden that they must remain unmistakably human-first?

If you cannot answer those questions, your company does not yet have a GenAI strategy for compliance. It has experimentation without governance or caution without direction. Neither is sustainable.

The GenAI era will not reward the fastest organization. It will reward the organization that best aligns technology, governance, data, and human judgment. That is the compliance challenge. It is also the compliance opportunity. Compliance has always been about more than preventing misconduct. At its best, it helps a company make better decisions, allocate trust wisely, and compete with integrity. GenAI does not change that mission. It sharpens it. The playbook is here. The real question is whether compliance will run it.