AI Today in 5: March 18, 2026, The AI Compliance in GCC Companies Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI from the business world, compliance, ethics, risk management, leadership, or general interest to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. Data privacy and compliance in the age of AI. (BloombergLaw)
  2. Agentic AI in insurance claims. (FinTechGlobal)
  3. Leading through AI transformation in FinTech. (Forbes)
  4. AI compliance in GCC organizations. (BizToday)
  5. Does healthcare need specialized AI? (HBR)

For more information on the use of AI in compliance programs, check out my new book, Upping Your Game, available for purchase on Amazon.com.

Compliance into the Weeds: McKinsey’s Lilli AI Hack: What It Signals for AI Governance, Security and Disclosure

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore it more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly look at the recent hack of McKinsey’s AI tool Lilli.

Tom and Matt discuss a Financial Times report that a white-hat hacker, Paul Price of one-person firm Code Wall, exploited flaws in McKinsey’s internal AI tool “Lilli” to access millions of internal chat messages, view sensitive client-related file names, and see the system’s underlying model weights; McKinsey patched the vulnerabilities after disclosure. They argue that the incident highlights emerging AI risks beyond traditional cybersecurity, including AI agents autonomously scouting for targets, the possibility of attackers tampering with models to skew outputs and create hard-to-detect “drift,” and confusion over who within organizations owns AI security and governance. The episode also explores the messy, inconsistent landscape of disclosure for AI-related incidents and urges compliance and GRC leaders to slow AI adoption, pressure-test systems, clarify accountability, ensure kill-switch and manual-fallback capabilities, and consider reputational fallout.

Key highlights:

  • McKinsey AI Hack Overview
  • Three Big Implications
  • Model Drift and Tampering
  • GRC Playbook for AI Risk
  • Accountability and Kill Switches

Resources:

Matt in Radical Compliance

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. It has also received a Davey Award, a Communicator Award, and a W3 Award, all for podcast excellence.

Daily Compliance News: March 18, 2026, The Corruption or Misunderstanding Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you four compliance-related stories from the business world, compliance, ethics, risk management, leadership, or general interest to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network.

Top stories include:

  • Closing statements in FirstEnergy corruption trial. (Ohio Capital Journal)
  • US banks are entangled with private credit. (WSJ)
  • Crispin Odey is accused of falsifying meeting minutes and lying to the FCA. (FT)
  • AZ files criminal charges against Kalshi. (NYT)

AI Is Only as Good as the Data: What Compliance Leaders Need to Know About Data Readiness

There is an old lesson in compliance that remains evergreen: bad facts produce bad decisions. The same is true for data science: Garbage In, Garbage Out (GIGO). In the GenAI era, that lesson has a new twist. Bad data produces bad outputs at machine speed.

That is why the report, Taming the Complexity of AI Data Readiness, deserves the attention of every Chief Compliance Officer, compliance technologist, and board member who asks management, “What is our AI strategy?” The better follow-up question is, “What is our data readiness strategy?” Because the report makes one point with unmistakable clarity: the model is not the mission; the data foundation is.

For compliance professionals, this is not a technical side issue. It is central to the enterprise risk conversation. If your organization is training, testing, or deploying AI on messy, siloed, biased, stale, or poorly governed data, you are not building a competitive advantage. You are industrializing risk.

The Dirty Little Secret of Enterprise AI

The report lays out a reality that will not surprise anyone who has lived through a data initiative. Most organizations are not ready. Only 7% of survey respondents said their company’s data was completely ready for AI adoption. By contrast, 51% said it was only somewhat ready, while 27% said it was not very or not at all ready. Only 42% said their organization had high trust in its AI data, and 73% agreed their company should prioritize AI data quality more than it currently does. That should give every compliance officer pause.

We are living through a corporate rush toward GenAI, yet most companies are still stuck at the same old starting line: fragmented, inconsistent, poorly governed data. Many AI conversations inside companies still begin with use cases, copilots, and vendor demos. Far fewer begin with data lineage, data permissions, data quality, or governance maturity. That is a mistake.

If the underlying data is unreliable, the downstream output will be unreliable as well. Worse, it may arrive dressed up in polished prose, persuasive charts, or tidy summaries that create a false sense of confidence. In compliance, that is especially dangerous. Whether the use case is sanctions screening, due diligence, internal investigations, policy management, financial controls, or regulatory reporting, a bad answer delivered quickly is still a bad answer.

Bad Data Is Not Just a Tech Problem

One of the most useful parts of the report is how it frames the core barriers. The top challenge, cited by 56% of respondents, was siloed data and difficulty integrating sources. A lack of a clear data strategy followed at 44%, and data quality or bias issues at 41%. Other concerns included regulatory constraints on data use, unclear data lineage, inadequate security, and outdated data. Every one of those should sound familiar to compliance professionals.

Siloed data means incomplete visibility. Weak lineage means you may not be able to defend how an answer was generated. Bias in the data means distorted outputs. Outdated data means inaccurate decisions. Weak security exposes sensitive information. Regulatory constraints mean the company may not even have the right to use certain data the way its AI aspirations assume.

The report underscores this point: 52% of respondents identified inaccurate or biased AI results as a top concern, while 40% cited the loss of security or intellectual property. That is not abstract. That is the modern compliance risk register.

Can We Trust the Data?

A quote from Teresa Tung of Accenture in the report is worth lingering over. She said data readiness means “you can access data to see an accurate view of what is happening in your business and what you can do about it.” That is also a very good working definition of compliance intelligence.

A mature compliance program helps a company understand what is happening inside the business and what should be done in response. That means your hotline data, your gifts and entertainment data, your training metrics, your third-party files, your investigation records, and your control data all need to mean what you think they mean.

The report makes this point with a simple example. Price data is not useful unless you know whether it is in U.S. or Australian dollars, whether it is a unit or bulk price, and when it applies. The compliance equivalent is easy to imagine. A third-party risk flag is not useful unless you know what triggered it, what jurisdiction it covers, how recently it was refreshed, what source produced it, and whether anyone validated it. Context is a control. Without it, data can mislead just as easily as it can inform.
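
To make “context is a control” concrete, here is a minimal sketch of what such a third-party risk flag might look like as a governed data record. The language (Python), the field names, and the 90-day freshness rule are illustrative assumptions on my part, not constructs drawn from the report:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ThirdPartyRiskFlag:
    """Illustrative record: a risk flag plus the context needed to trust it."""
    party_id: str
    trigger: str                     # what raised the flag (e.g., an adverse media hit)
    jurisdiction: str                # which jurisdiction the flag covers
    source: str                      # which screening source produced it
    refreshed_on: date               # when the underlying data was last refreshed
    validated_by: str | None = None  # who, if anyone, confirmed the flag

    def is_decision_ready(self, max_age_days: int = 90) -> bool:
        """Usable for a decision only if it is both fresh and validated."""
        fresh = (date.today() - self.refreshed_on) <= timedelta(days=max_age_days)
        return fresh and self.validated_by is not None
```

The point of a record like this is that the flag alone is never the answer; the surrounding fields are what make it defensible.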

Why This Is Becoming a Board-Level Issue

Another important finding is that only 23% of organizations have created a data strategy for AI adoption, although 53% are currently developing one. In other words, companies know they have a problem, but most are still working through it. This is where compliance can truly function as a business enabler.

The best compliance leaders know that governance is not the enemy of innovation. Governance is what makes innovation scalable and sustainable. If the business wants to use AI at scale, compliance should request a documented AI data strategy that addresses security, privacy, data quality, governance, accessibility, bias management, and alignment with business objectives.

The report found that security and protection of sensitive data were the most critical elements of such plans, at 59%, followed by clean, usable data quality at 46% and data governance at 41%. That is not just an IT checklist. That is a board conversation.

Bring AI to the Data

The report also discusses a concept compliance professionals need to understand: data gravity. Large and sensitive data sets tend to stay where they are because moving them is costly, slow, and risky. Increasingly, organizations are turning to architectures that bring AI processing to the data rather than moving data to the model. The report highlights approaches such as zero-copy access and containerized applications that can reduce latency, control costs, and address security and sovereignty concerns. This matters greatly for compliance.

Many regulated environments cannot simply move sensitive data across systems or borders because a vendor wants a cleaner AI workflow. Privacy laws, localization rules, contracts, and plain good judgment all cut against that approach. If AI can be brought to the data rather than copying data into multiple new environments, the organization may reduce both operational and compliance risk.

Compliance officers do not need to become cloud architects. But they do need to ask the right questions. Are we duplicating sensitive data unnecessarily? Are we crossing jurisdictional lines? Can we explain lineage, access, and security? Are we creating an AI environment that is controlled, or one that is improvised?

Agentic AI: Real Promise, Real Risk

The report is optimistic about the potential of agentic AI for data management: 47% of respondents said their organizations believe agentic AI can solve data quality issues, and 65% expect many business processes to be augmented or replaced by agentic AI over the next two years. Experts cited benefits such as mapping data, documenting it, performing quality checks, monitoring drift, and automating routine tasks that previously required significant manual effort.

There is real promise here. Compliance teams spend far too much time on manual work that adds little strategic value. Tools that can responsibly automate mapping, documentation, testing, triage, or drift monitoring deserve serious attention.

But this is no place for magical thinking. The report is equally clear that success requires the right team: data engineers, domain experts, prompt expertise, and a product owner aligned to a business objective. That is the lesson. Agentic AI does not eliminate the need for governance. It raises the stakes for governance. If you automate poor judgment on top of poor data, you do not get efficiency. You get scalable failure.

Five Questions for Every CCO

So what should compliance leaders do now? Start with five questions.

  1. Which AI use cases in our company depend on sensitive, regulated, or high-risk data?
  2. Can we explain the lineage, quality, freshness, permissions, and context of that data?
  3. Do we have a documented AI data strategy, or are we confusing pilots with governance?
  4. Are we moving data in ways that create avoidable privacy, security, or sovereignty risks?
  5. Who owns the meaning of the data?

That final question may be the most important. The report stresses that the business must own the data so it is described properly and used correctly. Data is not just a technical asset. It is a business asset with legal, ethical, and operational meaning. Compliance should insist that meaning be defined before AI starts drawing inferences from it.

The Bottom Line

The great temptation in the AI era is to focus on the model’s brilliance. The wiser course is to focus on the data’s readiness. That is where trust begins. That is where defensibility begins. And that is where sustainable value begins. For compliance professionals, the message is plain. AI governance that ignores data readiness is not governance at all. It is wishful thinking with a dashboard.

The organizations that win with AI will not simply have more tools. They will have better data, better lineage, better controls, better discipline, and better judgment about when and how to use AI. In compliance, that is not glamorous. But it is where real success usually lives.

AI Today in 5: March 17, 2026, The $1tn in Value Wipe-Out Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI from the business world, compliance, ethics, risk management, leadership, or general interest to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. Open Claw exposes Agentic AI risk. (GovInfoSecurity)
  2. Using AI for comms compliance. (FinTechGlobal)
  3. AI with integrity in FinCrime compliance. (Forbes)
  4. AI in life insurance. (InsuranceNewsNet)
  5. Amazon leads the $1tn wipeout in AI value. (CNBC)

For more information on the use of AI in compliance programs, check out my new book, Upping Your Game, available for purchase on Amazon.com.

Daily Compliance News: March 17, 2026, Is the DOJ Corrupt? Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you four compliance-related stories from the business world, compliance, ethics, risk management, leadership, or general interest to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network.

Top stories include:

  • Cyber hacks and Iran. (WSJ)
  • Madagascar’s ABC chief appointed PM. (DM.COM)
  • BoA settles Epstein victims’ lawsuit. (FT)
  • Was there corruption involved in the Live Nation settlement? (BIG)

Innovation in Compliance: Venezuela’s Energy Reopening with Loren Steffy

Innovation comes in many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox visits with energy journalist/publisher Loren Steffy to discuss whether a Trump administration announcement regarding Venezuela is meaningful for oil markets, concluding that it mainly increases uncertainty and is unlikely to drive major U.S. oil-company investment.

They note that West Texas shale generally needs about $60 oil to break even, making $50 oil politically and economically problematic. They explain that Venezuela’s heavy crude requires specialized extraction technology and extensive upgrades to aging infrastructure to reach the market, potentially costing billions and taking decades, with some estimates placing Venezuela’s break-even price at $80 or higher. They emphasize governance, corruption, degraded PDVSA human capital, contract enforceability, and unresolved debts (including a reported $12 billion owed to ConocoPhillips) as key barriers, making Venezuela “uninvestible” for most majors and suggesting only high-risk players might consider entry amid an unclear U.S. strategy.

Key highlights:

  • Venezuela Heavy Crude Basics
  • Infrastructure Rebuild Challenge
  • Human Capital and Governance
  • Old Debts and Legal Risk
  • Government Plan or Subsidies

Resources:

Loren Steffy on LinkedIn

Stoney Creek Publishing 

Innovation in Compliance was recently ranked Number 4 in Risk Management by 1,000,000 Podcasts.

The PFBCon Podcast: Legal Must-Knows for Business Podcasters: Protect Your Brand, Content & Reputation with Gordon Firemark

Entertainment and media attorney Gordon Firemark (“the podcast lawyer”) delivers a session on essential legal principles for podcasters using shows as a business or within a business.

Gordon explains that publishing a podcast makes you responsible in the same way a professional media company is, and he outlines the key areas for managing risk and building long-term value:

  • Forming a legal entity to separate personal and business liability.
  • Documenting ownership with written agreements (including “work made for hire” language) for co-hosts, contractors, and contributors.
  • Using a “podcast prenup” to define control, revenue, expenses, and exit scenarios.
  • Protecting intellectual property through copyright registration and trademark selection and registration (including searching the USPTO and avoiding generic titles), illustrated by a case where waiting to file caused years of trademark conflict.
  • Avoiding copyright problems by licensing or using royalty-free content and not relying on fair use as a “get out of jail free” claim.
  • Requiring guest release agreements, including clauses covering editing, repurposing, and AI use, to prevent takedown demands and disputes.
  • Structuring sponsorship and brand deals with clear payment terms and deliverables.
  • Complying with FTC disclosure rules for endorsements, affiliate relationships, gifts, and paid interviews.
  • Reducing defamation and privacy risk through fact-checking, respecting NDAs, and using disclaimers for legal, health, or financial advice.

He closes with resources and where to find him online, including his sites and podcaster community.

Key highlights:

  • Why Podcasters Need Legal Thinking
  • Gordon’s Podcasting Origin Story
  • The Three Pillars of Protection
  • Entities and Ownership Basics
  • Co-Hosts and Podcast Prenup
  • Copyright and Fair Use Myths
  • Trademarks and Naming Your Show
  • Guest Releases and Control
  • Sponsorships and FTC Disclosures
  • Defamation, Privacy, and Disclaimers 

Resources:

Follow Gordon Firemark on:

Entertainment Law Offices of Gordon P. Firemark

LinkedIn

Website

YouTube

Facebook

Instagram

COSO Meets GenAI: The Internal Controls Playbook for Compliance

If you are a compliance professional looking at your company’s GenAI rollout and wondering when the grown-ups will finally arrive, I have good news. They just did.

COSO has now stepped directly into the GenAI conversation with its new paper, Achieving Effective Internal Control Over Generative AI, and that matters a great deal. For those of us in compliance, internal audit, risk, and governance, COSO is not a shiny new acronym trying to catch the latest tech train. COSO is the train schedule. It is the framework that boards, auditors, controllers, and compliance professionals already understand. And with this publication, COSO has done something very important: it has translated GenAI risk into the language of internal control. That is exactly what the market needed.

Because up until now, too much of the GenAI discussion has lived in one of two places. Either it sat in the innovation lab with people talking breathlessly about transformation, or it sat in the legal department where everyone worried, quite correctly, about hallucinations, privacy, and bias. What has often been missing is the operational bridge between aspiration and assurance. COSO gives us that bridge. It says, in effect, GenAI is not outside your control environment. It is now part of it. And if it is part of it, then it must be governed, tested, monitored, and documented like any other significant business capability.

GenAI Does Not Change the Need for Control. It Changes the Terrain

One of the most important points in the COSO paper is that GenAI does not upend the COSO Internal Control-Integrated Framework. Rather, it changes the environment in which those controls operate. The five familiar COSO components remain the same: control environment, risk assessment, control activities, information and communication, and monitoring activities. What changes is the nature of the underlying risk. GenAI introduces probabilistic outputs, model drift, prompt injection, opaque reasoning, rapid configuration changes, and the adoption of shadow AI outside normal approval channels. That is a very useful framing for compliance officers.

It means we should stop treating AI governance as some exotic side project. If GenAI is used in operations, legal, finance, HR, procurement, investigations, or reporting, it belongs within your existing governance architecture. You do not need to invent a new religion. You need to apply the old disciplines to a new set of facts.

This is where compliance can and should lead. We understand what it means to build controls around fast-moving risk. We understand escalation, role clarity, training, monitoring, and accountability. COSO is effectively telling compliance professionals, “You already know more about governing GenAI than you think. Now apply that muscle memory with precision.”

A Capability-First Approach Is a Game Changer

The most practically useful innovation in the COSO guidance is its capability-first taxonomy. Rather than organizing AI controls by vendor, product name, or technical buzzwords, COSO focuses on what the GenAI system actually does. It identifies eight capability types: data extraction and ingestion; data transformation and integration; automated transaction processing and reconciliation; workflow orchestration; judgment, forecasting, and insight generation; AI-powered monitoring and continuous review; knowledge retrieval and summarization; and human-AI collaboration. That is enormously helpful because it is how compliance people actually work.

We do not manage risk by admiring the label on the software box. We manage risk by understanding what a tool does in a process, what can go wrong, how fast it can go wrong, and how the error propagates downstream. A GenAI tool that summarizes policies creates one set of risks. A GenAI agent that routes approvals, posts transactions, or helps shape regulatory disclosures creates another. COSO provides organizations with a common language for distinguishing among use cases and calibrating controls accordingly. That is not just elegant. It is actionable.
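
To show how a capability-first taxonomy might translate into practice, here is a minimal sketch in Python. The eight capability types come from the COSO paper; the high-impact grouping and the approval rule are my own illustrative assumptions, not COSO’s:

```python
from enum import Enum, auto

class Capability(Enum):
    """The eight GenAI capability types identified in the COSO paper."""
    DATA_EXTRACTION = auto()         # data extraction and ingestion
    DATA_TRANSFORMATION = auto()     # data transformation and integration
    TRANSACTION_PROCESSING = auto()  # automated transaction processing and reconciliation
    WORKFLOW_ORCHESTRATION = auto()  # workflow orchestration
    JUDGMENT_INSIGHT = auto()        # judgment, forecasting, and insight generation
    AI_MONITORING = auto()           # AI-powered monitoring and continuous review
    KNOWLEDGE_RETRIEVAL = auto()     # knowledge retrieval and summarization
    HUMAN_AI_COLLABORATION = auto()  # human-AI collaboration

# Illustrative calibration only: tools that act (post, route, decide)
# warrant heavier controls than tools that merely summarize.
HIGH_IMPACT = {
    Capability.TRANSACTION_PROCESSING,
    Capability.WORKFLOW_ORCHESTRATION,
    Capability.JUDGMENT_INSIGHT,
}

def requires_human_approval(capability: Capability) -> bool:
    """A simple first-pass rule for calibrating controls by what the tool does."""
    return capability in HIGH_IMPACT
```

The design choice is the point: control intensity keys off the capability, not the product name on the license agreement.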

The Five Foundational Truths Every CCO Should Memorize

COSO also offers five foundational characteristics for GenAI internal control, and each should be printed and posted on the wall of every compliance office.

First, GenAI is probabilistic, not deterministic. In plain English, it can sound authoritative and still be wrong. Therefore, outputs must be treated as claims requiring validation, not facts to be accepted by default. Second, GenAI is dynamic. Models, prompts, and retrieval data evolve quickly, so controls and monitoring must keep pace. Third, GenAI is easily scalable, meaning it can scale both productivity and error rates. Fourth, it has a low barrier to entry, which is why shadow AI is such a real problem. Fifth, and perhaps most interestingly, GenAI can help govern GenAI through pattern detection, validation, and monitoring.

There is a lot packed into those five points. For compliance, the biggest takeaway is this: static governance will fail in a dynamic AI environment. Annual reviews will not cut it. A once-a-year policy refresh will not cut it. A single training session on acceptable use will not cut it. GenAI governance has to be living governance.

What COSO Says About the Control Environment

COSO starts where it should: tone, structure, and accountability. The paper says organizations need a GenAI acceptable use policy, clear ethical boundaries, oversight and accountability responsibilities, named owners for each AI tool or platform, role-based training, and accountability mechanisms tied not only to adoption but also to safety, compliance, and performance. Boards and cross-functional oversight groups need visibility into adoption, incidents, changes, and risk indicators.

That is a direct message to compliance leaders. If nobody owns the prompts, the retrieval connectors, the model configurations, the escalation path, or the approval structure, then nobody owns the risk. And in a regulatory environment moving steadily toward AI accountability, “nobody owned it” is not a defense. It is an indictment.

I particularly liked COSO’s emphasis that prompts, system prompts, and retrieval connectors should be treated as governed configurations. That is exactly right. Too many companies still treat prompting like an informal user habit rather than a control-relevant configuration choice. In a high-impact context, the prompt is not casual. It is part of the system.
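
What might “prompts as governed configurations” look like in practice? Here is a minimal sketch, assuming a Python codebase; every name and field is a hypothetical illustration rather than a construct from the COSO paper:

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class PromptConfig:
    """A system prompt managed like any other governed configuration."""
    name: str
    version: str
    text: str
    owner: str             # a named owner, per COSO's accountability emphasis
    approved_by: str       # documented approval for this configuration
    approved_at: datetime

    @property
    def fingerprint(self) -> str:
        # A content hash makes silent prompt edits detectable in audit.
        return hashlib.sha256(self.text.encode()).hexdigest()[:12]

def assert_unchanged(config: PromptConfig, approved_fingerprint: str) -> None:
    """Fail closed if the deployed prompt no longer matches what was approved."""
    if config.fingerprint != approved_fingerprint:
        raise RuntimeError(
            f"Prompt {config.name} v{config.version} has drifted from its approved version"
        )
```

Versioning, named ownership, documented approval, and drift detection: the same disciplines we already apply to any other control-relevant configuration.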

Risk Assessment Must Get More Dynamic

COSO’s discussion of risk assessment is equally strong. It calls for use cases to have clearly defined objectives, acceptable and unacceptable boundaries, and success criteria. It also warns that organizations must first ask whether GenAI is even the right tool for the task. In some cases, traditional automation or deterministic systems may be safer and more reliable. The risk assessment should account for hallucinations, drift, provenance gaps, prompt injection, bias, third-party dependencies, and significant changes such as vendor updates, connector changes, or evolving regulations.

This is where compliance earns its keep. We are the ones who should be asking: What if the model changes quietly? What if the source data becomes stale? What if the retrieval layer excludes a critical policy update? What if the system routes something to the wrong approver? What if the tool is used in a context where a simpler and safer solution would do the job better?

COSO is right to emphasize scenario analysis and living risk registers. In the GenAI era, risk registers that only update annually are museum pieces.

Human-in-the-Loop Is Not Optional

When COSO turns to control activities, it gets very practical. It says GenAI outputs should be subject to human corroboration proportionate to risk, and in high-impact business, legal, or regulatory contexts, AI assistance should be segregated from authoritative decision-making. The paper also calls for version control, audit trails, access restrictions, change management, source citation requirements, segregation of duties, confidence thresholds, and documented approvals for configuration changes. That is the heart of responsible AI governance.
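
To make “human corroboration proportionate to risk” concrete, here is a minimal sketch that assumes the AI system emits a confidence score. The thresholds, routing tiers, and names are illustrative assumptions, not prescriptions from the paper:

```python
from enum import Enum

class Route(Enum):
    AUTO_ACCEPT = "auto_accept"
    HUMAN_REVIEW = "human_review"
    BLOCK = "block"

def route_output(confidence: float, high_impact: bool,
                 review_threshold: float = 0.90, floor: float = 0.50) -> Route:
    """Route a GenAI output based on impact and model confidence."""
    if confidence < floor:
        return Route.BLOCK         # too unreliable even to present for review
    if high_impact:
        return Route.HUMAN_REVIEW  # AI assists; a human makes the decision
    if confidence >= review_threshold:
        return Route.AUTO_ACCEPT   # low-impact, high-confidence output
    return Route.HUMAN_REVIEW      # everything else gets a second look
```

Under a scheme like this, a high-impact output never auto-executes, which mirrors COSO’s call to segregate AI assistance from authoritative decision-making.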

I was also struck by COSO’s discussion of reliance in an ICFR (internal control over financial reporting) setting. The paper draws an important distinction between situations in which management relies on AI output as evidence of control effectiveness and situations in which a human independently re-performs the work. When true reliance exists, the evidentiary expectations rise: documented prompts, model versions, sampling rationale, exception resolution, and retained evidence.

Even beyond financial reporting, that concept is vital for compliance. The moment your team starts relying on GenAI output for sanctions reviews, due diligence summaries, monitoring alerts, investigative chronology, or policy interpretation, you have to ask a simple question: What is our evidence that this output was reliable enough to trust?

Monitoring Is Where the Real Work Begins

COSO’s final major lesson is that monitoring GenAI is not a one-and-done exercise. Organizations need continuous metrics and periodic deep reviews. They need to track precision, recall, exception volumes, latency, fairness, drift, and outcome quality. They need retraining triggers, rollback protocols, remediation logs, and playbooks for common AI control failures. COSO also makes the excellent point that in probabilistic systems, control failure may no longer be a simple pass-fail matter. Organizations may need multi-metric tolerance ranges across dimensions such as accuracy, bias, leakage, explainability, and change velocity.
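
COSO’s multi-metric framing lends itself to a simple illustration. The sketch below assumes hypothetical tolerance ranges; the metric names and bounds are placeholders I chose for the example, not values from the paper:

```python
# Hypothetical tolerance ranges: a control "passes" only when every
# tracked dimension stays within its range.
TOLERANCES = {
    "precision":   (0.92, 1.00),
    "recall":      (0.85, 1.00),
    "drift_score": (0.00, 0.10),
    "bias_gap":    (0.00, 0.05),
}

def evaluate(metrics: dict[str, float]) -> list[str]:
    """Return the dimensions that breached tolerance; an empty list means pass."""
    breaches = []
    for name, (low, high) in TOLERANCES.items():
        value = metrics.get(name)
        if value is None or not (low <= value <= high):
            breaches.append(name)
    return breaches

# Example: a drift breach should trigger the retraining or rollback playbook.
print(evaluate({"precision": 0.95, "recall": 0.88,
                "drift_score": 0.14, "bias_gap": 0.02}))  # -> ['drift_score']
```

Notice that the control does not simply pass or fail; it reports which dimension drifted out of range, which is exactly what a remediation playbook needs as an input.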

That is a sophisticated and realistic view. Compliance teams should take it seriously because it reflects the world we are moving into. AI control effectiveness will not be judged solely by whether a control exists on paper. It will be judged by whether the organization can show that it monitors performance, investigates deviations, remediates failures, and adapts as the technology changes.

The Bottom Line

The real genius of the COSO GenAI framework is that it takes AI out of the abstract and puts it where it belongs: inside the machinery of governance. It turns the conversation from “Do we have an AI policy?” to “Do we have effective internal control over AI use?” That is a far better question.

For compliance officers, the action items are clear. Inventory your GenAI use cases. Classify them by capability. Identify owners. Assess risk dynamically. Put human review where the stakes justify it. Govern prompts and configurations as controlled assets. Monitor continuously. And do not let your AI strategy outrun your control environment.

Because in the end, the organizations that benefit most from GenAI will not be the ones that moved fastest with the fewest guardrails. They will be the ones that figured out how to innovate with discipline. That is not bureaucracy. That is a competitive advantage.

Daily Compliance News: March 16, 2026, The Fighting Corruption ‘Not Worth It’ Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you four compliance-related stories from the business world, compliance, ethics, risk management, leadership, or general interest to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network.

Top stories include:

  • Rapper who fought corruption set to become Nepal’s PM. (CNN)
  • EDNY says fighting the appeal of the FIFA corruption case is not worth the resources. (Reuters)
  • UBS settles long-running whistleblower case. (Reuters)
  • Judge questions DOJ’s decision to drop Halkbank AML case. (Bloomberg)