Categories: Compliance into the Weeds

Compliance into the Weeds: Duty Owed vs. Material Nonpublic Information: Prediction Markets and Compliance

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore it more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly discuss prediction markets and their implications for compliance.

Tom and Matt focus on the phrase “violation of a duty owed” by employees and note that this standard appears significantly broader than traditional insider trading law. They explain that insider trading law centers on the disclosure of material nonpublic information, whereas a “duty owed” framework emphasizes the underlying duty itself. Because “duty owed” could encompass obligations beyond material nonpublic information, they highlight the potential compliance implications and work through a related hypothetical scenario.

Resources:

Tom Fox: Instagram | Facebook | YouTube | Twitter | LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has received a Davey Award, a Communicator Award, and a W3 Award, all for podcast excellence.

Categories: Daily Compliance News

Daily Compliance News: April 8, 2026, The Fleeing Binance Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world covering compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Social engineering scams in banking. (FT)
  • Tariff fraud and accounting tricks. (NYT)
  • Compliance professionals are leaving Binance. (Bloomberg)
  • Dirty accounting jobs and AI. (WSJ)
Categories: AI Today in 5

AI Today in 5: April 8, 2026, The AI in Professional Services Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories about AI from the business world covering compliance, ethics, risk management, leadership, or general interest.

Top AI stories include:

  1. AI is increasing social engineering scams. (FT)
  2. Advancing compliance efficiency with AI. (Yahoo! Finance)
  3. AI governance really matters. (HR Brew)
  4. Privacy and AI. (BlufftonToday)
  5. AI to automate professional services. (FinTechGlobal)

For more information on the use of AI in compliance programs, check out my new book, Upping Your Game, available for purchase on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories: Blog

Board Oversight and Accountability in AI: Where Governance Begins

For boards and Chief Compliance Officers, AI governance does not begin with the model. It begins with oversight, accountability, and the discipline to define who owns risk, who makes decisions, and who answers when something goes wrong. If AI is changing how companies operate, then board governance and compliance leadership must change as well.

In the first article in this series, I laid out the five significant corporate governance challenges around artificial intelligence: board oversight and accountability, strategy outrunning governance, data governance and model integrity, ongoing monitoring, and culture and speak-up. In Part 2, I turn to the first and most foundational issue: board oversight and accountability.

This is where every AI governance program either starts with rigor or begins with ambiguity. And ambiguity, in governance, is rarely neutral. It is usually the breeding ground for failure.

There is a tendency in some organizations to treat AI oversight as a natural extension of technology oversight. That is too narrow. AI touches legal exposure, regulatory risk, data governance, privacy, discrimination concerns, intellectual property, operational resilience, internal controls, and corporate culture. That makes AI a board-level and CCO-level issue, not just a CIO issue.

The central governance question is straightforward: who is responsible for AI risk, and how is that responsibility exercised in practice? If the board cannot answer that question, if management cannot explain it, and if the compliance function is not part of the answer, then the company does not yet have credible AI governance.

Why Board Oversight Matters Now

Boards have always been expected to oversee enterprise risk. What has changed with AI is the speed, scale, and opacity of the risks involved. A business process can be altered quickly by a generative AI tool. A model can influence customer interactions, internal decisions, and external communications at scale. Employees can adopt AI capabilities before governance structures are fully formed. Vendors can embed AI inside products and services without management fully understanding the downstream implications. That is why AI cannot be governed informally. It requires deliberate oversight.

The board does not need to manage models line by line. That is not its role. But the board must ensure that management has established a governance structure capable of identifying AI use cases, classifying risk, escalating significant issues, testing controls, and reporting failures. Just as important, the board must know who inside management is accountable for making that system work.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) offers a very practical lens. The ECCP asks whether a compliance program is well designed, adequately resourced, empowered to function effectively, and tested in practice. Those four questions are equally powerful in evaluating AI governance. Is the governance structure well designed? Is it resourced? Is the compliance function empowered in AI decision-making? Is the program working in practice? If the answer to any of those questions is uncertain, the board should treat that uncertainty as a governance gap.

Accountability Begins with Ownership

One of the oldest problems in corporate governance is fragmented responsibility. AI only intensifies that risk. Consider the typical organizational landscape. IT may own the infrastructure. Legal may review contracts and liability. Privacy may address data use. Security may focus on cyber threats. Risk may handle enterprise frameworks. Compliance may address policy, controls, investigations, and reporting. Business leaders may champion the use case. Internal audit may come in later for assurance. The board, meanwhile, receives updates from multiple directions.

Without a clearly defined operating model, this becomes a classic accountability fog. Everyone has a slice of the issue, but no one owns the whole risk. A more disciplined approach requires naming an accountable executive owner for enterprise AI governance; in some companies, that may be the Chief Risk Officer. In others, it may be a Chief Legal Officer, Chief Compliance Officer, or a designated senior executive with cross-functional authority. The title matters less than the clarity. The organization must know who convenes the process, who resolves conflicts, who signs off on high-risk use cases, and who reports upward to the board.

For the CCO, this does not mean taking sole ownership of AI. That would be unrealistic and unwise. But it does mean insisting that compliance has a defined role in the governance architecture. AI raises issues of policy adherence, training, escalation, investigations, third-party risk, disciplinary consistency, and remediation. Those are core compliance issues. A governance model that sidelines the CCO is not merely incomplete; it is unstable.

The Right Committee Structure

Once ownership is established, the next question is structural: where does AI governance live? The answer should be enterprise-wide, but with a defined committee architecture. Companies need at least two governance layers.

The first is a management-level AI governance committee or council. This should be a cross-functional working body with representation from compliance, legal, privacy, security, technology, risk, internal audit, and relevant business units, as appropriate. Its purpose is operational governance. It reviews proposed use cases, classifies risk levels, evaluates controls, addresses issues, and determines escalation.

The second is a board-level oversight mechanism. This does not always require a new standing AI committee. In some organizations, oversight may sit with the audit committee, risk committee, technology committee, or full board, depending on the company’s structure and maturity. What matters is not the name of the committee. What matters is that there is an identified board body with responsibility for overseeing AI governance and receiving regular reporting.

This is consistent with the NIST AI Risk Management Framework, which begins with the “Govern” function. NIST recognizes that governance is not an afterthought; it is the foundation that enables the rest of the risk management lifecycle. ISO/IEC 42001 similarly reinforces that AI governance must be embedded in a management system with defined roles, controls, review mechanisms, and continuous improvement. Both frameworks point in the same direction: AI governance requires structure, not aspiration.

Reporting Lines That Actually Work

Good governance lives or dies by reporting lines. If information cannot move efficiently upward, then oversight will be stale, filtered, or incomplete. Boards should require periodic reporting on several core areas: the current AI inventory, high-risk use cases, incident trends, control exceptions, third-party AI dependencies, regulatory developments, and remediation status. The board does not need a data dump. It needs decision-useful reporting.

That means management should create a formal reporting cadence. Quarterly reporting is sufficient for many organizations, but high-risk environments require more frequent updates. The reporting should identify not only what has been approved, but what has changed. That includes scope changes, incidents, near misses, new vendors, policy exceptions, and any material concerns raised by employees, customers, or regulators.
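To make “decision-useful” concrete, here is a minimal sketch, in Python, of what a single reporting-period record for the board might capture. The field names are illustrative assumptions drawn from the categories above, not a prescribed format:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class QuarterlyAIReport:
    """One reporting-period snapshot for the board (illustrative fields only)."""
    period: str                           # e.g., "2026-Q2"
    inventory_size: int                   # total AI use cases currently tracked
    high_risk_use_cases: List[str]        # IDs of high-risk deployments
    new_approvals: List[str]              # what was approved this period
    scope_changes: List[str]              # approved uses whose scope changed
    incidents_and_near_misses: List[str]  # what went wrong, or nearly did
    control_exceptions: List[str]         # where controls were waived or failed
    third_party_dependencies: List[str]   # vendors embedding AI in their services
    regulatory_developments: List[str]    # new rules or guidance to watch
    remediation_status: List[str] = field(default_factory=list)  # open fixes
```

However the record is formatted, the point is the same: the board should see changes and exceptions, not just a list of approvals.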

The CCO should be part of the reporting chain, not a bystander. A balanced governance model allows compliance to elevate concerns independently if necessary, particularly when a business leader is pushing to move faster than controls will support. That is not an obstruction. That is governance doing its job.

Escalation Protocols: The Missing Middle

Many companies have approval procedures, but far fewer have robust escalation protocols. That is a mistake. Governance does not fail only when there is no structure; it also fails when there is no clear path for handling edge cases, incidents, or disagreements.

An effective AI governance program should specify escalation triggers. For example, a use case should be escalated when it affects employment decisions, consumer rights, regulated communications, financial reporting, sensitive personal data, or legally significant outcomes. Escalation should also occur when there is evidence of model drift, hallucinations in a material context, unexplained bias, control failure, a third-party vendor issue, or a credible employee concern.

These triggers should not live in someone’s head. They should be documented in policy, operating procedures, or a risk classification matrix. There should also be a defined process for who gets notified, what interim controls are applied, whether deployment pauses are available, and how issues are documented for follow-up.
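As a purely illustrative sketch, those triggers could even be encoded as a machine-checkable rule set so that escalation is applied consistently rather than remembered. The trigger names below mirror the examples in this article; they are assumptions for illustration, not a regulatory standard:

```python
# Illustrative escalation triggers, mirroring the examples in this article.
ESCALATION_TRIGGERS = {
    "affects_employment_decisions",
    "affects_consumer_rights",
    "regulated_communications",
    "financial_reporting_impact",
    "sensitive_personal_data",
    "legally_significant_outcome",
    "model_drift_detected",
    "material_hallucination",
    "unexplained_bias",
    "control_failure",
    "third_party_vendor_issue",
    "credible_employee_concern",
}

def tripped_triggers(use_case_flags: set) -> list:
    """Return the documented triggers a use case has tripped, if any."""
    return sorted(use_case_flags & ESCALATION_TRIGGERS)

# Example: a customer-facing tool touching consumer rights with observed drift
hits = tripped_triggers({"affects_consumer_rights", "model_drift_detected"})
if hits:
    print(f"Escalate to the AI governance committee; triggers: {hits}")
```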

This is another place where the ECCP remains highly relevant. DOJ prosecutors routinely ask whether issues are escalated appropriately, whether investigations are timely, and whether lessons learned are incorporated into the program. AI governance should be built with the same operational seriousness. If an issue arises, the company should not be improvising its governance response in real time.

Documentation Is Evidence of Governance

One of the great compliance truths is that governance without documentation is hard to prove and harder to sustain. For AI governance, documentation should include at least these categories: use case inventories, risk classifications, approval memos, committee minutes, control requirements, incident logs, training records, validation summaries, escalation decisions, and remediation actions. This is not paperwork for its own sake. It is the evidentiary trail that shows the organization is governing AI thoughtfully and consistently.
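One hypothetical way to tie those categories together is a single inventory record per use case, so that approvals, controls, incidents, and remediation all live in one place. The structure below is a sketch with assumed field names, not a mandated schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIUseCaseRecord:
    """One entry in the AI use-case inventory (fields are illustrative)."""
    use_case_id: str
    description: str
    risk_classification: str         # e.g., "low", "medium", "high"
    approval_memo: str               # reference to the approval decision
    control_requirements: List[str]  # controls the approval was conditioned on
    committee_minutes: List[str]     # governance meetings that reviewed it
    incident_log: List[str] = field(default_factory=list)
    training_records: List[str] = field(default_factory=list)
    validation_summaries: List[str] = field(default_factory=list)
    escalation_decisions: List[str] = field(default_factory=list)
    remediation_actions: List[str] = field(default_factory=list)
```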

Boards should care about this because documentation is what allows oversight to be more than anecdotal. It is also what allows internal audit, regulators, and investigators to assess whether the governance program is functioning.

For the CCO, documentation is particularly important because it connects AI oversight to the larger compliance architecture. It helps align AI governance with policy management, training, investigations, speak-up systems, third-party due diligence, and corrective action tracking. In other words, it turns AI governance from a loose collection of meetings into a defensible management process.

Board Practice and CCO Practice Must Meet in the Middle

The best AI governance models do not pit the board and the compliance function against innovation. They create a structure that allows innovation to move, but only within defined guardrails. Boards should ask sharper questions. Who owns AI governance? What committee reviews high-risk use cases? What issues must be escalated? What reporting do we receive? How are incidents tracked and remediated? What role does compliance play?

CCOs should be equally direct. Where does compliance sit in the approval process? How do employees report AI concerns? What documentation is required? When can compliance elevate an issue on its own? How are lessons learned being fed back into policy and training?

This is the practical heart of the matter. Oversight is not a slogan. Accountability is not a press release. Both must be built into reporting lines, committee design, escalation protocols, and documentation discipline.

AI governance begins here because every other issue in this series depends on it. If oversight is weak and accountability is blurred, strategy will outrun governance, data issues will go unnoticed, monitoring will become inconsistent, and culture will not carry the load. But if the board and CCO get this first issue right, they create the governance spine that the rest of the program can rely on.

Join us tomorrow, when we review the role of data governance in AI governance, because that is where every effective AI governance program either starts strong or starts to fail.