Categories
Blog

Ongoing Monitoring: Why AI Governance Begins After Launch

In this blog post, we turn to the fourth major governance challenge in AI: ongoing monitoring. This is one of the most persistent weaknesses in AI governance. Organizations may build an intake process. They may create an approval committee. They may conduct risk reviews, privacy assessments, and validation testing before launch. All of that is important. But it is not enough.

AI risk does not freeze at the moment of approval. It changes over time. Use cases evolve. Employees adapt tools in unexpected ways. Vendors modify models. Controls weaken in practice. Regulatory expectations shift. What looked reasonable at launch may become inadequate six weeks later.

That is why ongoing monitoring is not an optional enhancement to AI governance. It is a core governance requirement. For boards and CCOs, the central question is not simply whether the company approved AI responsibly. It is whether the company has the discipline to govern it continuously once it is in the wild.

Approval Is Not Governance

One of the great temptations in AI governance is to confuse approval with control. A business unit proposes a use case, a committee reviews it, guardrails are listed, and the tool goes live. At that point, many organizations behave as though the governance work is largely complete. It is not.

Approval is a moment. Governance is a process. The problem is that companies often put their best people, clearest thinking, and highest scrutiny into the approval stage, then shift immediately into operational mode without building the same discipline around post-launch oversight. That leaves management blind to how the system actually performs under real-world conditions.

The Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) is especially instructive here. The ECCP does not ask merely whether a company has policies on paper. It asks whether the program works in practice, whether controls are tested, whether issues are investigated, and whether lessons learned are incorporated back into the compliance framework. AI governance should be viewed through the same lens. The question is not whether a control was described at launch. The question is whether that control continues to function and whether management would know if it stopped.

Why AI Risks Change After Launch

Post-deployment risk in AI does not arise because management failed to care on Implementation Day. It arises because AI systems operate in dynamic environments. A model may begin to drift as conditions change. A tool approved for one limited purpose may gradually be used for broader or higher-risk decisions. Employees may find workarounds that bypass the intended controls. Human reviewers may begin by scrutinizing outputs closely but, over time, may become overconfident, overloaded, or simply too reliant on the system. Vendors may update underlying functionality without the company fully appreciating the consequences. New regulations or regulatory interpretations may alter the risk landscape. Inputs may change. Outputs may become less reliable. Bias may surface in ways not identified in initial testing.

In other words, AI governance risk is not static. It is operational. That is why boards and CCOs must resist the notion that initial approval is the hardest part. In many respects, ongoing monitoring is harder because it requires sustained attention, clear metrics, escalation discipline, and the willingness to revisit prior assumptions.

The Governance Question

After implementation, the governance question changes. It is no longer simply, “Was this use case approved?” It becomes, “Is the use case still operating as expected, within risk tolerance, and under effective control?” That sounds simple, but it requires a much more mature oversight model than many companies currently have. It requires management to define what should be monitored, how frequently, by whom, and what changes or anomalies trigger escalation. It requires a reporting structure that does not simply celebrate adoption or efficiency gains, but surfaces incidents, deviations, near misses, and control fatigue.

For the board, the challenge is to insist on post-launch visibility. Board reporting on AI should not end with inventories and implementation updates. It should include information about ongoing performance, exception trends, complaints, incidents, validation results, vendor changes, policy breaches, and remediation efforts. A board that hears only that AI adoption is accelerating may never learn whether AI governance is actually working.

For the CCO, the challenge is even more immediate. Compliance must ask whether the organization is gathering evidence that controls continue to function in practice. If it is not, then the governance program is still immature, no matter how polished its approval process may appear.

Monitoring What Matters

It all begins by identifying the right things to monitor. This cannot be a generic exercise. Monitoring should be tied to the specific use case, its risk classification, and its control environment. But there are some recurring categories that boards and CCOs should expect to see.

  1. Performance should be monitored. Is the tool still delivering outputs that are accurate, reliable, and appropriate for the intended purpose? Have error rates changed? Are there signs of drift or degraded quality?
  2. Control effectiveness should be monitored. Are human review requirements actually being followed? Are approval restrictions, access controls, or usage limitations still operating as designed? Is there evidence that employees are bypassing or weakening controls?
  3. Incidents and complaints should be monitored. Has the tool produced problematic results? Have customers, employees, or managers raised concerns? Have there been internal reports about bias, inaccuracy, misuse, or confidentiality risks?
  4. Changes in scope should be monitored. Is the tool still being used for the original purpose, or has it drifted into new contexts? Scope creep is one of the oldest compliance problems in business, and AI is no exception.
  5. External change should be monitored. Has a vendor updated the model? Have relevant laws, guidance, or industry expectations changed? Has a new regulatory concern emerged that requires reevaluation?

This is where the NIST AI Risk Management Framework is especially useful. NIST emphasizes that organizations must govern, measure, and manage AI risk over time, not simply identify it once. ISO/IEC 42001 reaches the same conclusion from a management systems perspective by requiring continual improvement, internal review, and adaptive controls. Both frameworks point to the same truth: effective AI governance is iterative, not episodic.

The CCO’s Role in Governance

For compliance professionals, ongoing monitoring is where the AI governance conversation becomes most familiar. This is where the CCO brings real institutional value. Compliance understands that controls weaken over time. Training decays. Workarounds emerge. Policies lose operational traction. Reporting channels capture issues others do not see. Root cause analysis matters. Corrective action must be tracked to closure. These are not new lessons. They are the daily work of compliance. AI gives them a new domain.

The CCO should insist that AI use cases have documented post-launch monitoring plans. These should identify the responsible owner, the metrics to be reviewed, the review frequency, the escalation triggers, and the process for documenting findings and remediation. High-risk use cases should not be left to passive observation. They should be actively governed.
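One way to make such a plan concrete is as a structured record that names the owner, metrics, cadence, and escalation triggers explicitly, so that monitoring leaves an evidentiary trail rather than relying on memory. The Python sketch below is purely illustrative: every field name, metric, and threshold is a hypothetical example, not a prescribed standard.

```python
from dataclasses import dataclass, field

@dataclass
class MonitoringPlan:
    """Illustrative post-launch monitoring plan for one AI use case.

    All fields are hypothetical examples, not a prescribed standard.
    """
    use_case: str
    risk_tier: str                   # e.g. "high", "medium", "low"
    owner: str                       # an accountable individual, not a team alias
    metrics: list[str]               # what is reviewed each cycle
    review_frequency_days: int       # how often the owner must review
    escalation_triggers: list[str]   # conditions that force escalation
    findings_log: list[str] = field(default_factory=list)

    def record_finding(self, finding: str) -> None:
        """Document a finding so monitoring produces reviewable evidence."""
        self.findings_log.append(finding)

# Example: a high-risk use case with defined triggers rather than passive observation
plan = MonitoringPlan(
    use_case="Customer-complaint triage assistant",
    risk_tier="high",
    owner="Jane Doe, Operations",
    metrics=["error rate", "human-override rate", "scope-creep flags"],
    review_frequency_days=30,
    escalation_triggers=["error rate above 5%", "any confidentiality incident"],
)
plan.record_finding("2026-04: override rate rose; retraining scheduled")
```

The point of the structure is not the code itself but the discipline it encodes: an empty owner field or an undefined trigger is immediately visible, which is exactly the gap passive observation tends to hide.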

The CCO should also ensure that AI monitoring is connected to the broader compliance ecosystem. Employee concerns raised through speak-up channels may reveal issues with the model. Internal investigations may expose misuse. Third-party due diligence may uncover changes to vendors. Training gaps may explain repeated incidents. AI governance should not be isolated from these functions. It should be integrated with them.

This is also where the CCO can most effectively help the board. Rather than presenting AI as a series of isolated technical matters, the CCO can frame post-launch governance in familiar compliance terms: monitoring, testing, escalation, remediation, and lessons learned.

Board Practice: Ask for More Than Adoption Metrics

One of the most important disciplines boards can develop is to stop mistaking usage information for governance information.

Management may report that AI adoption is growing, that productivity gains are material, or that pilot programs are expanding. Those data points may be relevant, but they are not a form of governance assurance. A board should want to know whether controls are operating, whether incidents are increasing, whether certain business units generate more exceptions, whether human review remains meaningful, and whether management has paused or modified any use cases based on real-world experience.

This is where board oversight becomes genuinely valuable. When the board asks for evidence of ongoing monitoring, it changes management behavior. It signals that AI success will not be measured solely by speed or efficiency, but also by discipline and resilience.

Boards should also ensure that high-risk use cases receive enhanced visibility. Not every AI tool merits the same level of board attention. But where AI affects regulated interactions, employment decisions, sensitive data, financial reporting, significant customer outcomes, or reputationally sensitive functions, ongoing board-level reporting should be expected.

Escalation and Remediation Must Be Built In

Monitoring matters only if it leads to action. There must be clear escalation and remediation protocols. When a material issue emerges, who gets notified? Can the use case be paused? Who determines whether the problem is technical, operational, legal, or cultural? How are facts gathered? How are corrective actions assigned? When is the board informed? How is the lesson fed back into policy, training, vendor management, or approval standards?

These processes should not be improvised. They should be documented. The organization should know in advance which incidents require escalation, which temporary controls may be imposed, and how remediation is tracked.
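The discipline described above, knowing in advance which incidents escalate where, can be expressed as an explicit routing table rather than improvised judgment. The sketch below is illustrative only: the severity labels, recipients, and pause actions are hypothetical examples, not a prescribed standard.

```python
# Illustrative escalation routing. Severity labels, recipients, and actions
# are hypothetical examples, not a prescribed standard.
ESCALATION_ROUTES = {
    "low":      ["use-case owner"],
    "medium":   ["use-case owner", "compliance"],
    "high":     ["compliance", "CCO", "pause use case pending review"],
    "critical": ["CCO", "legal", "board notification", "pause use case"],
}

def route_incident(severity: str) -> list[str]:
    """Return the pre-documented escalation path; unclassified incidents fail loudly."""
    if severity not in ESCALATION_ROUTES:
        raise ValueError(f"Unclassified severity '{severity}': classify before acting")
    return ESCALATION_ROUTES[severity]
```

The design choice worth noting is that an unclassified incident raises an error instead of defaulting to the lowest tier: a protocol that silently downgrades unknown problems is one of the ways escalation discipline quietly erodes.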

This is another place where the ECCP provides a useful governance model. DOJ expects companies not only to identify misconduct but also to investigate it, understand its root causes, and implement improvements that reduce the risk of recurrence. AI governance should work the same way. If a model fails or a control weakens, management should not merely fix the immediate problem. It should ask what the failure reveals about the program itself.

Documentation Is the Proof

As with every other element of effective governance, documentation is what turns intention into evidence. Post-launch AI governance should generate records that demonstrate monitoring occurred, issues were surfaced, escalations were handled, and remediation was completed. That may include performance reviews, validation updates, incident logs, committee minutes, complaint summaries, control testing records, vendor change notices, and corrective action trackers.

Without such documentation, management may believe it is effectively monitoring AI, but it will struggle to prove it to internal audit, regulators, or the board. More importantly, it will struggle to learn from experience in a disciplined way. A company that documents ongoing monitoring creates institutional memory. It can compare use cases, detect patterns, and refine its oversight model over time. That is how governance matures.

AI Governance Starts After Launch

The hardest truth in AI governance may be this: launching the tool is often the easiest part. The real challenge begins afterward. That is when optimism meets operational reality. That is when human reviewers become tired. That is when vendors update products. That is when regulators begin asking harder questions. That is when small problems become visible, or invisible, depending on whether the company has built a monitoring system capable of finding them.

For boards and CCOs, this is where governance earns its name. If the organization can monitor, escalate, remediate, and improve, then AI oversight has substance. If it cannot, then the company has not really governed AI at all. It has only approved it.

In the next and final blog post in this series, I will turn to the fifth governance challenge: culture, speak-up, and human judgment, because in many organizations, the first people to see an AI problem will not be the board, the CCO, or the governance committee. They will be the employees closest to the work.

Categories
Hill Country Hustlers

Hill Country Hustlers: Let’s Talk About It: Hill Country MHDD’s Family Partner Program and the YES Waiver

We take things in a different direction today as Zach steps back from behind the microphone to produce an episode with members of the Hill Country MHDD Center. The members are Kelsi Wilmot (Director of Community Development), Tyler Townsend (Communication Specialist), and Wanda Ferguson (Lead Family Partner).

They introduce Hill Country MHDD’s new podcast, “Let’s Talk About It,” intended to help audiences learn about staff, lived experience, and agency programs. Ferguson explains the Family Partner role, emphasizing advocacy for caregivers, collaboration with schools and juvenile justice, and skills-based supports such as the nurturing program to help families accommodate a child’s needs while maintaining structure and boundaries. She shares personal motivation connected to her son Ryan’s mental health challenges and death in 2016, and provides examples of helping families avoid juvenile detention, address safety risks, and stabilize at home. The team describes the YES Waiver as a wraparound, grant-funded service designed to keep children in their homes and reduce hospitalizations or residential placements, and notes that services are optional and Medicaid-billable.

Key highlights:

  • Why This Podcast
  • What Family Partners Do
  • Parenting Tools and Real Stories
  • YES Waiver Explained
  • New Programs and Facilities
  • Getting Enrolled

Resources:

Hill Country MHDD

Categories
Pod and Port

Pod and Port: Podcasting, Social Media and Yacht Rock – AI, Authenticity, Instagram, and Christopher Cross

In the debut episode of Pod & Port: Podcasting, Social Media and Yacht Rock, Tom Fox and Jeff Dwoskin dive into one of the biggest questions facing creators, marketers, podcasters, and business owners today: how do you use AI and social media tools effectively without losing authenticity?

Tom and Jeff discuss Instagram, creator monetization, transparency, algorithmic control, and the dangers of relying on AI for generic content. Their central message is clear: AI can be a powerful tool, but it should enhance your creativity, not replace it. The conversation offers practical insights into how creators can think about content, voice, originality, and audience trust.

Then the show shifts into Yacht Rock mode, with Jeff leading a spotlight on Christopher Cross, one of the genre’s defining voices. From “Ride Like the Wind” to “Sailing” and beyond, Tom and Jeff reflect on Cross’s impact, his remarkable success, and why his music still resonates. If you care about smarter content creation and smooth musical memories, Episode 1 has you covered.

Key takeaways:

  • AI works best when it enhances your ideas rather than replacing your creativity.
  • Authenticity still matters, and audiences can often sense when content feels overly automated.
  • Social media platforms may offer more tools, but creators still need to stay grounded in their own voice.
  • Transparency and trust remain critical for audience engagement.
  • Christopher Cross remains one of the essential artists in any Yacht Rock conversation.

Resources:

Jeff

Jeff Dwoskin on LinkedIn

Stampede Social Website

Christopher Cross on Spotify

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
GSK in China: 13 Years Later

GSK In China: 13 Years Later – Whistleblower Emails, a Sex Tape, and the Compliance Failures That Triggered a Global Bribery Probe

Thirteen years after the GSK China scandal exploded onto the global stage, its lessons remain as urgent as ever for compliance professionals and business leaders. In this podcast series, we revisit the case not simply as corporate history, but as a living cautionary tale about culture, incentives, third parties, investigations, and governance. Each episode explores what went wrong, why it went wrong, and how those failures still echo in today’s compliance and ethics landscape. Join us as we unpack the scandal and draw practical lessons for building stronger, more resilient organizations. This episode dissects how an anonymous “GSK whistleblower” email campaign—culminating in a covertly filmed sex tape of China executive Mark Reilly—triggered a wider reckoning over alleged systemic bribery in GSK’s China business.

Drawing on reporting from MailOnline, The Wall Street Journal, The Sunday Times, and Time, it outlines claims of a £320m bribery budget routed through third-party travel agencies via fake or inflated medical conferences, with allegations extending to sexual favors, and how GSK initially treated the tape as a compartmentalized security/blackmail issue. GSK hired China-based investigators, Peter Humphrey and Yu Yingzeng, to identify the source; they failed and were arrested for privacy-law violations, as Chinese police opened a formal bribery probe that led to charges against Reilly and 45 others. The fallout expanded to the UK SFO and potential U.S. FCPA exposure via GSK’s NYSE listing, framed against pervasive surveillance risks in China and the dangers of “toothless” internal investigations.

Key highlights:

  • Stranger Than Fiction
  • The Sex Tape Email
  • Whistleblower Bribery Claims
  • Hiring ChinaWhys
  • Investigators Arrested

Resources:

GSK in China: A Game Changer for Compliance on Amazon.com

GSK in China: Anti-Bribery Enforcement Goes Global on Amazon.com


Ed. Note: the voices of the hosts, Timothy and Fiona, were created by Notebook LM based upon text written by Tom Fox

Categories
Daily Compliance News

Daily Compliance News: April 9, 2026, The FCPA Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Federal judge to dismiss FCPA conviction. (National Today)
  • Smartmatic FCPA prosecution. (Lawfare Media)
  • Top 10 International ABC developments from March. (MOFO)
  • AI goes on charm offensive. (WSJ)
Categories
AI Today in 5

AI Today in 5: April 9, 2026, The Mythos Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Human in the loop as the ultimate moat. (FastCompany)
  2. AI washing in compliance. (FinTechGlobal)
  3. AI is accelerating cyber attacks. (BankInfoSecurity)
  4. AI and virtual care in eye healthcare. (UM)
  5. Is Anthropic’s Mythos dangerous? (The Economist)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics, and Investigations, on Amazon.com.

Categories
Blog

Data Governance, Privacy, and Model Integrity: The Control Foundation of AI Governance

Artificial intelligence may look like a technology story on the surface, but beneath that surface lies a governance reality every board and Chief Compliance Officer must confront. AI systems are only as sound as the data that feeds them, the controls that govern them, and the integrity of the outputs they generate. When data governance is weak, privacy obligations are poorly managed, or model integrity is assumed rather than tested, AI risk can move quickly from a technical flaw to enterprise exposure.

In the prior blog posts in this series, I examined the foundational questions of AI governance: board oversight and accountability, and the danger of strategy outrunning governance. Today, I want to turn to a third issue that sits at the core of every credible AI governance program: data governance, privacy, and model integrity.

This is where the AI conversation often moves from excitement to discipline. Companies may be eager to deploy tools, automate functions, and improve decision-making. But none of that matters if the underlying data is flawed, sensitive information is mishandled, or the model produces outputs that are unreliable, biased, or impossible to explain in context. The more powerful the technology, the more important the governance framework beneath it.

For boards and CCOs, this is not simply a technical control matter. It is a governance matter because failures in data integrity, privacy management, and model performance can have legal, regulatory, reputational, financial, and cultural consequences simultaneously.

AI Governance Begins with the Data

There is an old saying in technology: garbage in, garbage out. In the AI era, that phrase remains true, but it is no longer sufficient. In corporate governance terms, the problem is not merely bad data. It is unknown, unauthorized, untraceable, biased, stale, overexposed, or used in ways the organization never properly approved. That is why data governance is the control foundation of AI governance.

Every AI use case depends on inputs. Those inputs may include structured internal data, public information, personal data, third-party data, proprietary records, historical documents, transactional records, prompts, or user interactions. If management does not understand where that data comes from, who has rights over it, whether it is accurate, how it is classified, and whether it is appropriate for the intended purpose, then the company is not governing AI. It is merely using it.

For compliance professionals, this point should feel familiar. Data governance is not new. What is new is the speed and scale at which AI can amplify data weaknesses. A spreadsheet error may affect one report. A flawed AI input may affect thousands of interactions, recommendations, or decisions before anyone notices.

Why Boards Should Care About Data Lineage

Boards do not need to become technical experts in model training or data architecture. But they do need to ask whether management understands the provenance and reliability of the information flowing into critical AI systems.

At a governance level, this is a question of data lineage. Can the company trace the source of the data, how it was curated, whether it was changed, and whether it was approved for the intended use? If a customer, regulator, employee, or auditor asks why the system reached a particular result, can management explain not only the output, but the data conditions that shaped it?
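The lineage questions above can be captured in a simple registry that a reviewer, auditor, or regulator could consult. The sketch below is a hypothetical illustration: the dataset names, fields, and approval values are invented for the example, not a prescribed schema.

```python
# Illustrative data-lineage record for one AI input source.
# All names and values are hypothetical examples, not a prescribed schema.
lineage_record = {
    "dataset": "customer_interactions_2025",
    "source": "internal CRM export",
    "curated_by": "Data Governance Team",
    "transformations": ["PII fields masked", "records before 2020 dropped"],
    "approved_uses": ["complaint-triage model"],
    "approved_on": "2025-11-02",
}

def explain_output(dataset_registry: dict, dataset: str, requested_use: str) -> str:
    """Answer the governance question: was this data approved for this use?"""
    record = dataset_registry.get(dataset)
    if record is None:
        return f"No lineage record for '{dataset}': the data is ungoverned."
    if requested_use not in record["approved_uses"]:
        return f"'{dataset}' was never approved for '{requested_use}'."
    return f"'{dataset}' from {record['source']}, approved {record['approved_on']}."
```

The value of a registry like this is that the uncomfortable answers surface explicitly: data with no record, or data pressed into a use it was never approved for, is flagged instead of silently flowing into the model.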

A board that does not ask these questions risks receiving polished dashboards and impressive demonstrations while missing the underlying weaknesses. AI systems can sound authoritative even when they are wrong. That is part of what makes governance here so essential. Confidence is not the same as integrity.

This is also where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) offers a helpful mindset. The ECCP pushes companies to think in terms of operational reality. Do policies work in practice? Are controls tested? Is the company learning from what goes wrong? The same discipline applies here. A company should not assume its data environment is fit for AI simply because it has data available. It should test, verify, document, and challenge that assumption.

Privacy Is Not an Adjacent Issue

Too many organizations still treat privacy as adjacent to AI governance rather than central to it. That is a mistake. AI systems often rely on data sets that include personal information, employee information, customer records, usage patterns, communications, or behavior-based inputs. Even when a company believes it has de-identified or anonymized data, there may still be re-identification risks, overcollection concerns, retention issues, or use limitations tied to law, contract, or internal policy.

For the board and the CCO, privacy should not be discussed as a compliance side note. It should be part of the approval and governance architecture from the outset. Before an AI use case is deployed, management should understand what personal data is involved, whether its use is permitted, what notices or disclosures apply, what access restrictions are required, how the data will be retained, and whether any vendor relationships create additional privacy exposure.

This is particularly important in generative AI environments, where employees may paste confidential, proprietary, or personal information into tools without fully appreciating the consequences. A privacy incident in the AI context may not begin with malicious intent. It may begin with convenience. That is why governance must focus not only on policy, but on system design, training, and usage constraints.

The CCO has a critical role here because privacy governance often intersects with policy management, employee conduct, training, investigations, and disciplinary response. If privacy is left solely to specialists without integration into the broader governance process, the organization risks building fragmented controls that do not hold together under pressure.

Model Integrity Is a Governance Question

Model integrity sounds like a technical term, but it is really a governance concept. It asks whether the system is performing in a manner consistent with its intended purpose, risk classification, and control expectations.

That means asking hard questions. Is the model accurate enough for the use case? Has it been validated before deployment? Are there known limitations? Does it perform differently across populations or scenarios? Can outputs be reviewed in a meaningful way by human decision-makers? Are there conditions under which the model should not be used? These are not engineering questions alone. They are governance questions because they determine whether management is relying on the system responsibly.

This is where NIST’s AI Risk Management Framework is especially valuable. NIST emphasizes that organizations should map, measure, and manage AI risks, including those related to validity, reliability, safety, security, resilience, explainability, and fairness. It is not enough to say that a tool works most of the time. The organization must understand where it may fail, how failure will be detected, and what safeguards are in place when it does.

ISO/IEC 42001 reinforces the same discipline through the lens of management systems. It requires structured attention to risk identification, control design, monitoring, documentation, and continual improvement. In other words, it treats model integrity not as a technical aspiration, but as an organizational responsibility. For boards, the takeaway is direct: if management cannot explain how model integrity is validated and maintained, then the board does not yet have assurance that AI is being governed effectively.

Third Parties Increase the Stakes

One of the more dangerous assumptions in AI governance is that outsourcing technology also outsources risk. It does not. Many organizations will deploy AI through third-party vendors, embedded tools, software platforms, or external service providers. That may be practical, even necessary. But it also means the company may be relying on data practices, training methods, model assumptions, or privacy safeguards it did not design and cannot fully see.

That is why data governance, privacy, and model integrity must extend to third-party risk management. Procurement cannot focus solely on functionality and price. Legal cannot focus solely on contract form. Compliance, privacy, security, and risk all need to understand what the vendor is doing, what data is being used, what rights the company has to inspect or question performance, and what happens when the vendor changes the model or its underlying terms.

This is not simply good vendor management. It is a governance necessity. A company remains accountable for business decisions made using third-party AI tools, especially when those tools affect customers, employees, compliance obligations, or regulated activities.

Documentation Is What Makes Governance Real

As with every major governance issue, documentation is what turns theory into evidence. If a company is serious about data governance, privacy, and model integrity, it should have records that show it. Those records may include data inventories, data classification standards, model validation summaries, privacy assessments, vendor due diligence files, testing results, approved use cases, control requirements, escalation logs, and remediation actions. Without this documentation, governance becomes anecdotal. With it, governance becomes reviewable, auditable, and improvable.

This is another place where the ECCP mindset is so useful. Prosecutors and regulators tend to ask the same core question in different ways: how do you know your program works? In the AI context, the answer cannot be “our vendor told us so” or “the business says the tool is helpful.” It must be grounded in evidence, testing, and management discipline.

What Boards and CCOs Should Be Pressing For

Boards should expect management to present AI use cases with enough clarity to answer four questions. What data is being used? What privacy implications attach to that use? How has model integrity been tested? What controls will remain in place after deployment?

CCOs should press equally hard from the management side. Is there a documented data governance process for AI? Are privacy reviews built into the intake and approval process? Are models validated according to risk? Are third-party tools subject to diligence and contract controls? Are incidents and anomalies logged and investigated? Are employees trained not to expose confidential or personal information through improper use? These are not burdensome questions. They are the practical questions that separate governed AI from hopeful AI.

Governance Requires Trustworthy Inputs and Defensible Outputs

In the end, AI governance depends on a simple but demanding truth: the organization must be able to trust what goes into the system and defend what comes out of it.

If the data is poorly governed, privacy rights are handled casually, or model integrity is assumed rather than demonstrated, then no amount of strategic enthusiasm will make the program safe. Boards will not have real oversight. CCOs will not have a defensible control environment. The company will merely have a faster way to create risk.

That is why data governance, privacy, and model integrity are not support issues in AI governance. They are central issues. They determine whether the enterprise is using AI with discipline or simply hoping for the best.

In the next article in this series, I will turn to the fourth governance challenge: ongoing monitoring, where many organizations discover that approving an AI use case is far easier than governing it after it goes live.

Categories
Compliance Into the Weeds

Compliance into the Weeds: Duty Owed vs. Material Nonpublic Information: Prediction Markets and Compliance

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore it more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly discuss prediction markets and their implications for compliance.

Tom and Matt focus on the phrase “violation of a duty owed” by employees and note that this standard appears significantly broader than traditional insider trading laws. They explain that insider trading law centers on the disclosure of material nonpublic information, whereas a “duty owed” framework emphasizes the underlying duty itself. Because “duty owed” could encompass obligations beyond material nonpublic information, they highlight the potential compliance implications and express interest in exploring a related hypothetical scenario.


A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred a Davey, a Communicator Award, and a W3 Award, all for podcast excellence.

Categories
Daily Compliance News

Daily Compliance News: April 8, 2026, The Fleeing Binance Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Social engineering scams in banking. (FT)
  • Tariff fraud and accounting tricks. (NYT)
  • Compliance professionals are leaving Binance. (Bloomberg)
  • Dirty accounting jobs and AI. (WSJ)
Categories
AI Today in 5

AI Today in 5: April 8, 2026, The AI in Professional Services Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. AI is increasing social engineering scams. (FT)
  2. Advancing compliance efficiency with AI. (Yahoo!Finance)
  3. AI governance really matters. (HR Brew)
  4. Privacy and AI. (BlufftonToday)
  5. AI to automate professional services. (FinTechGlobal)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics, and Investigations, on Amazon.com.