Categories
AI Today in 5

AI Today in 5: April 14, 2026, The AI Tastes Like Twinkies Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Kara Swisher says AI: ‘It tastes like a Twinkie.’ (Fortune)
  2. AI must move beyond name matching in sanctions. (FinTechGlobal)
  3. Healthcare needs to prepare for enforcement around AI use. (HealthcareITNews)
  4. Getting AI insurance. (CCI)
  5. Balancing AI innovation with compliance for RIAs. (FinTechGlobal)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories
Daily Compliance News

Daily Compliance News: April 14, 2026, The Annoyance Economy Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • How much does the Annoyance Economy cost you? (NYT)
  • Texas AG to probe ‘chemicals in clothes’. (Reuters)
  • Spanish PM’s wife charged with corruption. (Bloomberg)
  • KPMG reduces the role of humans in audits. (WSJ)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories
Innovation in Compliance

Innovation in Compliance: Carole Switzer on Mastering GRC, the AI-Enabled Law Firm, and the Future of Legal Leadership

Innovation spans many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox visits with GRC expert and OCEG co-founder Carole Switzer.

They highlight her new books, “Mastering GRC: The Lawyer’s Guide to Success in Governance, Risk and Compliance” and “The AI-Enabled Law Firm” (co-authored with Lee Denner). Carole explains that she wrote “Mastering GRC” to help lawyers applying legal skills in GRC roles move from reactive problem-solvers to proactive enterprise leaders by embedding themselves in business objectives, asking better questions, and collaborating across audit, risk, legal, and compliance. She recounts OCEG’s origins and its GRC Capability Model, certifications, and global growth. Carole discusses balancing legal oversight with business partnership, including the risks to privilege when lawyers act in business roles. Looking ahead, she predicts rapid AI-driven change in legal practice, stressing technology and data-meaning (“semantic layer”) issues and the need to adapt existing GRC frameworks for speed and volatility.

Key highlights:

  • Why These Two Books
  • From Counselor to Leader
  • Integrated Governance Mindset
  • How OCEG Built GRC Standards
  • Oversight vs Business Partner
  • Future of Legal GRC and AI
  • Managing Volatility With Frameworks

Resources:

Carole Switzer on LinkedIn

OCEG

The AI-Enabled Law Firm

Mastering GRC: The Lawyer’s Guide to Success in Governance, Risk and Compliance

Innovation in Compliance, a multi-award-winning podcast, was recently honored as the Number 4 podcast in Risk Management by 1,000,000 Podcasts.

Categories
SBR - Authors' Podcast

SBR-Author’s Podcast: Bribery Beyond Borders: The Hidden History and Future of the FCPA with Severin Wirz

Welcome to the SBR-Author’s Podcast! In this podcast series, Host Tom Fox visits with authors in the compliance arena and beyond. In this episode, Tom Fox welcomes Severin Wirz, Senior Director for Ethics and Compliance at Applied Materials and author of “Bribery Beyond Borders: The Story of the Foreign Corrupt Practices Act.”

The conversation explores the book’s origins, its narrative approach, and the future of FCPA enforcement. Severin explains that he used storytelling to challenge the conventional view that the FCPA was merely post-Watergate morality or the product of Stanley Sporkin alone, instead tracing a longer lineage of U.S. anti-corruption laws (Federal Corrupt Practices Act, Travel Act, Hobbs Act, RICO, Bank Secrecy Act). He recounts the United Brands/Eli Black scandal, the Wall Street Journal reporting that broke key bribery revelations, Stanley Sporkin’s role at the SEC, and the legislative influence of Frank Church and William Proxmire amid debates over disclosure, criminalization, and books-and-records provisions. Severin discusses the shifting geopolitical drivers of enforcement and recommends related reading and ways to get in touch.

Key highlights:

  • Severin’s Compliance Journey
  • Writing FCPA as a Thriller
  • Eli Black Scandal Breaks
  • Sporkin and SEC Momentum
  • Journalists Connect the Dots
  • Frank Church and Cold War Stakes
  • Proxmire Gets It Passed
  • Three Visions of the FCPA
  • Future Enforcement and Geopolitics

Resources:

Severin Wirz on LinkedIn

Bribery Beyond Borders: The Story of the Foreign Corrupt Practices Act on CCI and Amazon.

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Blog

Culture, Speak-Up, and Human Judgment: The Human Side of AI Governance

Artificial intelligence may be built on data, models, and code, but governance ultimately rests on people. For boards and Chief Compliance Officers, one of the most important questions is not only whether the organization has responsibly approved AI tools, but also whether employees are prepared to challenge them, report concerns, and apply human judgment when something does not look right. In many organizations, the earliest warning system for AI failure is not a dashboard. It is the workforce.

Over the course of this series, I have explored four critical governance challenges in AI: board oversight and accountability, strategy outrunning governance, data governance and privacy, and ongoing monitoring. This final blog post turns to the fifth and most underappreciated challenge of all: culture, speak-up, and human judgment.

Underappreciated because organizations often begin AI governance with structure in mind. They build committees, draft policies, classify risks, and establish approval gates. All of that is necessary. But structure alone is not sufficient. If the human beings closest to the work do not understand their role in AI governance, do not feel empowered to raise concerns, or begin to defer too readily to machine-generated outputs, the governance framework will be weaker than it appears on paper.

This is the point many companies miss. AI governance is not only about the technology. It is about whether the organization’s culture supports the responsible use of technology.

Employees Will See AI Failures First

In many companies, the first person to notice an AI problem will not be a board member, a Chief Executive Officer, or even a member of the governance committee. It will be an employee interacting with the tool in daily operations. It may be the customer service representative who sees the system generating inaccurate responses. It may be the HR professional who notices troubling patterns from an AI-supported screening tool. It may be the sales employee who sees a generative tool overstating product claims. It may be the finance professional who questions an automated summary that does not match underlying records. It may be the compliance analyst who sees a tool being used for an unapproved purpose.

That matters because early visibility is one of the most valuable protections a company can have. But visibility only becomes a control if employees know what to do with what they see. That is why culture is a governance issue. A workforce may spot the problem, but if employees do not understand that AI-related concerns are reportable, are unsure where to raise them, or believe management will ignore them, the warning system fails.

For boards and CCOs, that means AI governance cannot stop at policy creation. It must extend into behavior, reporting norms, and organizational trust.

Speak-Up Culture Is an AI Governance Control

Compliance professionals have long known that a speak-up culture is a control. It is often the first way a company learns of misconduct, process breakdowns, weak supervision, retaliation, harassment, fraud, or control evasion. The same principle now applies with equal force to AI.

Employees may observe biased outputs, inaccurate recommendations, privacy concerns, unexplained model behavior, misuse of tools, inappropriate reliance on machine-generated content, or efforts to bypass required human review. If they do not report those concerns, management may have no timely way to know what is happening.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) remains highly instructive. The ECCP places substantial emphasis on whether employees are comfortable raising concerns, whether the company investigates them appropriately, and whether retaliation is prohibited in practice. Those same questions should now be asked in the context of AI. Does the company’s reporting framework explicitly include AI-related concerns? Are managers trained to recognize and escalate those concerns? Are reports investigated with the same seriousness as other compliance issues? Are employees protected if they raise uncomfortable questions about a tool the business wants to use?

If the answer is no, the company may have AI procedures, but it has not yet embedded AI governance in its culture.

Human Judgment Cannot Be Optional

One of the most significant risks in AI governance is not simply that a model will be wrong. It is that people will stop questioning it. AI systems can produce outputs quickly, fluently, and with apparent confidence. That creates a powerful temptation for users to over-trust the tool. When a system sounds polished, appears efficient, and reduces workload, people may assume that its conclusions deserve deference. This is precisely where governance needs the corrective force of human judgment.

Human judgment cannot be treated as a ceremonial step or a paper requirement. It must be meaningful. That means the people reviewing AI outputs must have the authority, time, training, and confidence to challenge those outputs when needed. A human review requirement that exists only on paper is not much of a safeguard. If reviewers are overloaded, insufficiently trained, or culturally discouraged from slowing the process, the control may be largely illusory.

Boards should care about this because one of the easiest mistakes management can make is to describe human oversight in governance documents without testing whether it is functioning in practice. CCOs should care because this is a classic compliance problem. A control may be designed elegantly but fail in daily operations because the supporting culture is too weak to sustain it.

Training Must Change with AI

A company cannot expect good judgment around AI if it has not trained people on what good judgment looks like. That means AI training should go beyond technical usage instructions. Employees need to understand what risks may arise, what concerns are reportable, what approved use looks like, what prohibited use looks like, and why human challenge matters. Managers need additional training because they are often the first informal escalation point when an employee raises a concern. If managers dismiss AI concerns as overreactions, inconveniences, or resistance to innovation, the speak-up system will quickly lose credibility.

Training should also be role-based. The risks faced by a customer-facing team may differ from those faced by teams in HR, legal, procurement, marketing, finance, or internal audit. A generic AI training module may create awareness, but it will not create the operational judgment needed in high-risk areas.

This is where the NIST AI Risk Management Framework provides practical value. NIST’s emphasis on governance is not limited to formal structures. It contemplates culture, accountability, and the need for organizations to support informed decision-making across the enterprise. ISO/IEC 42001 similarly reinforces the importance of organizational competence, awareness, and defined responsibilities. Both frameworks point to a critical truth: responsible AI use depends not only on controls over the technology, but also on the capabilities of the people who use and oversee it.

Managers Matter More Than Companies Often Realize

If culture is the operating environment of governance, managers are often its most important local translators. An employee may not begin by filing a formal report. More often, an employee may raise a concern informally with a supervisor or colleague. “This output does not seem right.” “I do not think we should be using it this way.” “This seems to be pulling in sensitive information.” “This recommendation may be biased.” “The human review is not really happening anymore.”

The manager’s response in that moment matters enormously. Does the manager take the concern seriously? Does the manager know it should be escalated? Does the manager see it as a governance issue or as resistance to efficiency? Does the manager understand the difference between a minor usability complaint and a potentially significant compliance concern?

This is why boards and CCOs should not think about speak-up solely in hotline terms. AI governance depends on the broader management culture. If supervisors are not equipped to receive and escalate AI concerns appropriately, many issues will die in the middle of the organization before they ever reach a formal channel.

Anti-Retaliation Must Be Real in the AI Context

There is another dimension that cannot be overlooked: the risk of retaliation. In some organizations, employees may hesitate to raise AI concerns because they fear being labeled anti-innovation, obstructionist, or not commercially minded. That creates a subtle but serious governance risk. If the corporate atmosphere celebrates rapid AI adoption without equally celebrating responsible challenge, then employees may conclude that silence is safer than candor.

This is why anti-retaliation messaging must be explicit in the AI context. The company should make clear that raising concerns about inaccurate outputs, misuse, privacy risks, unfairness, or control breakdowns is part of responsible business conduct. It is not a failure to embrace innovation. It is a contribution to the effective governance of innovation.

The CCO should ensure that AI-related concerns are incorporated into existing anti-retaliation frameworks, investigations protocols, and communications. Boards should ask whether employee sentiment data, hotline trends, and internal investigations provide any signal that people are reluctant to question AI initiatives. If the organization is moving aggressively on AI, it should be equally serious about protecting those who raise governance concerns about it.

Documentation and Escalation Still Matter

As with every other aspect of AI governance, culture and judgment must be integrated into the process. A company should document how AI-related concerns can be reported, how they are triaged, who reviews them, what escalation triggers apply, and how resolutions are tracked. Concerns about AI should not be dismissed as vague general complaints. They should be reviewable and analyzable over time.
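
To make “reviewable and analyzable” concrete, here is a minimal sketch of what an AI concern record and its escalation logic might look like. Every field name, severity tier, and threshold below is an illustrative assumption, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

# Hypothetical severity tiers; every name and threshold in this sketch
# is an illustrative assumption, not a prescribed schema.
class Severity(Enum):
    LOW = 1        # usability complaint, no apparent compliance exposure
    ELEVATED = 2   # possible bias, privacy, or accuracy concern
    CRITICAL = 3   # unapproved use, bypassed human review, legal exposure

@dataclass
class AIConcern:
    reported_on: date
    business_unit: str
    tool: str
    description: str
    severity: Severity
    reviewer: str = "unassigned"
    resolution: str = ""

def needs_escalation(concern: AIConcern, prior: list[AIConcern]) -> bool:
    """Escalate critical reports immediately; escalate repeated concerns
    about the same tool in the same function, since repetition may point
    to a systemic governance problem rather than an isolated event."""
    if concern.severity is Severity.CRITICAL:
        return True
    repeats = [c for c in prior
               if c.tool == concern.tool
               and c.business_unit == concern.business_unit]
    return len(repeats) >= 2  # escalation threshold is an assumption
```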

This is essential not only for accountability but for learning. Patterns in employee concerns may reveal weaknesses in training, design, vendor management, access controls, or oversight. A single report may be an isolated event. Repeated concerns within a single function may point to a systemic governance problem. That is why speak-up is not just about receiving reports. It is about turning those reports into organizational intelligence.

The ECCP again offers a useful framework. It asks whether investigations are timely, whether root causes are examined, and whether lessons learned are fed back into the compliance program. AI governance should work the same way. A reported concern should not end with a narrow answer to the immediate complaint. It should prompt management to ask what the issue reveals about the broader governance environment.

Boards Must Model the Right Tone

This final point may be the most important. Culture is shaped by what leadership rewards, tolerates, and asks about. If the board only asks about AI efficiency, adoption, and speed, management will take the signal. If the board asks whether employees are raising concerns, whether human oversight is meaningful, whether managers are trained, and whether retaliation protections are working, management will take that signal as well.

For CCOs, this is a vital opportunity. The compliance function can help boards understand that governance is not only about structure and controls, but also about whether the organization has preserved the human capacity to question, escalate, and correct. In the AI context, that may be the most important governance capability of all.

Because in the end, even the most advanced system will not govern itself. An enterprise must govern it. That requires culture. It requires trust. It requires the courage to speak up. And it requires strong human judgment to look at an impressive output and still ask, “Is this right?”

The Human Side of Governance Is the Decisive Side

This final article brings the series back to a simple truth. AI governance is not only about what the company builds. It is about how the company behaves.

Boards may establish oversight. Management may create structures. Compliance may build controls. But if employees are not prepared to report concerns or exercise judgment, the organization will remain vulnerable. A strong AI governance program does not merely control the system. It empowers the people around the system to challenge it responsibly.

That is the human side of governance, and in many ways it is the decisive side. 

Categories
AI Today in 5

AI Today in 5: April 13, 2026, The AI Governance Framework Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Oracle brings storytelling to the heart of compliance with AI. (Yahoo!Finance)
  2. AI is bringing compliance to BioPharma. (PharmTech)
  3. Oracle brings AI agents to financial crime and compliance. (Financial IT)
  4. Building out your AI governance framework. (Bloomberg Law)
  5. AI developments finance pros should be tracking. (MIT)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories
Daily Compliance News

Daily Compliance News: April 13, 2026, The Giant Leap for Jargon Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • A giant leap for jargon. (FT)
  • CFTC wants no state regulation of Kalshi. (Reuters)
  • The gasoline godfather. (Bloomberg)
  • China targets middlemen in corruption crackdown. (SCMP)

Categories
FCPA Compliance Report

FCPA Compliance Report: Judicial Discretion, Sentencing Advocacy, and a Proactive Compliance Model: Joseph De Gregorio – Part 2

In this episode, Tom Fox welcomes former Wall Street trader Joseph De Gregorio, who was federally convicted and now applies a “compliance rebuild” methodology to demonstrate genuine remediation under legal scrutiny. This is Part 2 of a two-part podcast series.

In Part 2, we cover how federal judges exercise broad discretion despite the sentencing guidelines and often form views before the defendant appears in court, based on the pre-sentence report and sentencing memorandum; probation officers’ impressions are shaped by a detailed defendant letter and an authentic allocution, and judges emphasize post-offense conduct and may discount lawyer advocacy. Joseph then summarizes patterns from more than 400 white-collar cases, arguing that structural failures precede cultural and operational failures, and introduces the “access to scrutiny ratio” as the most predictive risk indicator. He lists five warning signals: unscrutinized top performers, known but unmapped monitoring gaps, unmanaged performance pressure, quietly resolved senior incidents, and compensation that rewards results without regard to method (noting DOJ’s September 2024 ECCP update). He outlines a proactive Compliance Rebuild approach using human failure audits, reverse access audits, directional speak-up analysis, and DOJ-aligned prosecution simulations.

Key highlights:

  • Pre-Sentence Reports Matter
  • Patterns Across 400 Cases
  • Five Compliance Warning Signals
  • Prosecution Simulation Stress Test
  • DOJ Evaluation Questions and Red Flags

Resources:

Joseph De Gregorio – Founder, JN Advisor™ Maximum Sentence Reduction – Minimum Time Served

📋 Initial Consultation: https://forms.gle/2fLczk7bbwM7KSaP6

Bloomberg Law Contributor: “How to Get a Judge to Reduce Your Client’s White-Collar Sentence” – Bloomberg Law 

Bloomberg Tax Contributor: Tax Fraud Sentencing Has a Gap Defense Attorneys Are Missing

Featured Expert: American Bar Association

Featured Sentencing Mitigation Expert: Law360

Featured Expert on Us Weekly with five-time Emmy Award-winning journalist Kristin Thorne for her “Uncovered” series (full video at the link below):

https://www.usmagazine.com/crime-news/news/federal-sentencing-strategist-reveals-why-some-real-housewives-stars-commit-fraud/

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Interested in the intersection of Sherlock Holmes and modern compliance? Check out my latest book, The Game is Afoot in Compliance.

Categories
Blog

Preventing Strategy Outrunning Governance in AI

One of the clearest AI governance challenges facing companies today is not a failure of ambition. It is a failure of pacing. Put simply, strategy is moving faster than governance. Business teams want results. Senior executives hear daily about efficiency gains, lower costs, faster decision-making, enhanced customer engagement, and competitive advantage. Vendors are more than happy to promise it all. Employees are already experimenting with AI tools on their own. In that environment, the pressure to move quickly is relentless.

That is where the compliance function must step forward. Not to say no. Not to slow innovation for the sake of slowing it. But to ensure that innovation moves with structure, discipline, and accountability. Governance is not the enemy of AI strategy. Governance is what allows an AI strategy to scale without becoming an enterprise risk event.

The Central Question for Boards and CCOs

For boards, Chief Compliance Officers, and business leaders, the central question is straightforward: has the company defined the rules of the road before putting AI into production? If the answer is no, the company is already behind.

This is not a theoretical problem. It is happening every day. A business unit buys an AI-enabled tool before legal, compliance, IT, privacy, and security have reviewed it. A vendor pitches a product as low-risk automation, even though it actually makes consequential recommendations. An employee uploads sensitive data into a generative AI platform for convenience. A use case that began as internal support quietly migrates into customer-facing decision-making. A pilot project becomes business as usual without anyone documenting who approved it, what risks were considered, or what human oversight is supposed to look like.

That is what it means when strategy outruns governance. The business has a faster process for adopting AI than it has for understanding, controlling, and monitoring AI risk.

What the DOJ Expects

The Department of Justice has been telling compliance professionals for years that an effective compliance program must be dynamic, risk-based, and integrated into the business. That lesson applies directly here. Under the Evaluation of Corporate Compliance Programs (ECCP), prosecutors ask whether a company has identified and assessed its risk profile, whether policies and procedures are practical and accessible, whether responsibilities are clearly assigned, whether decisions are documented, and whether the program evolves as risks change. AI governance sits squarely in that framework.

What “Rules of the Road” Means in Practice

What do the “rules of the road” look like in practice?

First, the company must define which AI use cases are permissible. These are lower-risk applications that can be used within established controls. Think internal drafting support, workflow automation for non-sensitive administrative tasks, or summarization tools used on approved data sets. Even here, there should be basic conditions: approved tools only, no confidential data unless authorized, user training, logging, and manager accountability.

Second, the company must identify restricted or high-risk use cases. These are situations where AI may be allowed, but only after enhanced review. This can include uses involving personal data, HR decisions, customer communications, pricing, fraud detection, credit or eligibility decisions, compliance surveillance, or any function where bias, opacity, or error could create legal, regulatory, or reputational harm. These use cases should trigger a more formal process that includes a documented risk assessment, legal and compliance review, data governance checks, testing, defined human oversight, and ongoing monitoring.

Third, the company must be clear about prohibited use cases. If an AI application cannot be used consistently with the company’s values, control environment, legal obligations, or risk appetite, it should be off-limits. That might include tools that process sensitive data in unapproved environments, systems that make fully automated consequential decisions without human review, or applications that cannot be explained, tested, validated, or monitored sufficiently for their intended use.

Fourth, the company must establish escalation thresholds. Not every AI decision belongs at the board level, but some certainly do. Use cases involving strategic transformation, material legal exposure, major customer impact, significant third-party dependency, or high-consequence decision-making may need escalation to senior management, a designated AI or risk committee, or the board itself. If management cannot explain when a matter gets elevated, governance is too vague to be trusted.
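
As a minimal sketch, the four rules above can be encoded as a simple risk-tiering check. The tiers, attributes, and criteria below are illustrative assumptions, not an implementation of any regulatory standard.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical tiers mirroring the rules above; all names and criteria
# are illustrative assumptions, not a standard taxonomy.
class Tier(Enum):
    PERMISSIBLE = "use within established controls"
    RESTRICTED = "enhanced review before deployment"
    PROHIBITED = "off-limits"

@dataclass
class UseCase:
    name: str
    uses_personal_data: bool
    makes_consequential_decisions: bool
    human_review_defined: bool
    explainable_and_testable: bool

def classify(uc: UseCase) -> Tier:
    # Fully automated consequential decisions without defined human
    # review, or systems that cannot be explained and tested for their
    # intended use, are off-limits.
    if uc.makes_consequential_decisions and not uc.human_review_defined:
        return Tier.PROHIBITED
    if not uc.explainable_and_testable:
        return Tier.PROHIBITED
    # Personal data or consequential decision-making triggers the more
    # formal review process described above.
    if uc.uses_personal_data or uc.makes_consequential_decisions:
        return Tier.RESTRICTED
    return Tier.PERMISSIBLE

# Example: internal drafting support on approved data sets.
print(classify(UseCase("drafting support", False, False, True, True)))
# Tier.PERMISSIBLE
```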

Why the NIST AI RMF Matters

This is where the NIST AI Risk Management Framework (AI RMF) is so useful. NIST does not treat AI governance as a one-time signoff exercise. It organizes governance as an ongoing discipline through four connected functions: Govern, Map, Measure, and Manage. For compliance professionals, that is a practical operating model.

Govern means setting accountability, policies, oversight structures, and risk tolerances. It answers who is responsible, who decides, and what standards apply. Map means understanding the use case, context, stakeholders, data, and risks. It answers what the system is actually doing and where exposure lies. Measure means testing, validating, and assessing performance and controls. It answers whether the system works as intended and whether the company can prove it. Manage means acting on what is learned through oversight, remediation, change management, and continual improvement. It answers whether the company is prepared to respond when reality diverges from the plan.
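
One way to see why these four functions form an ongoing loop rather than a one-time checklist is a short sketch like the following. The function names and checks layer illustrative assumptions on the AI RMF vocabulary; they are not NIST’s own specification.

```python
# A sketch of the Govern -> Map -> Measure -> Manage cycle as a loop
# that repeats for the life of the system. The checks are illustrative
# assumptions, not NIST's specification.

def govern(use_case: dict) -> dict:
    # Govern: assign accountability and a risk tolerance before launch.
    use_case.setdefault("owner", "compliance-review-queue")
    use_case["risk_tolerance"] = "low"
    return use_case

def map_context(use_case: dict) -> dict:
    # Map: record what the system does, what data it touches, who is affected.
    use_case["context_documented"] = True
    return use_case

def measure(use_case: dict) -> bool:
    # Measure: can the company prove the system works as intended?
    return use_case.get("validation_passed", False)

def manage(use_case: dict) -> None:
    # Manage: act on what was learned; remediate or keep monitoring.
    use_case["status"] = "monitored" if measure(use_case) else "remediation"

# The cycle repeats for the life of the system, not once at launch.
uc = {"name": "claims triage assistant", "validation_passed": False}
manage(map_context(govern(uc)))
print(uc["status"])  # remediation
```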

How ISO 42001 Reinforces Governance Discipline

ISO 42001 reinforces the same message from a management systems perspective. It brings structure, accountability, controls, and continual improvement to AI governance. That matters because many organizations do not fail because of a lack of policy language. They fail because they do not operationalize accountability. ISO 42001 pushes companies to embed AI governance into defined processes, assign responsibilities, document controls, conduct internal reviews, and take corrective action. In other words, it turns aspiration into a management discipline.

What Happens When Strategy Outruns Governance

What happens when none of this is done well?

Shadow AI is usually the first warning sign. Employees use public or lightly reviewed tools because they are easy to use, fast, and readily available. Sensitive data may be entered without approval. Outputs may be used in business decisions without validation. The organization tells itself it is still in the experimentation phase, while the risk has already gone live.

Vendor-driven deployment is another danger. The company relies too heavily on what the vendor says the product can do and not enough on its own evaluation of what the product should do, how it works, what data it uses, and what controls are required. When something goes wrong, accountability becomes murky. Procurement says the business wanted speed. The business says IT approved the integration. IT says legal reviewed the contract. Legal says compliance owns the policy. Compliance says no one submitted the use case for formal review. That is not governance. That is institutional finger-pointing.

Undocumented approvals are equally dangerous. An AI tool is launched because everyone generally agrees it seems useful. But there is no record of the intended purpose, risk rating, required controls, human review standard, or approval rationale. Six months later, the company cannot explain why the system was deployed, what guardrails were put in place, or whether its use has drifted beyond its original scope.

The Compliance Mechanisms Companies Need Now

That is why companies need concrete compliance mechanisms now. They need an intake process for AI use cases to enter a formal review channel before deployment. They need risk tiering so not every use case gets the same treatment, but higher-risk applications receive enhanced scrutiny. They need approval workflows with defined roles for the business, legal, compliance, privacy, security, IT, and, where appropriate, model risk or internal audit. They need board reporting triggers to inform leadership when AI adoption crosses materiality or risk thresholds. They need a current model and use-case inventory so the company knows what is in operation. They need change management, so updates, retraining, vendor changes, and scope shifts are reviewed rather than assumed. And they need periodic review because AI risk does not stand still after launch.
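
As one concrete illustration of the inventory, change-management, and periodic-review mechanisms just described, the hypothetical record below shows the kind of information each use case might carry. The fields and the review interval are assumptions for illustration, not a required format.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical inventory record; the field names and review interval
# are illustrative assumptions, not a required format.
@dataclass
class AIUseCaseRecord:
    name: str
    purpose: str
    risk_tier: str                # e.g., "permissible" or "restricted"
    approved_by: str
    approved_on: date
    human_oversight: str          # the required human review standard
    changes: list[str] = field(default_factory=list)

    def log_change(self, description: str) -> None:
        # Updates, retraining, vendor changes, and scope shifts are
        # reviewed and recorded rather than assumed.
        self.changes.append(f"{date.today().isoformat()}: {description}")

    def review_due(self, interval_days: int = 180) -> bool:
        # Periodic review: AI risk does not stand still after launch.
        return date.today() >= self.approved_on + timedelta(days=interval_days)
```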

The Special Role of Compliance

The compliance professional has a special role here. Compliance is often the function best positioned to connect governance, process, accountability, documentation, and escalation. That is precisely what the DOJ expects in an effective program. If the company can buy AI faster than it can classify risk, document controls, assign accountability, and test outcomes, the program is not keeping pace with the business. That gap will not stay theoretical for long. It will harden into enterprise risk.

Conclusion: Governance Must Keep Pace With Strategy

The lesson is direct. Strategy and governance must move together. AI governance is not a brake pedal. It is the steering system. A company that wants the benefits of AI must be disciplined enough to define where AI can go, where it cannot go, who decides, what gets documented, and when the business must stop and reassess. If the company can move faster on AI strategy than on AI governance, it is creating risk faster than it can manage. That is not innovation. That is exposure.

Categories
Sunday Book Review

Sunday Book Review: April 12, 2026, The Library of America for the Revolution Edition

In the Sunday Book Review, Tom Fox considers books that would interest compliance professionals, business executives, or anyone curious. They could be books about business, compliance, history, leadership, current events, or anything else that might interest Tom. In honor of the upcoming 250th anniversary of the United States, in this episode we look at four top books from the Library of America collecting contemporaneous writings on the American Revolution.

  1. Thomas Jefferson: Writings
  2. George Washington: Writings
  3. The American Revolution: Writings from the Pamphlet Debate
  4. The American Revolution: Writings from the War of Independence