Categories
Blog

Trust Is Not a Control: The Drop-In AI Audit

There is a hard truth at the center of modern AI governance that every compliance professional needs to confront: trust is not a control. For too long, organizations have approached AI oversight with a familiar but outdated mindset. They collect a vendor certification. They review a policy statement. They ask whether a third party is “aligned” with a recognized framework. Then they move on, assuming the governance box has been checked. In today’s enforcement and risk environment, that approach is no longer good enough.

The Department of Justice has repeatedly made this point in its Evaluation of Corporate Compliance Programs. The DOJ does not ask whether a company has a policy on paper. It asks whether the program is well designed, whether it is applied earnestly and in good faith, and, most importantly, whether it works in practice. That final phrase matters. Works in practice. It is the dividing line between performative governance and effective governance.

That is why every compliance program now needs a drop-in AI audit. It is not simply another diligence exercise. It is a mechanism for proving that governance is real. It is a practical third-party risk tool. And it is one of the clearest ways to operationalize the ECCP in the age of artificial intelligence.

The Problem: Third-Party AI Risk Is Moving Faster Than Oversight

Most companies do not build every AI capability internally. They rely on vendors, service providers, cloud platforms, embedded applications, analytics partners, and other third parties whose tools increasingly shape business processes and compliance outcomes. In many organizations, these third parties now influence investigations, due diligence, monitoring, onboarding, reporting, customer interactions, and internal decision-making. That creates a new class of third-party risk.

The problem is not only whether a vendor has responsible AI language in its contract or whether it can point to a certification. The problem is whether your organization can verify that the relevant controls are functioning as represented in the real-world use case affecting your business. That is where too many compliance programs still fall short.

Under the ECCP, the DOJ asks whether a company’s risk assessment is updated and informed by lessons learned. It asks whether the company has a process for managing risks presented by third parties. It asks whether controls have been tested, whether data is available to compliance personnel, and whether the company can demonstrate continuous improvement. These are not abstract questions. They go directly to how you oversee AI-enabled third parties. If your third-party AI governance begins and ends with a questionnaire and a PDF certification, you do not have evidence of governance. You have evidence of intake.

What a Drop-In Audit Really Does

A drop-in AI audit changes the question from “What does the third party say?” to “What can the third party prove?” That is a profound shift.

The value of the drop-in audit is that it brings compliance discipline directly into third-party AI oversight. Instead of accepting broad claims about safety, control, and alignment, you examine operational evidence. Instead of relying solely on design statements, you test for performance in practice. Instead of treating governance as a one-time approval event, you treat it as a repeatable audit process. In that sense, the drop-in audit becomes proof of governance.

It also becomes a far more mature third-party risk tool. You are no longer merely assessing whether a vendor appears sophisticated. You are assessing whether a third party can withstand scrutiny on the questions that matter most: scope, controls, traceability, escalation, and evidence.

And from an ECCP perspective, that is precisely the point. The DOJ has emphasized that compliance programs must move beyond paper design into operational reality. A drop-in audit is one of the few mechanisms that let you do that in a disciplined, documentable way.

From Vendor Oversight to Third-Party Governance

This discipline should not be limited to classic vendors. The better view is to expand the concept across all third parties that provide, influence, host, or materially shape AI-enabled services. That includes software providers, outsourced service partners, embedded AI functionality in enterprise tools, cloud-based analytics environments, compliance technology vendors, and any external party whose systems affect business-critical decisions or regulated processes.

Risk does not care about the label on the contract. If the third party’s AI affects your organization’s screening, monitoring, investigations, decision support, or disclosures, the compliance risk is real. Your governance process must be equally real. This is why “trust but verify” is no longer just a slogan. It is a design principle for third-party oversight of AI.

The Core Elements of the Drop-In Audit

A strong drop-in audit has three features: sampling, contradiction testing, and escalation.

1. Sampling: Evidence of Operation, Not Merely Design

Sampling is where governance becomes tangible. A company requests specific artifacts tied to actual use cases and actual control operations. This may include scope documents, Statements of Applicability, system documentation, training data summaries, access controls, incident records, runtime logs, or evidence of human review. The point is simple: operational evidence is what matters.

This is where a compliance function moves from hearing about controls to seeing them in action. It is also where internal audit can add real value by testing whether the evidence supports the stated control environment.

2. Contradiction Testing: Where Real Risk Emerges

This is one of the most important and underused concepts in third-party AI oversight. Inconsistencies between claims and reality are where governance failures emerge. If a third party says its certification covers a given service, does the scope document confirm it? If it claims strong incident response, does the record back it up? If it represents strong human oversight, do the runtime traces show meaningful intervention or only theoretical review points?

Contradiction testing is powerful because it goes to credibility. It tests whether the governance narrative matches the operating reality. Under the ECCP, that is exactly the kind of inquiry prosecutors and regulators will care about. It speaks to effectiveness, honesty, and control discipline.
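Contradiction testing lends itself to a simple mechanical sketch. The claim and evidence fields below are hypothetical, not drawn from any real audit schema; a minimal Python illustration of checking a vendor's governance narrative against sampled evidence might look like:

```python
# Illustrative sketch only: the claims/evidence dicts and the check itself
# are assumptions for demonstration, not a real tool or standard schema.

def find_contradictions(claims: dict, evidence: dict) -> list:
    """Flag vendor claims that the sampled evidence does not support."""
    findings = []
    for control, claimed in claims.items():
        observed = evidence.get(control)
        if observed is None:
            # A claim with no corresponding artifact is itself a finding.
            findings.append(f"{control}: claimed but no evidence sampled")
        elif observed != claimed:
            findings.append(f"{control}: claim '{claimed}' vs evidence '{observed}'")
    return findings

claims = {
    "certification_scope": "covers hosted AI service",
    "human_oversight": "reviewer approves every automated decision",
}
evidence = {
    "certification_scope": "covers on-premise product only",
    "human_oversight": "reviewer approves every automated decision",
}

for finding in find_contradictions(claims, evidence):
    print(finding)
```

The point of the sketch is the shape of the inquiry: every claim must map to an artifact, and a mismatch or a missing artifact is recorded rather than explained away.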

3. Escalation: Governance in Action

Governance without consequences is not governance. A drop-in audit must include clear escalation triggers. Missing evidence, mismatched certification scope, unexplained gaps, unresolved incidents, or inconsistent remediation should not be noted in isolation. They should trigger action.

That action may include enhanced diligence, contractual remediation, independent validation, temporary use restrictions, or deeper audit review. The important point is that the program responds. This is where the drop-in audit operationalizes the ECCP. It demonstrates that the company not only identifies risk but also acts on it.
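As a rough illustration, the escalation triggers named above can be treated as a lookup that must always resolve to an action. The trigger names and actions here are assumptions, not a prescribed playbook:

```python
# Hypothetical escalation playbook: trigger names and actions are
# illustrative assumptions, not regulatory requirements.

ESCALATION_ACTIONS = {
    "missing_evidence": "enhanced diligence",
    "scope_mismatch": "independent validation",
    "unresolved_incident": "temporary use restriction",
    "inconsistent_remediation": "deeper audit review",
}

def escalate(findings: list) -> list:
    """Map audit findings to follow-up actions; no finding falls through silently."""
    actions = []
    for finding in findings:
        action = ESCALATION_ACTIONS.get(finding)
        if action is None:
            # An untriaged finding is forced to the surface, not merely noted.
            raise ValueError(f"untriaged finding: {finding}")
        actions.append(action)
    return actions

print(escalate(["scope_mismatch", "missing_evidence"]))
```

The design choice worth noticing is the raised error: a finding outside the playbook halts the process, which mirrors the principle that gaps should trigger action rather than be noted in isolation.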

How the Drop-In Audit Maps to the ECCP

The drop-in audit aligns tightly with the DOJ’s framework for an effective compliance program. Risk assessment is addressed because the audit focuses attention on where AI-enabled third parties create actual operational and control exposure. Policies and procedures are tested because the company does not merely accept them at face value. It assesses whether the stated controls are supported by evidence. Third-party management is strengthened by making oversight continuous, risk-based, and verifiable. Testing and continuous improvement are built into the audit process, which identifies gaps, contradictions, and corrective actions. Investigation and remediation principles are reinforced by documenting, escalating, and using findings to improve the control environment.

Most importantly, the audit answers the ECCP’s central practical question: Does the program work in practice?

How the Drop-In Audit Maps to NIST AI RMF

The NIST AI Risk Management Framework provides a highly useful structure for the drop-in audit, especially through its Govern, Map, Measure, and Manage functions.

  1. Govern is reflected in defined ownership, accountability, and escalation when issues are identified.
  2. Map is reflected in understanding the third party’s actual AI use case, scope, dependencies, and business impact.
  3. Measure is reflected in the use of evidence, runtime observations, contradiction testing, and performance assessment.
  4. Manage is reflected in remediation, ongoing oversight, and updates to controls based on audit findings.

In this way, the drop-in audit becomes a practical tool for taking the NIST AI RMF from concept to execution.

How the Drop-In Audit Maps to ISO/IEC 42001

ISO/IEC 42001 adds the management-system discipline that compliance programs need. Its value lies in documented scope, role clarity, control applicability, monitoring, corrective action, and continual improvement. A drop-in audit fits naturally into that structure because it tests whether those elements are visible in operation, not merely stated in documentation.

The Statement of Applicability becomes meaningful when the company verifies that the controls identified there actually correspond to the deployed service. Monitoring becomes meaningful when evidence is examined. Corrective action becomes meaningful when gaps trigger follow-up. Continual improvement becomes meaningful when findings are fed back into governance. That is why the documentation you generate should serve your board, regulators, and internal stakeholders without additional work. Producing evidence that travels is one of the most strategic benefits of this approach.

Why Every Compliance Program Needs This Now

The strategic payoff is straightforward. Strong AI governance is not a drag on innovation. It is what allows innovation to scale with trust. A drop-in audit gives compliance and internal audit a mechanism to test what matters, document their findings, and create evidence that withstands scrutiny. It moves governance from assertion to proof. It transforms third-party diligence into a repeatable, auditable process. It helps ensure that when regulators, boards, or business leaders ask how the company knows its third-party AI governance is working, there is a real answer.

Because, in the end, evidence of governance matters. Not narratives. Not slide decks. Evidence. President Reagan was right in the 1980s, and he is still right today: “Trust, but verify.”

Categories
Daily Compliance News

Daily Compliance News: April 21, 2026, The Scambodia Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Pope Leo calls on Angolans to fight corruption. (Africa News)
  • Should CEOs be the face of a company? (NYT)
  • Cambodia’s business model is scamming. (WSJ)
  • SCt to review SEC disgorgement powers. (Reuters)

Interested in attending Compliance Week 2026? Click here for information and Registration. Listeners to this podcast receive a 20% discount on the event. Use the Registration Code TOMFOX20

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations on Amazon.com.

Categories
AI Today in 5

AI Today in 5: April 21, 2026, The 7 Questions You Should Ask Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. 7 questions to ask about AI and compliance. (The News Tribune)
  2. Compliance can outsource tools to AI but not judgment. (FinTech Global)
  3. Data Authenticity and Accountability for AI. (CCI)
  4. Do AI chatbots make you stupider? (BBC)
  5. ICU nurses get AI help. (Healthcare IT News)

Interested in attending Compliance Week 2026? Click here for information and Registration. Listeners to this podcast receive a 20% discount on the event. Use the Registration Code TOMFOX20

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations on Amazon.com.

Categories
Innovation in Compliance

Innovation in Compliance: When a Senior Leader Faces Cancer: Disclosure, Continuity Planning, and Resilience with Deb Krier

Innovation comes in many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, Tom visits Deb Krier to discuss her work coaching primarily executives after serious cancer diagnoses.

Deb discusses the unique leadership challenges of privacy, disclosure, and maintaining credibility while undergoing treatment. Deb, a corporate communications professional and founder of Wise Women Communications, discusses what leaders should share with boards, HR, close colleagues, and clients, emphasizing the importance of controlling the narrative to prevent rumors and coordinating with medical teams to plan around energy levels, treatment, and time away. She describes resilience as “grit,” encourages leaders to delegate and empower teams, and urges organizations to strengthen business continuity and contingency planning so no single person holds ultimate authority. Deb highlights the importance of a support “tribe,” the benefits of humor, and advises compliance professionals to listen with empathy while addressing any legal disclosure obligations.

Key highlights:

  • Cancer Coaching for Executives
  • Work Impact and Treatment Planning
  • Resilient Leadership in Crisis
  • Support Tribe and Community
  • Humor as Medicine
  • Compliance, Empathy, and Culture

Resources:

Deb Krier on LinkedIn

Your Cancer Coach Website 

The Business Power Hour Podcast

Innovation in Compliance is a multi-award-winning podcast that was recently ranked Number 4 in Risk Management by 1,000,000 Podcasts.

Categories
SBR - Authors' Podcast

SBR-Author’s Podcast: Invitational Selling: Building Trust, Engagement, and Human Connection in a Digital World with Dr. Dennis Cummins

Welcome to the SBR-Author’s Podcast! In this podcast series, Host Tom Fox visits with authors in the compliance arena and beyond. In this episode, Tom Fox welcomes Dr. Dennis Cummins to talk about his new book “Invitational Selling: The Human Connection Advantage.”

Dr. Cummins discusses his new book, which grew from his experience “selling from the stage” and from learning that pressing less and connecting more led to better results. He argues that traditional high-pressure sales tactics are failing because buyers have more information, face constant messaging, and are increasingly skeptical, while AI-driven speed and automation can erode authenticity and trust. He defines invitational selling as a three-phase framework: connect, convey, and convert by inviting next steps. The framework applies not only to selling products but also to leaders seeking organizational buy-in, speak-up/listen-up cultures, and engagement that reduces resistance and turnover. He shares a story about his late daughter, Lauren, selling bracelets as a lesson in rapport, value, and meaning. The book launches April 28, with launch proceeds donated to the Make-A-Wish Foundation.

Key highlights:

  • Why Write This Book
  • Connect Convey Convert
  • Beyond Sales: Organizational Buy-In
  • Speak Up Listen Up Culture
  • Inviting Beats Telling
  • Using AI Without Losing Trust

Resources:

Dr. Dennis Cummins on LinkedIn

Dr. Dennis Cummins Website

Invitational Selling: click here

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
The PfBCon Podcast

The PFBCon Podcast: From Podcast to Book (and Back): Building a Content Engine with AI Support

Tom Fox explains how podcasts and books can fuel each other in a circular content-creation process and how AI can assist as a research and editorial assistant. Drawing on his experience founding the Compliance Podcast Network (growing from five to 75 shows) and the Texas Hill Country Podcast Network (from three to 15 shows, with 3,000 subscribers), he emphasizes the value of podcasting, especially in rural areas. Examples include creating a podcast that led to a book about seven older female artists, producing a leader’s autobiography by recording outlined life stories into transcripts (earning a gold podcast award), and turning books into podcast series (e.g., risk management/AI, FCPA Survival Guide, and Megan Dougherty’s Podcasting for Business to support her launch). He describes writing The Compliance Handbook via daily podcast recordings as an editing tool and using AI prompts to generate topic ideas, guest outlines, blog drafts, white paper drafts, chapter drafts, and social post drafts, always requiring human fact-checking and editing.

Key highlights:

  • Podcast Book Loop
  • Building Podcast Networks
  • Magnificent Seven Story
  • Legacy Autobiography Podcast
  • Upping Your Game Brand
  • FCPA Book to Podcast
  • Podcasting for Business Launch
  • Compliance Handbook Method
  • Write What You Love
  • AI Research Editorial Tools

Resources:

Follow Tom Fox on:

Instagram

Facebook

YouTube

Twitter

LinkedIn

Compliance Podcast Network

Texas Hill Country Podcast Network

Categories
Blog

AI Disclosures, Controls, and D&O Coverage: Closing the Governance Gap Around Artificial Intelligence

A new governance gap is emerging around artificial intelligence, and it is one that Chief Compliance Officers, compliance professionals, and boards need to confront now. It sits at the intersection of three areas that too many companies still treat separately: public disclosures, internal controls, and insurance coverage. That siloed approach is no longer sustainable.

As companies speak more confidently about their AI strategies, insurers are becoming more cautious about the risks those strategies create. That tension matters. It signals that the market is beginning to see something many organizations have not yet fully addressed: when a company’s statements about AI outpace its actual governance, the exposure is not merely operational or reputational. It can become a disclosure issue, a board oversight issue, and ultimately a proof-of-governance issue under the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP).

For the compliance professional, this is not simply an insurance story. It is a compliance integration story. The question is whether the company can align its statements about AI, the controls it has in place, and the protections it believes it has in place if something goes wrong.

The New Governance Gap

Many organizations are eager to describe AI as a source of innovation, efficiency, better decision-making, or competitive advantage. Those messages increasingly appear in earnings calls, investor decks, public filings, marketing materials, and board presentations. Yet the underlying governance structures often remain immature. That disconnect is the governance gap.

It appears when management speaks broadly about responsible AI but has not built a complete inventory of AI use cases. It appears when companies discuss oversight but cannot show testing, documentation, or monitoring. It appears when boards assume that insurance will respond to AI-related claims without understanding how new policy language may narrow coverage.

This is where D&O coverage becomes so important. It is not the center of the story, but it is a revealing signal. If insurers are revisiting policy language and introducing exclusions or limitations tied to AI-related conduct, it suggests the market sees governance risk. In other words, the insurance market is sending a message: AI-related claims are no longer hypothetical, and companies that cannot demonstrate disciplined oversight may find that risk transfer is less available than they assumed.

Why the ECCP Should Be the Primary Lens

The DOJ’s ECCP remains the most useful framework for analyzing this issue because it asks exactly the right questions.

Has the company conducted a risk assessment that accounts for emerging risks? Are policies and procedures aligned with actual business practice? Are controls working in practice? Is there proper oversight, accountability, and continuous improvement? Can the company demonstrate all of this with evidence? Those are compliance questions, but they are also the right AI governance questions.

If a company makes public statements about AI capability, oversight, or reliability, the ECCP lens requires more than aspiration. It requires substantiation. Can the company show who owns the AI risk? Can it demonstrate how models or systems are tested? Can it show escalation procedures when problems arise? Can it document how AI-related decisions are monitored, reviewed, and improved over time?

If the answer is no, then the issue is not simply that the company may have overpromised. The issue is that its compliance program may not be adequately addressing a material emerging risk. That is why CCOs should view AI as a cross-functional challenge requiring integration across legal, compliance, technology, risk, audit, investor relations, and the board.

AI Disclosure Must Be Evidence-Based

One of the most practical steps a compliance function can take is to push for an evidence-based disclosure process around AI. This means that public statements about AI should not be driven solely by enthusiasm, market pressure, or executive optimism. They should be grounded in underlying documentation. If the company says it uses AI responsibly, where is the governance framework? If it claims AI improves decision-making, what testing supports that assertion? If it says it has safeguards, where are the control descriptions, monitoring results, and escalation records?

This is not about suppressing innovation. It is about ensuring that disclosure discipline keeps pace with technological ambition. For boards, this means asking harder questions before approving or relying on public AI narratives. For compliance officers, it means helping management build the evidentiary record that turns broad statements into defensible representations.

Controls Must Catch Up to Strategy

This is where the “how-to” work begins. Compliance professionals should begin by creating a structured inventory of AI use cases across the enterprise. That inventory should identify where AI is being used, what decisions it informs, what data it relies on, who owns it, and what risks it entails.

Once that inventory exists, risk tiering should follow. Not every AI use case carries the same compliance significance. A low-risk productivity tool does not need the same oversight as a system that affects investigations, third-party due diligence, customer interactions, financial reporting, or core operational decisions.

From there, the company can design controls proportionate to risk. High-impact uses of AI should have documented governance, human review where appropriate, testing protocols, escalation triggers, and monitoring requirements. The compliance team should be able to answer a simple question: where are the controls, and how do we know they work? That is the heart of the ECCP inquiry.
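The inventory-then-tier sequence described above can be sketched in a few lines. The field names and tiering rules below are illustrative assumptions, not a standard schema:

```python
# A minimal sketch of an AI use-case inventory with risk tiering.
# Field names, example entries, and tier thresholds are assumptions
# for illustration, not a prescribed taxonomy.

from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    owner: str
    decisions_informed: str
    affects_regulated_process: bool
    human_review: bool

def risk_tier(uc: AIUseCase) -> str:
    """High-impact uses (regulated processes without human review) get the most oversight."""
    if uc.affects_regulated_process and not uc.human_review:
        return "high"
    if uc.affects_regulated_process:
        return "medium"
    return "low"

inventory = [
    AIUseCase("meeting summarizer", "IT", "none", False, True),
    AIUseCase("third-party screening model", "Compliance", "onboarding approvals", True, False),
]

for uc in inventory:
    print(uc.name, "->", risk_tier(uc))
```

Even this toy version makes the ECCP question answerable in structure: every use case has a named owner, a stated decision impact, and a tier that determines the controls it must carry.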

Where NIST AI RMF and ISO/IEC 42001 Fit

This is also where the NIST AI Risk Management Framework and ISO/IEC 42001 become highly practical tools. NIST AI RMF helps organizations govern, map, measure, and manage AI risks. For compliance professionals, this provides a disciplined structure for identifying AI use cases, understanding impacts, assessing reliability, and managing response. It is especially useful in linking abstract AI risk to operational decision-making.

ISO/IEC 42001 brings management system discipline to AI governance. It focuses on defined roles, documented processes, control implementation, monitoring, internal review, and continual improvement. That makes it an excellent bridge between policy and execution. Together, these frameworks help operationalize the ECCP. The ECCP tells you what an effective compliance program should be able to demonstrate. NIST AI RMF helps structure the risk analysis. ISO 42001 helps embed those requirements into a repeatable governance process.

For CCOs, the practical lesson is clear: use these frameworks not as academic overlays, but as working tools to build ownership, documentation, testing, and accountability.

Insurance Is a Governance Input

Companies also need to stop treating insurance as an afterthought. D&O coverage should be considered a governance input, not merely a downstream purchase. If policy language is narrowing around AI-related claims, boards and compliance leaders need to understand what that means. What scenarios might raise disclosure-related allegations? Where is ambiguity in coverage? What assumptions has management made about protection that may no longer hold?

Compliance does not need to become an insurance specialist. But it does need to ensure that disclosure, governance, and risk transfer are aligned. If the company is making strong public claims about AI while carrying unexamined governance weaknesses and uncertain coverage, that is precisely the kind of mismatch that can trigger a crisis.

Closing the Gap Before It Becomes a Failure

The larger lesson is straightforward. AI governance is not simply about technology controls. It is about integration. It is about ensuring that what the company says, what it does, and what it can prove all line up. That is why the governance gap matters so much. It is the space where strategy outruns structure, where disclosure outruns evidence, and where confidence outruns control. For boards and compliance professionals, the task is to close that gap before it becomes a failure.

The companies that do this well will not necessarily be the ones moving the fastest. They will be the ones building documented, tested, monitored, and governed AI programs that stand up to regulatory scrutiny, investor pressure, and real-world disruption. That is not bureaucracy. That is the price of sustainable innovation.

Categories
AI Today in 5

AI Today in 5: April 20, 2026, The Jassy’s Rules for AI and FinTech Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Agentic AI demands new cyber protections. (CX Today)
  2. Top markets for AI-driven AML compliance. (FinTech Global)
  3. Legal AI depends on trust, authoritative content, and workflows. (Wolters Kluwer)
  4. AI is reshaping medical device compliance. (Today’s Medical Developments)
  5. Jassy’s rules for AI fintech. (FinTech Magazine)

Interested in attending Compliance Week 2026? Click here for information and Registration. Listeners to this podcast receive a 20% discount on the event. Use the Registration Code TOMFOX20

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations on Amazon.com.

Categories
Daily Compliance News

Daily Compliance News: April 20, 2026, The ABC is Good Politics Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Anti-bribery isn’t just good business, it’s good politics. (TNR)
  • The bears ate my car. (NYT)
  • TACO caves in on Anthropic. (WSJ)
  • Deutsche Bank reports more potential Russian sanction violations. (FT)

Interested in attending Compliance Week 2026? Click here for information and Registration. Listeners to this podcast receive a 20% discount on the event. Use the Registration Code TOMFOX20

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations on Amazon.com.

Categories
FCPA Compliance Report

FCPA Compliance Report: Vince Walden on AI, Digital Assistants, and ROI at Compliance Week 2026

In this episode, Tom Fox welcomes Vince Walden, President of konaAI, to discuss his two panels at Compliance Week 2026 and the state of AI in compliance.

For the panel on AI and the compliance workforce, Vince argues jobs are generally safe because AI is best deployed as “digital assistants” (not digital employees) that handle repetitive tasks like data pulls and third-party due diligence, while keeping the “expert in the loop,” and he plans to show real use-case examples. For the ROI panel, Vince and co-panelists will discuss measuring impact through productivity gains, cost savings, faster turnaround for due diligence, and expanded compliance capabilities such as culture assessments, training, and transaction monitoring. Vince also links AI analytics to detecting fraud, waste, and abuse, citing a potential $35 million vendor abuse recovery, and explains why Compliance Week remains a top conference for regulator and peer benchmarking.

Key highlights:

  • AI Workforce
  • Digital Assistants in Action
  • Measuring Compliance ROI
  • Fraud Waste Abuse
  • Affordable Analytics Wins
  • Why Attend Compliance Week

Resources:

Vince Walden on LinkedIn

konaAI

Compliance Week 2026, click here for information and Registration

Listeners to this podcast receive a 20% discount on the event. Use the Registration Code TOMFOX20

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations on Amazon.com.