Categories
Blog

Embedded Explainability: Turning Principles into Proof

Embedded explainability is the design choice to build “the why” directly into a system as it operates, rather than bolting on an explanation after the fact. In practical terms, it means the model or decision engine is instrumented to surface the key factors that drove a specific output as the output is delivered. In a compliance, risk, or fraud context, this can include reason codes tied to specific data features, a clear confidence score, the policy or control implicated, and a short narrative that translates technical drivers into business language. The point is not to turn every decision into a science project; the point is to make explanations an always-on product requirement, so investigators, managers, and auditors can quickly understand what the system saw, why it escalated, and what evidence supports the action.
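As a concrete sketch of what "instrumented to surface the key factors" can mean in practice, the payload below shows one possible shape for a decision-level explanation. All field names, reason codes, and policy references are hypothetical, not drawn from any specific platform or standard:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names, reason codes, and policy IDs
# are hypothetical, not taken from any vendor schema or regulation.
@dataclass
class DecisionExplanation:
    decision_id: str
    outcome: str                  # e.g. "escalate", "clear"
    confidence: float             # model or rule-engine confidence, 0.0-1.0
    reason_codes: list = field(default_factory=list)  # feature-level drivers
    policy_refs: list = field(default_factory=list)   # controls implicated
    narrative: str = ""           # short business-language summary

    def audit_record(self) -> dict:
        """Flatten into an audit-ready record emitted with the decision."""
        return {
            "decision_id": self.decision_id,
            "outcome": self.outcome,
            "confidence": round(self.confidence, 2),
            "reason_codes": self.reason_codes,
            "policy_refs": self.policy_refs,
            "narrative": self.narrative,
        }

exp = DecisionExplanation(
    decision_id="CASE-1042",
    outcome="escalate",
    confidence=0.87,
    reason_codes=["RAPID_FUND_MOVEMENT", "NEW_COUNTERPARTY"],
    policy_refs=["AML-POL-3.2"],
    narrative="Escalated: rapid fund movement to a new counterparty.",
)
print(exp.audit_record()["outcome"])  # escalate
```

Because the record is emitted with every decision rather than reconstructed later, the audit trail accumulates automatically, decision by decision.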

Where this becomes powerful is in governance. Embedded explainability creates a durable audit trail and makes accountability real: you can test whether explanations are consistent over time, whether they drift, whether similarly situated cases are treated consistently, and whether the system is relying on inappropriate proxies. It also reduces the “black box” tax during exams and internal reviews because your documentation is generated continuously, decision by decision, rather than recreated under a deadline. Done well, embedded explainability supports model risk management, accelerates case resolution, and builds user trust because the system does not just tell you what to do. It shows its work in a way that is usable for first-line teams and defensible for second-line and regulators.

If you have been in a single AI governance meeting, you have heard the same reassuring words: transparency, fairness, accountability. They sound good. They also do not answer the one question your Audit Committee will ask you the minute something goes sideways: can you prove what happened, who approved it, and why the system did what it did?

That is the heart of embedded explainability for a GRC or compliance professional. It is not a debate about data science. It is about building a program that can withstand scrutiny. In a strong compliance program, “principles” are not controls. They are intentions. Regulators, prosecutors, and auditors do not award credit for intent. They want evidence of implementation and effectiveness. When you embed explainability, you are building evidence into the workflow itself, so the program produces audit-ready artifacts without heroics.

Think like an auditor, not like a vendor.

In many organizations, “explainability” is treated like a technical deliverable. Someone pulls a chart. Someone cites an algorithm. Everyone nods. Then internal audit asks a simple question: “Show me how this use case was approved, how risks were assessed, how testing was performed, and how you monitor it today.”

That is where compliance needs to reframe the conversation. For GRC, the most important explainability is process explainability:

  • Who approved the use case, and what decision impact does it have?
  • What risks were identified, and what mitigations were required?
  • What data and content sources were used, and how are they governed?
  • What testing was done, what thresholds were applied, and what failed?
  • Who monitors the system in production, and how are issues escalated?
  • How are changes controlled, logged, and reapproved?

If you can answer those questions with documentation you can pull on demand, you are not “talking about explainability.” You are demonstrating it.

The risk that hides in plain sight: language and cultural bias

Most compliance teams understand bias as a broad concept. The operational problem manifests in a narrower, more painful way: language and cultural bias within everyday compliance workflows. Consider the real-life places your organization may be using AI or analytics: hotline intake, investigations triage, monitoring and surveillance, third-party diligence, audit planning, policy interpretation, and case summarization. Now add the facts of corporate life: multilingual reporting, non-native English narratives, regional idioms, and different cultural communication styles.

Here is the compliance risk: the system may not be “biased” in a headline-grabbing way. It may be biased in a quiet, compounding way:

  • A hotline narrative written in non-native English is scored lower for credibility.
  • Regional phrasing triggers false positives in monitoring.
  • Direct communication styles are interpreted as “aggressive” or “retaliatory.”
  • Reports from certain geographies are deprioritized because of linguistic patterns.
  • Summaries strip context from culturally specific descriptions of harm.

This is why embedded explainability matters. If the system cannot tell you why it scored and routed a case the way it did, you will not find these problems until someone outside the company points them out to you.

A compliance-led lifecycle that makes explainability real

The practical move is to treat embedded explainability as a lifecycle requirement, not a go-live checkbox. You want stage gates with documented approvals and an evidence pack that travels with the use case from intake to monitoring. Think of it as the same discipline you already apply to third parties, controls testing, and investigations: define, document, test, approve, monitor, and improve.

A simple compliance-led lifecycle looks like this:

  1. Intake and approval: What is the use case, what is the decision impact, and who is accountable?
  2. Data and language risk assessment: What data is used, what languages and regions are in scope, and what bias risks exist?
  3. Build with traceability: Document the logic, rules, prompts, and human review points.
  4. Testing: Prove the system’s decisions can be reconstructed and that performance does not degrade across language groups.
  5. Deployment readiness: Confirm monitoring, access controls, logging, and escalation are active.
  6. Ongoing monitoring: Report drift, exceptions, overrides, and bias findings; reapprove material changes.

This is the compliance function earning its keep: not by arguing about definitions, but by building a governance machine that produces defensible evidence.

The minimum evidence pack: what you should be able to pull on demand

If you want to operationalize embedded explainability, standardize the artifacts. Do not let every team reinvent documentation. Your minimum evidence pack should be consistent across machine learning models, rules-based analytics, LLM workflows, and decision engines.

At a minimum, you should be able to produce:

  • Use case charter: purpose, scope, decision impact, owner, risk tier, approvals;
  • Data and language risk assessment: sources, language coverage, cultural risk factors, mitigations;
  • System specification: what it is, how it works, where humans intervene;
  • Testing artifacts: bias test plan, scenario tests, results, remediation notes;
  • Explainability checklist: proof you can reconstruct inputs, steps, outputs, and rationale;
  • Deployment approval record: stage-gate sign-offs and dates;
  • Monitoring and drift reports: trends, exceptions, and escalation notes;
  • Incident and escalation log: root cause, corrective actions, closure dates; and
  • Change management log: what changed, materiality, retesting, reapproval.
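One way to make the evidence pack auditable is a simple completeness check run before go-live. The artifact keys below mirror the list above; the structure itself is an illustrative sketch, not a prescribed schema:

```python
# Hypothetical sketch: artifact keys mirror the minimum evidence pack above.
REQUIRED_ARTIFACTS = [
    "use_case_charter",
    "data_language_risk_assessment",
    "system_specification",
    "testing_artifacts",
    "explainability_checklist",
    "deployment_approval_record",
    "monitoring_drift_reports",
    "incident_escalation_log",
    "change_management_log",
]

def missing_artifacts(pack: dict) -> list:
    """Return the artifacts an auditor would find missing or empty."""
    return [a for a in REQUIRED_ARTIFACTS if not pack.get(a)]

# Sample pack with only two of the nine artifacts filed.
pack = {
    "use_case_charter": "charter-v3.pdf",
    "system_specification": "spec-v1.docx",
}
print(missing_artifacts(pack))  # the gaps to close before go-live
```

The same check, run quarterly, turns "do we have documentation?" into a testable control rather than a scramble before an exam.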

If you have this, you have something most organizations still lack: a system of record for AI governance that internal and external auditors can actually test.

The Bottom Line

Embedded explainability is how you turn AI governance from a values statement into a control environment. It is how you protect innovation by making it defensible. If your program can reconstruct decisions, show approvals, demonstrate testing, and document monitoring, you are not hoping you are compliant. You are ready to prove it. 

Categories
Daily Compliance News

Daily Compliance News: February 18, 2026, The Stupid Is as Stupid Does Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Just how big is Ukraine’s corruption problem? (TheIndependent)
  • H-1B visas and GOP racial hatred. (NYT)
  • More energy investments in Venezuela. (WSJ)
  • The Trump Administration wants history and science removed from federal parks. (Reuters)
Categories
AI Today in 5

AI Today in 5: February 18, 2026, The AI for Rural Healthcare Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. AI to transform fraud investigations. (PRNewswire)
  2. Better defensible AI oversight. (PRNewswire)
  3. What’s in your compliance gap? (Forbes)
  4. Is the AI moment here? (FRSF)
  5. Oz wants AI avatars for rural healthcare. (NPR)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Compliance Into the Weeds

Compliance into the Weeds: Truth Stranger Than Fiction: Binance, Iran, Crypto and Compliance

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore it more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly look at recent reporting on Binance that raises questions about the effectiveness of its compliance program, monitorships, and executive attitudes toward compliance.

They recap Binance’s 2023 resolution of U.S. criminal and civil matters involving money laundering and sanctions evasion. They discuss the Fortune article, which reported that Binance continued to route funds through its platform to the Iranian government in 2024 and into 2025. They highlight Mr. Zou’s public response on X, which suggested that if investigators found misconduct, compliance had failed by not preventing it. The hosts criticize this as a misunderstanding: business units own risk, and compliance’s role is to provide systems, channels, oversight, and escalation, not to “prevent” all misconduct.

Key highlights:

  • Truth Stranger Than Fiction in Compliance
  • Binance’s 2023 Guilty Plea, $4.3B Penalty & Two Monitorships
  • Compliance Team Fallout: Investigators Fired & CCO on the Move
  • ‘If You Found It, You Failed’: Why CEOs Misunderstand Compliance
  • Iran as the Red Line: Plea Agreement Breach, Politics, and Corruption Risk
  • Will Anyone Enforce This? Rule of Law Questions and What Comes Next

Resources:

Matt in Radical Compliance

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred a Davey, a Communicator Award, and a W3 Award, all for podcast excellence.

Categories
The Hill Country Podcast

The Hill Country Podcast: Greg Faldyn: Leadership and Legacy in Rotary

Welcome to the award-winning The Hill Country Podcast. The Texas Hill Country is one of the most beautiful places on earth. In this podcast, Hill Country resident Tom Fox visits with the people and organizations that make this one of the most unique areas of Texas. In this episode, host Tom Fox speaks with Greg Faldyn, a seasoned insurance industry professional and a long-time Rotarian.

Greg, an insurance professional with over 40 years of experience and a dedicated Rotary Club member for nearly 25 years, views the 100th anniversary of Rotary in Kerrville as a landmark achievement in the organization’s enduring commitment to community service. Having played a pivotal role in organizing the celebration as the foundation chair, Greg has been instrumental in highlighting Rotary’s century-long partnerships with key local organizations, such as the Peterson Foundation and the Raphael Clinic. He proudly points to the Hill Country community’s collective resilience, particularly in the wake of events like the July 4th flood, as a testament to Rotary’s strength and impact. Passionate about engaging young professionals, Greg believes that the milestone anniversary serves not only as a celebration of past achievements but also as a call to future service and community enhancement.

Highlights include:

  • Rotary’s Centennial Celebration in Kerrville’s Community
  • Community Support through Rotary Foundation Grants
  • Rotary Club Weekly Engagement
  • Why Join Rotary?

Resources:

Rotary Club of Kerrville

Rotary District 5840

Rotary International

Other Hill Country-Focused Podcasts

Hill Country Authors Podcast

Hill Country Artists Podcast

Texas Hill Country Podcast Network

Cover Art

Nancy Huffman

Categories
Great Women in Compliance

Great Women in Compliance: The New Architecture of Legal and Compliance with AI

In this episode of Great Women in Compliance, Dr. Hemma R. Lomax speaks with Sam Flynn, co-founder of Josef, about the transformation of legal and compliance functions through technology. They discuss the importance of human-centered design, the role of AI in legal architecture, and the need for trust in AI tools. Sam shares his journey from creating Myki Fines to building self-service legal solutions that bridge the access-to-justice gap. The conversation emphasizes the importance of user experience, governance practices, and the need to rethink traditional professional roles in the legal field.

Takeaways:

  • Legal and compliance functions must evolve to be more human-centered.
  • AI can significantly enhance legal decision-making processes.
  • Trust in technology is crucial for successful implementation.
  • User experience should be prioritized in legal tech solutions.
  • Automation can free up valuable time for legal professionals.
  • Access to justice is a critical issue that can be addressed with technology.
  • Rethinking traditional roles in law can lead to better outcomes.
  • Data-driven insights can improve compliance practices.
  • Collaboration between experts and end-users is essential for success.
  • Legal technology should focus on delivering real value to users.

Sound Bites:

  • “AI should unleash human potential.”
  • “Trust is the key to unlocking value.”
  • “We need to build trust in our technology.”

Chapters:

00:00 Introduction to Legal Transformation

02:32 The Journey of Sam Flynn and Myki Fines

05:30 Rethinking Legal Systems and Design

08:10 Substance Over Form in Legal Processes

10:56 The Role of AI in Legal Architecture

13:39 Building a Legal Front Door

16:24 User Experience in Compliance

18:54 Engagement and Data Utilization

21:56 The Future of Legal Workflows

24:29 Deciding Between Automation and Human Input

26:56 Navigating High-Risk Inquiries

27:50 Strategic Automation for Stakeholder Engagement

28:58 The Importance of Human Expertise in AI

30:57 Transforming Fear into Opportunity with AI

32:59 Building Trustworthy AI in Legal Settings

36:56 Governance Practices for AI Deployment

43:51 Access to Justice: Bridging Gaps with Technology

Guest Biography:

Sam Flynn is the Co-Founder and Chief Operating Officer of Josef, a legal automation platform that empowers legal and compliance teams to create reliable, self-serve tools — no coding required. In his role, Sam leads Josef’s business operations, governance, marketing, and customer success functions, scaling both product impact and organizational trust.

An ex-BigLaw litigator and experienced legal technologist, Sam has long been passionate about using technology to bridge the access-to-justice gap and elevate the delivery of legal services. In 2016, he built Myki Fines, a public-facing legal tech solution that attracted more than 60,000 users in its first month and helped catalyze reforms to unfair laws.

At Josef, Sam combines legal expertise with product and operational leadership to help teams rethink how legal and compliance work gets done — shifting from inbox-driven bottlenecks to strategic architectures that deliver decision-useful guidance at scale. He is a frequent speaker on generative AI in legal, a board member of the Center for Legal Innovation, and an advocate for human-centered legal design.

Categories
Blog

AI and Work Intensification – The Compliance Response

There is a comforting myth circulating in corporate hallways and boardrooms: if we deploy AI across governance, risk, and compliance, the work will shrink. Investigations will move faster. Monitoring will get smarter. Policies will draft themselves. Third-party diligence will become push-button. The compliance function will finally “do more with less.” That myth was challenged in a recent Harvard Business Review article, “AI Doesn’t Reduce Work—It Intensifies It,” by Aruna Ranganathan and Xingqi Maggie Ye.

The authors argue that what actually happens is work intensification. AI expands throughput, increases expectations, and generates more outputs that still require human judgment, verification, and accountability. Instead of fewer tasks, you get more tasks. Instead of simpler work, you get faster cycles, more iterations, and new forms of quality risk. For the Chief Compliance Officer (CCO) leading AI governance, this is not a side effect. It is a core operating model issue.

If compliance owns AI governance across the enterprise, compliance must also own the discipline of how humans and AI work together. I call that discipline an AI practice standard: management guidance that sets expectations for pace, quality, verification, escalation, and sustainable workload.

Today, we consider this issue as a compliance operating model challenge across all GRC workflows: policy management, investigations, hotline intake, monitoring and surveillance, third-party due diligence, regulatory change management, audit planning, training, and reporting. The tone is cautionary because the risk is real: a compliance function that mistakes AI output volume for compliance effectiveness.

The Compliance Operating Model Problem: More Output, More Review, More Risk

Compliance work is not manufacturing. It is judgment work. It requires discretion, context, and defensible decisions. AI can accelerate inputs and draft outputs, but it does not accept responsibility. The CCO does. The business does. The board does. When AI enters GRC workflows, it tends to create four pressure points:

1. Compression of timelines. If a draft can be produced in five minutes, someone will ask why it cannot be finalized in five more.

2. Explosion of options. AI generates multiple versions, scenarios, and recommendations, which expands decision load and review cycles.

3. Higher volume of “signals.” AI-enabled monitoring produces more alerts, more pattern matches, and more anomalies. Much will be noise. All require triage.

4. Illusion of completion. Teams begin to treat a plausible AI answer as a finished work product. That is how quality defects are born.

The result is a compliance function that looks “faster” while becoming more fragile. Burnout rises. Rework increases. Errors creep into documentation. Controls become less reliable because the humans operating them are overwhelmed by the sheer volume AI makes possible.

All this means the question for the CCO is not, “How do we roll out AI?” The question is, “How do we govern the human work that AI intensifies?”

Five KPIs for Work Intensification Risk

Next, we consider five KPIs specifically designed to measure work intensification. These are board-credible, compliance-owned, and operationally measurable.

1. After-Hours Compliance Work Index

Percentage of compliance work activity occurring outside standard business hours (for example, 6 p.m. to 7 a.m.), measured across key systems (case management, GRC platform activity logs, email metadata, collaboration tool usage). This matters because AI compresses timelines and pushes work into nights and weekends. This index serves as an early warning for burnout and quality failures.

2. AI Rework Rate

Percentage of AI-assisted work products requiring material revision after human review (policies, investigation summaries, risk narratives, diligence reports). This matters because if AI increases speed but doubles rework, you are not gaining productivity. You are shifting effort downstream.

3. Cycle Time Compression vs. Quality Defect Ratio

Track cycle time reductions alongside quality defects (corrections, escalations, documentation gaps, audit findings). You can express this KPI as Cycle Time Improvement / Defect Increase.

This matters because faster is not better if defects rise. This ratio keeps leadership honest.

4. Alert-to-Action Conversion Rate

Percentage of AI-generated alerts that result in a confirmed issue, investigation, remediation, or control enhancement. This matters because AI intensifies monitoring. This KPI exposes whether you are drowning in noise or generating actionable intelligence.

5. Burnout Signal Composite

A quarterly composite score built from pulse-survey measures (fatigue, workload, autonomy), attrition in compliance roles, sick leave usage trends, and employee assistance program utilization patterns. This matters because compliance effectiveness depends on people. Burnout is a control failure risk.

These five metrics give the CCO and board a shared view of whether AI is improving the compliance function or simply accelerating it toward exhaustion.
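Each of the first four KPIs reduces to straightforward arithmetic over workflow data. A minimal sketch, using invented sample figures:

```python
# Illustrative arithmetic for the KPIs above; all inputs are made-up sample data.

def pct(part: int, whole: int) -> float:
    """Percentage, rounded to one decimal place."""
    return round(100.0 * part / whole, 1) if whole else 0.0

# 1. After-Hours Compliance Work Index:
#    logged actions outside standard hours / all logged actions
after_hours_index = pct(part=340, whole=2_000)

# 2. AI Rework Rate:
#    AI-assisted drafts needing material revision / all AI-assisted drafts
rework_rate = pct(part=18, whole=60)

# 3. Cycle Time Compression vs. Quality Defect Ratio:
#    cycle time improvement / defect increase (both as fractions)
cycle_time_improvement = 0.30   # cycles 30% faster
defect_increase = 0.12          # defects up 12%
compression_ratio = round(cycle_time_improvement / defect_increase, 2)

# 4. Alert-to-Action Conversion Rate:
#    alerts leading to a confirmed issue or action / all AI-generated alerts
alert_conversion = pct(part=45, whole=900)

print(after_hours_index, rework_rate, compression_ratio, alert_conversion)
```

None of these requires new tooling; the inputs already exist in case management logs, QA review records, and alert dispositions.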

How to Measure the Leading Indicators

How do you measure after-hours work, cycle time, quality defects, and burnout indicators in practice? Here is a measurement approach that is realistic and defensible.

After-Hours Work

  • Use system log data from the case management, GRC, and document management platforms to track timestamped activity.
  • Supplement with email and collaboration metadata to measure volume outside standard hours.
  • Report trends by team and workflow, not individuals. This is about operating model health, not surveillance.

Cycle Time

  • Establish “start” and “stop” definitions for each workflow:
    • Investigations: intake date to closure date
    • Due diligence: request date to clearance date
    • Policy updates: drafting start date to publication date
    • Regulatory change: trigger identification to implementation
  • Track AI-assisted versus non-AI-assisted cycle times to isolate the impact.
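The AI-assisted versus non-AI-assisted comparison can be sketched minimally, assuming each case record carries an intake date, a closure date, and an AI-assisted flag (all sample data below is invented):

```python
from datetime import date

# Hypothetical case records: (intake date, closure date, ai_assisted flag).
cases = [
    (date(2026, 1, 5), date(2026, 1, 19), True),
    (date(2026, 1, 6), date(2026, 1, 16), True),
    (date(2026, 1, 7), date(2026, 2, 4), False),
    (date(2026, 1, 8), date(2026, 1, 30), False),
]

def avg_cycle_days(records, ai_assisted: bool) -> float:
    """Average intake-to-closure days for one cohort (AI-assisted or not)."""
    durations = [(stop - start).days
                 for start, stop, flag in records if flag == ai_assisted]
    return sum(durations) / len(durations)

print(avg_cycle_days(cases, ai_assisted=True))   # 12.0
print(avg_cycle_days(cases, ai_assisted=False))  # 25.0
```

Comparing the two cohorts over time isolates AI's actual impact on cycle time, which can then be read alongside the defect trend.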

Quality Defects

  • Define defects as “items requiring material correction after initial completion,” including:
    • Incomplete documentation
    • Wrong risk rating or missing rationale
    • Incorrect regulatory mapping
    • Reopened cases due to insufficient analysis
    • Audit findings tied to workflow execution
  • Capture defects through QA sampling, supervisor review logs, audit results, and post-incident reviews.

Burnout Indicators

  • Run a quarterly pulse survey with 5–7 questions on workload, pace, clarity, and ability to disconnect.
  • Track voluntary attrition and vacancy duration for compliance roles.
  • Include aggregate HR indicators such as overtime trends or sick leave usage, where available.
  • Use a composite score and trend it. The trend line is what matters.
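A minimal sketch of the composite, assuming each indicator has already been normalized to a 0-100 scale; the weights and figures are illustrative, not recommended values:

```python
# Illustrative composite: indicators pre-normalized to 0-100 (higher = more
# risk); the weights are hypothetical and should be set by the program owner.
WEIGHTS = {
    "pulse_survey": 0.4,      # fatigue / workload / ability-to-disconnect score
    "attrition": 0.25,        # voluntary attrition in compliance roles
    "sick_leave": 0.2,        # sick leave usage trend
    "eap_utilization": 0.15,  # employee assistance program usage
}

def burnout_composite(indicators: dict) -> float:
    """Weighted average of normalized indicators."""
    return round(sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS), 1)

# Two quarters of invented sample data.
q1 = {"pulse_survey": 40, "attrition": 20, "sick_leave": 30, "eap_utilization": 10}
q2 = {"pulse_survey": 55, "attrition": 30, "sick_leave": 35, "eap_utilization": 20}

print(burnout_composite(q1), burnout_composite(q2))  # the trend is what matters
```

The absolute score is less important than its direction quarter over quarter, which is what the board report should show.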

The key is to build instrumentation without creating a culture of monitoring employees. Your goal is not to watch people. Your goal is to protect the control environment.

Adopt an Enterprise AI Practice Standard Now

For an innovation-forward company, the right move is not to slow down. The right move is to govern how you speed up. The call to action is simple: adopt an enterprise AI practice standard as management guidance, owned by Compliance, implemented across all GRC workflows, measured by the five work-intensification KPIs, and tested by internal audit and red teaming.

If you do that, you gain three things immediately:

1. A sustainable operating model

2. Defensible governance for regulators and boards

3. A compliance function that remains credible under pressure

AI can make compliance better. But only if the humans who run compliance can still breathe.

Categories
Daily Compliance News

Daily Compliance News: February 17, 2026, The All FT Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • A KPMG partner was fined for using AI to cheat on a test about AI. (FT)
  • An Indian billionaire and his company’s missing billions. (FT)
  • Rethinking Board pay in the UK. (FT)
  • Measurable gains from using AI are now seen. (FT)
Categories
AI Today in 5

AI Today in 5: February 17, 2026, The Measurable Gains Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Measurable gains are now being achieved with AI. (FT)
  2. The hidden cost of poor compliance conciliation. (FinTechGlobal)
  3. AI at Kraken Compliance. (Kraken Blog)
  4. Is a memory chip crisis coming? (Bloomberg)
  5. AI worries erase $1tn from Big Tech values. (PYMNTS)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Innovation in Compliance

Innovation in Compliance: Navigating AI: Governance, Risk with some Culture Thrown in with Matt Kunkel

Innovation spans many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox interviews Matt Kunkel, CEO and Co-Founder of LogicGate, about the company’s governance, risk, and compliance (GRC) platform and current market trends.

Matt recounts his path into regulatory risk and compliance work that led to founding LogicGate and launching its Risk Cloud platform in 2015. A major focus is AI governance. Tom and Matt explore how and why senior management is asking compliance teams to provide governance frameworks despite the absence of a single standard (e.g., NIST/ISO/SOC). Matt explains that organizations need scalable processes to triage and route large volumes of AI usage requests, apply guardrails based on data sensitivity and criticality, and avoid becoming a bottleneck to innovation. He emphasizes training and culture to address employee misuse, highlighting the risks of exposing proprietary data and the need to define what information is acceptable to input into AI models.

The discussion turns to LogicGate’s culture and how it has been sustained during rapid, organic growth (no acquisitions). Matt outlines LogicGate’s six values: Be as One, Embrace Your Curiosity, Empower Customers, Raise the Bar, Own It, and Do the Right Thing. For evaluating AI and modernizing compliance programs, he frames value in three outcomes: making money, reducing costs, or reducing risk, and describes LogicGate’s value realization framework that translates efficiency and ROI into business terms. He also describes Risk Cloud as an orchestration layer for compliance programs and anticipates more “intentional AI” and selective use of agentic capabilities rather than fully autonomous end-to-end program execution.

 

Key highlights:

  • From Consulting to GRC: Coding, Madoff Investigation, and Founding LogicGate
  • Why AI Is Supercharging the “G” in GRC
  • LogicGate’s Culture Playbook: Values That Scale with Hypergrowth
  • How to Evaluate AI Tools in Compliance: Proving Value, ROI, and “Intentional AI”
  • Cybersecurity in 2026: AI-Powered Social Engineering, Deepfakes, and Risk Mapping
  • What’s Next for GRC by 2030: Agents, Responsible AI, and Tech as the Glue

Resources:

Matt Kunkel on LinkedIn

LogicGate

Innovation in Compliance was recently ranked Number 4 in Risk Management by 1,000,000 Podcasts.