Categories
Blog

Embedded Explainability: Turning Principles into Proof

Embedded explainability is the design choice to build “the why” directly into a system as it operates, rather than bolting on an explanation after the fact. In practical terms, it means the model or decision engine is instrumented to surface the key factors that drove a specific output as the output is delivered. In a compliance, risk, or fraud context, this can include reason codes tied to specific data features, a clear confidence score, the policy or control implicated, and a short narrative that translates technical drivers into business language. The point is not to turn every decision into a science project; the point is to make explanations an always-on product requirement, so investigators, managers, and auditors can quickly understand what the system saw, why it escalated, and what evidence supports the action.
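As a minimal sketch of the idea, an instrumented decision engine can return its explanation in the same call as the decision itself. Everything below is illustrative: the feature names, reason code, policy reference, and thresholds are invented for the example, not drawn from any specific product.

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    """Explanation payload emitted with every decision (illustrative schema)."""
    reason_codes: list   # feature-level drivers, e.g. "R12: wire velocity above threshold"
    confidence: float    # confidence score for the output, 0.0 to 1.0
    policy_refs: list    # policies or controls implicated by the decision
    narrative: str       # short business-language summary of the drivers

def decide_with_explanation(features: dict):
    """Toy scoring rule that returns the decision AND its explanation together,
    so 'the why' is produced as the output is delivered, not reconstructed later."""
    flagged = features.get("wire_velocity", 0) > 10   # hypothetical threshold
    score = 0.9 if flagged else 0.2
    decision = "escalate" if flagged else "clear"
    expl = Explanation(
        reason_codes=["R12: wire velocity above threshold"] if flagged else [],
        confidence=score,
        policy_refs=["AML-POL-4.2"] if flagged else [],   # hypothetical control ID
        narrative=f"Scored {score:.2f}; routed as '{decision}'.",
    )
    return decision, expl
```

Because the explanation travels with every output, each decision logs its own audit evidence: an investigator sees the reason code and narrative at the moment of triage, and an auditor can later test whether similar inputs produced consistent explanations.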

Where this becomes powerful is in governance. Embedded explainability creates a durable audit trail and makes accountability real: you can test whether explanations are consistent over time, whether they drift, whether similarly situated cases are treated consistently, and whether the system is relying on inappropriate proxies. It also reduces the “black box” tax during exams and internal reviews because your documentation is generated continuously, decision by decision, rather than recreated under a deadline. Done well, embedded explainability supports model risk management, accelerates case resolution, and builds user trust because the system does not just tell you what to do. It shows its work in a way that is usable for first-line teams and defensible for second-line and regulators.

If you have been in a single AI governance meeting, you have heard the same reassuring words: transparency, fairness, accountability. They sound good. They also do not answer the one question your Audit Committee will ask you the minute something goes sideways: can you prove what happened, who approved it, and why the system did what it did?

That is the heart of embedded explainability for a GRC or compliance professional. It is not a debate about data science. It is about building a program that can withstand scrutiny. In a strong compliance program, “principles” are not controls. They are intentions. Regulators, prosecutors, and auditors do not award credit for intent. They want evidence of implementation and effectiveness. When you embed explainability, you are building evidence into the workflow itself, so the program produces audit-ready artifacts without heroics.

Think like an auditor, not like a vendor.

In many organizations, “explainability” is treated like a technical deliverable. Someone pulls a chart. Someone cites an algorithm. Everyone nods. Then internal audit asks a simple question: “Show me how this use case was approved, how risks were assessed, how testing was performed, and how you monitor it today.”

That is where compliance needs to reframe the conversation. For GRC, the most important explainability is process explainability:

  • Who approved the use case, and what decision impact does it have?
  • What risks were identified, and what mitigations were required?
  • What data and content sources were used, and how are they governed?
  • What testing was done, what thresholds were applied, and what failed?
  • Who monitors the system in production, and how are issues escalated?
  • How are changes controlled, logged, and reapproved?

If you can answer those questions with documentation you can pull on demand, you are not “talking about explainability.” You are demonstrating it.

The risk that hides in plain sight: language and cultural bias

Most compliance teams understand bias as a broad concept. The operational problem manifests in a narrower, more painful way: language and cultural bias within everyday compliance workflows. Consider the real-life places your organization may be using AI or analytics: hotline intake, investigations triage, monitoring and surveillance, third-party diligence, audit planning, policy interpretation, and case summarization. Now add the facts of corporate life: multilingual reporting, non-native English narratives, regional idioms, and different cultural communication styles.

Here is the compliance risk: the system may not be “biased” in a headline-grabbing way. It may be biased in a quiet, compounding way:

  • A hotline narrative written in non-native English is scored lower for credibility.
  • Regional phrasing triggers false positives in monitoring.
  • Direct communication styles are interpreted as “aggressive” or “retaliatory.”
  • Reports from certain geographies are deprioritized because of linguistic patterns.
  • Summaries strip context from culturally specific descriptions of harm.

This is why embedded explainability matters. If the system cannot tell you why it scored and routed a case the way it did, you will not find these problems until someone outside the company points them out to you.

A compliance-led lifecycle that makes explainability real

The practical move is to treat embedded explainability as a lifecycle requirement, not a go-live checkbox. You want stage gates with documented approvals and an evidence pack that travels with the use case from intake to monitoring. Think of it as the same discipline you already apply to third parties, controls testing, and investigations: define, document, test, approve, monitor, and improve.

A simple compliance-led lifecycle looks like this:

  1. Intake and approval: What is the use case, what is the decision impact, and who is accountable?
  2. Data and language risk assessment: What data is used, what languages and regions are in scope, and what bias risks exist?
  3. Build with traceability: Document the logic, rules, prompts, and human review points.
  4. Testing: Prove the system’s decisions can be reconstructed and that performance does not degrade across language groups.
  5. Deployment readiness: Confirm monitoring, access controls, logging, and escalation are active.
  6. Ongoing monitoring: Report drift, exceptions, overrides, and bias findings; reapprove material changes.
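The stage gates above can be made mechanical rather than aspirational. As an illustrative sketch (the stage names mirror the list; the class and its API are hypothetical), a use case cannot advance until the current gate has a recorded sign-off, and the approvals themselves accumulate as the evidence trail:

```python
# Stage gates in order, mirroring the compliance-led lifecycle above.
STAGES = ["intake", "risk_assessment", "build", "testing",
          "deployment_readiness", "monitoring"]

class UseCase:
    """Minimal stage-gate tracker: no stage advances without a logged approval."""

    def __init__(self, name: str):
        self.name = name
        self.stage_index = 0
        self.approvals = []   # evidence trail: (stage, approver) tuples

    def current_stage(self) -> str:
        return STAGES[self.stage_index]

    def approve_stage(self, approver: str) -> None:
        """Record sign-off for the current stage, then advance to the next gate.
        The final stage (monitoring) is ongoing, so it never advances past itself."""
        self.approvals.append((self.current_stage(), approver))
        if self.stage_index < len(STAGES) - 1:
            self.stage_index += 1
```

The design choice worth noting is that approval and advancement are the same operation: there is no code path that moves a use case forward without leaving a documented approval behind it.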

This is the compliance function earning its keep: not by arguing about definitions, but by building a governance machine that produces defensible evidence.

The minimum evidence pack: what you should be able to pull on demand

If you want to operationalize embedded explainability, standardize the artifacts. Do not let every team reinvent documentation. Your minimum evidence pack should be consistent across machine learning models, rules-based analytics, LLM workflows, and decision engines.

At a minimum, you should be able to produce:

  • Use case charter: purpose, scope, decision impact, owner, risk tier, approvals;
  • Data and language risk assessment: sources, language coverage, cultural risk factors, mitigations;
  • System specification: what it is, how it works, where humans intervene;
  • Testing artifacts: bias test plan, scenario tests, results, remediation notes;
  • Explainability checklist: proof you can reconstruct inputs, steps, outputs, and rationale;
  • Deployment approval record: stage-gate sign-offs and dates;
  • Monitoring and drift reports: trends, exceptions, and escalation notes;
  • Incident and escalation log: root cause, corrective actions, closure dates; and
  • Change management log: what changed, materiality, retesting, reapproval.
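Because the artifact list is standardized, the completeness check can be standardized too. The sketch below is hypothetical (the artifact keys simply mirror the list above): it reports which evidence-pack items are still missing for a use case, which is exactly the question an auditor's pull-on-demand request will ask.

```python
# Required artifacts for the minimum evidence pack (mirrors the list above).
REQUIRED_ARTIFACTS = {
    "use_case_charter",
    "data_and_language_risk_assessment",
    "system_specification",
    "testing_artifacts",
    "explainability_checklist",
    "deployment_approval_record",
    "monitoring_and_drift_reports",
    "incident_and_escalation_log",
    "change_management_log",
}

def missing_artifacts(filed: set) -> set:
    """Return the evidence-pack artifacts not yet on file for a use case."""
    return REQUIRED_ARTIFACTS - filed

def audit_ready(filed: set) -> bool:
    """A use case is pull-on-demand ready only when nothing is missing."""
    return not missing_artifacts(filed)
```

Run against each use case's document inventory, a check like this turns "do we have the pack?" from a scramble before an exam into a dashboard metric.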

If you have this, you have something most organizations still lack: a system of record for AI governance that internal and external auditors can actually test.

The Bottom Line

Embedded explainability is how you turn AI governance from a values statement into a control environment. It is how you protect innovation by making it defensible. If your program can reconstruct decisions, show approvals, demonstrate testing, and document monitoring, you are not hoping you are compliant. You are ready to prove it. 

Categories
Innovation in Compliance

Innovation in Compliance: Navigating AI: Governance, Risk with some Culture Thrown in with Matt Kunkel

Innovation spans many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox interviews Matt Kunkel, CEO and Co-Founder at LogicGate, about the company’s governance, risk, and compliance (GRC) platform and current market trends.

Matt recounts his path into regulatory risk and compliance work that led to founding LogicGate and launching its Risk Cloud platform in 2015. A major focus is AI governance. Tom and Matt explore how and why senior management is asking compliance teams to provide governance frameworks despite the absence of a single standard (e.g., NIST, ISO, SOC). Matt explains that organizations need scalable processes to triage and route large volumes of AI usage requests, apply guardrails based on data sensitivity and criticality, and avoid becoming a bottleneck to innovation. He emphasizes training and culture to address employee misuse, highlighting the risks of exposing proprietary data and the need to define what information is acceptable to input into AI models.

The discussion turns to LogicGate’s culture and how it has been sustained during rapid, organic growth (no acquisitions). Matt outlines LogicGate’s six values: Be as One, Embrace Your Curiosity, Empower Customers, Raise the Bar, Own It, and Do the Right Thing. For evaluating AI and modernizing compliance programs, he frames value in three outcomes (making money, reducing costs, or reducing risk) and describes LogicGate’s value realization framework, which translates efficiency and ROI into business terms. He also describes Risk Cloud as an orchestration layer for compliance programs and anticipates more “intentional AI” and selective use of agentic capabilities rather than fully autonomous end-to-end program execution.

 

Key highlights:

  • From Consulting to GRC: Coding, Madoff Investigation, and Founding LogicGate
  • Why AI Is Supercharging the “G” in GRC
  • LogicGate’s Culture Playbook: Values That Scale with Hypergrowth
  • How to Evaluate AI Tools in Compliance: Proving Value, ROI, and “Intentional AI”
  • Cybersecurity in 2026: AI-Powered Social Engineering, Deepfakes, and Risk Mapping
  • What’s Next for GRC by 2030: Agents, Responsible AI, and Tech as the Glue

Resources:

Matt Kunkel on LinkedIn

LogicGate

Innovation in Compliance was recently ranked Number 4 in Risk Management by 1,000,000 Podcasts.

Categories
Innovation in Compliance

Innovation in Compliance – Proactive Compliance Frameworks for Evolving AI Regulations with Yakir Golan

Innovation occurs across many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox welcomes Yakir Golan, CEO & Co-founder at Kovrr, who shares his professional journey from the Israeli intelligence community to his current role at Kovrr.

With a rich background in Israel’s intelligence community and significant experience with cybersecurity vendors, Golan champions integrating frameworks with analytics to effectively assess and navigate risks, emphasizing governance as a vital component for sustained innovation. He advocates proactive measures to address AI-enabled insider threats, urging businesses not to wait for perfect regulatory clarity amid the fast-paced evolution of AI technologies. Golan’s holistic approach to compliance transcends mere regulatory adherence, focusing on business-driven proficiency in cybersecurity and AI to meet the dynamic demands of the business landscape.

 

Key highlights:

  • Financial Models for AI Risk Governance
  • Enhancing AI Governance with Adaptive Frameworks
  • Empowering Innovation Through Strategic Governance and Compliance
  • Unified Approach: AI-Cybersecurity in Enterprise Risk Management

Resources:

Yakir Golan on LinkedIn

Kovrr 


Categories
AI Today in 5

AI Today in 5: January 29, 2026, The AI Has Competitive Advantage Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Turning AI governance into a competitive advantage. (FinTechGlobal)
  2. AI is rewriting compliance. (BleepingComputer)
  3. Decoding the human genome with AI. (NYT)
  4. Who is training AI to do your job? (FT)
  5. One way to keep AI out of the classroom. (NPR)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
AI Today in 5

AI Today in 5: January 23, 2026, The Greatest AI Challenge Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  • South Korea adds new AI regulations. (Reuters)
  • Vietnam updates IP & AI law. (Rouse)
  • AI’s greatest challenge is managerial, not technical. (Bloomberg)
  • With AI, compliance data is more valuable than ever. (FinTechGlobal)
  • AI assists retailers in stopping return fraud. (CBS News)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Innovation in Compliance

Innovation in Compliance: Transforming from Hierarchy to High Performance: Governance and AI in 2026

Innovation occurs across many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox welcomes guests Bill Sanders, Olivia Storelli, and Andrew Stevens to explore the theme ‘From Hierarchy to High Performance’ in the context of AI and corporate governance.

They take a deep dive into the critical role of AI governance, highlighting its importance for accountability and competitive advantage, and stress the need for decentralized, automated governance to ensure fair and unbiased outcomes. The discussion also covers the interplay between leadership, accountability, and culture in achieving AI success, and outlines the three primary functions of AI: customer relationships, operations, and business models. The episode emphasizes the need for execution over ambition for AI value creation and addresses how legal and compliance professionals can keep pace with the rapidly changing business environment through AI.

Key highlights:

  • The Importance of AI Governance
  • Distributed Governance and Compliance
  • AI’s Impact on Business Models and Operations
  • Decentralization and High Performance

Resources:

Download the AI Executive Whitepaper:

Text the word PLAYBOOK to 415.960.1161. 

or

Visit https://whitepaper.download/

  • Websites

https://roeblingstrauss.com/

https://www.sakurasky.com/

  • LinkedIn

LinkedIn: Bill Sanders

LinkedIn: Olivia Storelli

LinkedIn: Andrew Stevens

Innovation in Compliance was recently ranked Number 4 in Risk Management by 1,000,000 Podcasts.

Categories
Compliance and AI

Compliance and AI – Transforming Cloud Investments: The Role of AI Governance

What is the intersection of AI and compliance? What about machine learning? Are you using ChatGPT? These questions are just three of the many we will explore in this cutting-edge podcast series, Compliance and AI, hosted by Tom Fox, the award-winning Voice of Compliance. Today, Tom looks at AI and governance with three guests: Bill Sanders, Olivia Storelli, and Andrew Stevens.

Bill Sanders, Olivia Storelli, and Andrew Stevens are leading voices in the discourse on AI governance and guardrails, each bringing a unique perspective. Bill, a leader in brand management and consulting, views AI governance as essential for leveraging AI’s potential, emphasizing the need for decentralized decision-making and strategic oversight to ensure safety and strategic foresight. Olivia, CEO of Sakura Sky, underscores the importance of aligning strategy with practical technology execution, advocating for governance as a means to achieve rapid value while maintaining safety and innovation. Andrew, an expert in cloud technology, highlights the need for governance to manage AI’s risks and liabilities, calling for executive leadership to define permissible data use and decision-making to foster a robust, accountable AI implementation. Together, they stress the importance of clear guidelines, organizational readiness, and leadership involvement in navigating the complexities of AI adoption and ensuring its safe and effective integration into business operations.

Key highlights:

  • AI governance is crucial for safe and efficient deployment of artificial intelligence systems in organizations.
  • Collaboration and a mindset shift towards compliance professionals as enablers are essential for safe AI adoption.
  • AI compliance impacts trust, fairness, and security within organizations.
  • Leadership, accountability, and culture are key to success in AI projects.
  • A phased approach with executive sponsorship is crucial for implementing the AI roadmap.

Resources:

Download the AI Executive Whitepaper:

Text the word PLAYBOOK to 415.960.1161. 

or

Visit https://whitepaper.download/

  • Websites

https://roeblingstrauss.com/

https://www.sakurasky.com/

  • LinkedIn

LinkedIn: Bill Sanders

LinkedIn: Olivia Storelli

LinkedIn: Andrew Stevens


Categories
Innovation in Compliance

Innovation in Compliance: 10+1 Commandments: A Moral Code for AI Ethics in Business

Innovation comes in many forms, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom welcomes Cristina DiGiacomo, founder of 10P1 Inc.

Cristina has an extensive background in communications, business, and practical philosophy. Cristina introduces her ’10+1 Commandments,’ a set of ethical guidelines for human interaction with artificial intelligence. They discuss the compelling need to integrate these principles into business compliance and governance frameworks. The commandments aim to provide a high-level, universal, and perpetual moral code that addresses the risks and ethical considerations of AI in the corporate world. Cristina emphasizes the importance of maintaining ethical AI practices amidst the evolving regulatory landscape.

Key highlights:

  • Philosophy in Everyday Life
  • Ancient Wisdom and Modern Application
  • The 10+1 Commandments Explained
  • Applying the Commandments in Business
  • Governance and Ethical AI

Resources:

Cristina DiGiacomo on LinkedIn

Website-10+1 

Categories
FCPA Compliance Report

FCPA Compliance Report – Nicole Di Schino on Harnessing AI for Compliance: Governance, Risks, and Best Practices

Welcome to the award-winning FCPA Compliance Report, the longest-running podcast in compliance. In this episode, Tom welcomes Nicole Di Schino, Principal Compliance Services Consultant at Diligent’s Spark Compliance Group, to discuss how best to harness AI for your compliance regime through 2026 and beyond.

Nicole and Tom discuss the critical importance of AI governance, compliance, and modern GRC. They cover practical steps for developing comprehensive compliance programs, emphasizing the necessity for AI risk assessments, the establishment of AI governance committees, and the implementation of human oversight in AI processes. Nicole highlights the intrinsic risks of AI, including privacy concerns and AI bias, and shares her personal experiences with AI’s impact in educational settings. Tom underscores the role of compliance education, advocating for the broader view of compliance as an ambassadorial and academic function. This session also explores the integration of AI into compliance workflows and the essential role of board and committee oversight.

 

Resources:

Nicole Di Schino on LinkedIn

Diligent Website


Categories
AI Today in 5

AI Today in 5: December 4, 2025, The Microsoft Blips Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Does AI portend the end of the law/consulting firm pyramid? (FT)
  2. Strengthening AI strategies with proactive compliance. (WSJ)
  3. Microsoft stock dips on the news. (CNBC)
  4. Salesforce touts AI adoption. (Bloomberg)
  5. Strong AI governance can foster innovation. (Bloomberg)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.