Categories
Blog

Data Governance, Privacy, and Model Integrity: The Control Foundation of AI Governance

Artificial intelligence may look like a technology story on the surface, but beneath that surface lies a governance reality every board and Chief Compliance Officer must confront. AI systems are only as sound as the data that feeds them, the controls that govern them, and the integrity of the outputs they generate. When data governance is weak, privacy obligations are poorly managed, or model integrity is assumed rather than tested, AI risk can move quickly from a technical flaw to enterprise exposure.

In the prior blog posts in this series, I examined the foundational questions of AI governance: board oversight and accountability, and the danger of strategy outrunning governance. Today, I want to turn to a third issue that sits at the core of every credible AI governance program: data governance, privacy, and model integrity.

This is where the AI conversation often moves from excitement to discipline. Companies may be eager to deploy tools, automate functions, and improve decision-making. But none of that matters if the underlying data is flawed, sensitive information is mishandled, or the model produces outputs that are unreliable, biased, or impossible to explain in context. The more powerful the technology, the more important the governance framework beneath it.

For boards and CCOs, this is not simply a technical control matter. It is a governance matter because failures in data integrity, privacy management, and model performance can have legal, regulatory, reputational, financial, and cultural consequences simultaneously.

AI Governance Begins with the Data

There is an old saying in technology: garbage in, garbage out. In the AI era, that phrase remains true, but it is no longer sufficient. In corporate governance terms, the problem is not merely bad data. It is data that is unknown, unauthorized, untraceable, biased, stale, overexposed, or used in ways the organization never properly approved. That is why data governance is the control foundation of AI governance.

Every AI use case depends on inputs. Those inputs may include structured internal data, public information, personal data, third-party data, proprietary records, historical documents, transactional records, prompts, or user interactions. If management does not understand where that data comes from, who has rights over it, whether it is accurate, how it is classified, and whether it is appropriate for the intended purpose, then the company is not governing AI. It is merely using it.

For compliance professionals, this point should feel familiar. Data governance is not new. What is new is the speed and scale at which AI can amplify data weaknesses. A spreadsheet error may affect one report. A flawed AI input may affect thousands of interactions, recommendations, or decisions before anyone notices.

Why Boards Should Care About Data Lineage

Boards do not need to become technical experts in model training or data architecture. But they do need to ask whether management understands the provenance and reliability of the information flowing into critical AI systems.

At a governance level, this is a question of data lineage. Can the company trace the source of the data, how it was curated, whether it was changed, and whether it was approved for the intended use? If a customer, regulator, employee, or auditor asks why the system reached a particular result, can management explain not only the output, but the data conditions that shaped it?
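To make the traceability question concrete, here is a minimal sketch of what a data lineage record might look like as a simple data structure. The field names, the approval check, and the example dataset are all illustrative assumptions, not a prescribed schema or any particular vendor's format.

```python
from dataclasses import dataclass, field

# Hypothetical lineage record -- field names are illustrative, not a standard schema.
@dataclass
class DataLineageRecord:
    source: str                # where the data originated
    curation_steps: list[str]  # how it was cleaned or transformed
    approved_uses: set[str] = field(default_factory=set)  # uses signed off by governance

    def is_approved_for(self, intended_use: str) -> bool:
        """Answer the governance question: was this data approved for this use?"""
        return intended_use in self.approved_uses

# Example: a dataset approved for service analytics but never for model training.
record = DataLineageRecord(
    source="internal CRM export, 2024-Q4",
    curation_steps=["deduplicated", "PII fields masked"],
    approved_uses={"service_analytics"},
)
print(record.is_approved_for("model_training"))  # False: use was never approved
```

Even a sketch this simple captures the governance point: if the record does not exist, management cannot answer the board's question about where the data came from or whether its use was approved.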

A board that does not ask these questions risks receiving polished dashboards and impressive demonstrations while missing the underlying weaknesses. AI systems can sound authoritative even when they are wrong. That is part of what makes governance here so essential. Confidence is not the same as integrity.

This is also where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) offers a helpful mindset. The ECCP pushes companies to think in terms of operational reality. Do policies work in practice? Are controls tested? Is the company learning from what goes wrong? The same discipline applies here. A company should not assume its data environment is fit for AI simply because it has data available. It should test, verify, document, and challenge that assumption.

Privacy Is Not an Adjacent Issue

Too many organizations still treat privacy as adjacent to AI governance rather than central to it. That is a mistake. AI systems often rely on data sets that include personal information, employee information, customer records, usage patterns, communications, or behavior-based inputs. Even when a company believes it has de-identified or anonymized data, there may still be re-identification risks, overcollection concerns, retention issues, or use limitations tied to law, contract, or internal policy.

For the board and the CCO, privacy should not be discussed as a compliance side note. It should be part of the approval and governance architecture from the outset. Before an AI use case is deployed, management should understand what personal data is involved, whether its use is permitted, what notices or disclosures apply, what access restrictions are required, how the data will be retained, and whether any vendor relationships create additional privacy exposure.

This is particularly important in generative AI environments, where employees may paste confidential, proprietary, or personal information into tools without fully appreciating the consequences. A privacy incident in the AI context may not begin with malicious intent. It may begin with convenience. That is why governance must focus not only on policy, but on system design, training, and usage constraints.

The CCO has a critical role here because privacy governance often intersects with policy management, employee conduct, training, investigations, and disciplinary response. If privacy is left solely to specialists without integration into the broader governance process, the organization risks building fragmented controls that do not hold together under pressure.

Model Integrity Is a Governance Question

Model integrity sounds like a technical term, but it is really a governance concept. It asks whether the system is performing in a manner consistent with its intended purpose, risk classification, and control expectations.

That means asking hard questions. Is the model accurate enough for the use case? Has it been validated before deployment? Are there known limitations? Does it perform differently across populations or scenarios? Can outputs be reviewed in a meaningful way by human decision-makers? Are there conditions under which the model should not be used? These are not engineering questions alone. They are governance questions because they determine whether management is relying on the system responsibly.
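The question of whether a model "performs differently across populations" can be reduced to a simple pre-deployment gate. The sketch below is a hedged illustration, assuming a per-subgroup accuracy threshold set by the use case's risk tier; the threshold value, subgroup names, and accuracy figures are invented for the example.

```python
# Hypothetical pre-deployment gate: block release if any population subgroup
# falls below the accuracy threshold for the use case's risk tier.
# Threshold and subgroup names are illustrative assumptions.

def validation_gate(accuracy_by_group: dict[str, float],
                    threshold: float) -> tuple[bool, list[str]]:
    """Return (approved, failing_groups). The model passes only if every
    subgroup meets the threshold -- an overall average can hide uneven performance."""
    failing = [group for group, acc in accuracy_by_group.items() if acc < threshold]
    return (len(failing) == 0, failing)

# Example: strong overall accuracy, but one subgroup falls short of the bar.
results = {"group_a": 0.94, "group_b": 0.91, "group_c": 0.82}
approved, failing = validation_gate(results, threshold=0.90)
print(approved, failing)  # False ['group_c']
```

The design choice matters: gating on the weakest subgroup rather than the average is what turns "the tool works most of the time" into a documented, defensible deployment decision.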

This is where NIST’s AI Risk Management Framework is especially valuable. NIST emphasizes that organizations should map, measure, and manage AI risks, including those related to validity, reliability, safety, security, resilience, explainability, and fairness. It is not enough to say that a tool works most of the time. The organization must understand where it may fail, how failure will be detected, and what safeguards are in place when it does.

ISO/IEC 42001 reinforces the same discipline through the lens of management systems. It requires structured attention to risk identification, control design, monitoring, documentation, and continual improvement. In other words, it treats model integrity not as a technical aspiration, but as an organizational responsibility. For boards, the takeaway is direct: if management cannot explain how model integrity is validated and maintained, then the board does not yet have assurance that AI is being governed effectively.

Third Parties Increase the Stakes

One of the more dangerous assumptions in AI governance is that outsourcing technology also outsources risk. It does not. Many organizations will deploy AI through third-party vendors, embedded tools, software platforms, or external service providers. That may be practical, even necessary. But it also means the company may be relying on data practices, training methods, model assumptions, or privacy safeguards it did not design and cannot fully see.

That is why data governance, privacy, and model integrity must extend to third-party risk management. Procurement cannot focus solely on functionality and price. Legal cannot focus solely on contract form. Compliance, privacy, security, and risk all need to understand what the vendor is doing, what data is being used, what rights the company has to inspect or question performance, and what happens when the vendor changes the model or its underlying terms.

This is not simply good vendor management. It is a governance necessity. A company remains accountable for business decisions made using third-party AI tools, especially when those tools affect customers, employees, compliance obligations, or regulated activities.

Documentation Is What Makes Governance Real

As with every major governance issue, documentation is what turns theory into evidence. If a company is serious about data governance, privacy, and model integrity, it should have records that show it. Those records may include data inventories, data classification standards, model validation summaries, privacy assessments, vendor due diligence files, testing results, approved use cases, control requirements, escalation logs, and remediation actions. Without this documentation, governance becomes anecdotal. With it, governance becomes reviewable, auditable, and improvable.
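As one hedged illustration of "reviewable, auditable" documentation, the sketch below models an append-only evidence log in which each governance action becomes a timestamped record pointing to its underlying artifact. The class name, fields, and reference codes are assumptions for the example, not a mandated format.

```python
import json
from datetime import datetime, timezone

# Hypothetical append-only evidence log: each governance action (validation,
# privacy review, vendor diligence) becomes a timestamped, reviewable record.
class GovernanceLog:
    def __init__(self) -> None:
        self._entries: list[dict] = []

    def record(self, use_case: str, control: str,
               outcome: str, evidence_ref: str) -> None:
        self._entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "use_case": use_case,
            "control": control,
            "outcome": outcome,
            "evidence_ref": evidence_ref,  # pointer to the underlying artifact
        })

    def export(self) -> str:
        """Serialize for auditors: governance becomes evidence, not anecdote."""
        return json.dumps(self._entries, indent=2)

log = GovernanceLog()
log.record("claims_triage_model", "pre-deployment validation",
           "passed", "VAL-2025-014")  # illustrative reference code
print("claims_triage_model" in log.export())  # True
```

The point is not the code but the discipline it represents: when a regulator asks "how do you know your program works?", the answer is a record, not a recollection.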

This is another place where the ECCP mindset is so useful. Prosecutors and regulators tend to ask the same core question in different ways: how do you know your program works? In the AI context, the answer cannot be “our vendor told us so” or “the business says the tool is helpful.” It must be grounded in evidence, testing, and management discipline.

What Boards and CCOs Should Be Pressing For

Boards should expect management to present AI use cases with enough clarity to answer four questions. What data is being used? What privacy implications attach to that use? How has model integrity been tested? What controls will remain in place after deployment?

CCOs should press equally hard from the management side. Is there a documented data governance process for AI? Are privacy reviews built into the intake and approval process? Are models validated according to risk? Are third-party tools subject to diligence and contract controls? Are incidents and anomalies logged and investigated? Are employees trained not to expose confidential or personal information through improper use? These are not burdensome questions. They are the practical questions that separate governed AI from hopeful AI.

Governance Requires Trustworthy Inputs and Defensible Outputs

In the end, AI governance depends on a simple but demanding truth: the organization must be able to trust what goes into the system and defend what comes out of it.

If the data is poorly governed, privacy rights are handled casually, or model integrity is assumed rather than demonstrated, then no amount of strategic enthusiasm will make the program safe. Boards will not have real oversight. CCOs will not have a defensible control environment. The company will merely have a faster way to create risk.

That is why data governance, privacy, and model integrity are not support issues in AI governance. They are central issues. They determine whether the enterprise is using AI with discipline or simply hoping for the best.

In the next article in this series, I will turn to the fourth governance challenge: ongoing monitoring, where many organizations discover that approving an AI use case is far easier than governing it after it goes live.

Categories
AI Today in 5

AI Today in 5: March 18, 2026, The AI Compliance in GCC Companies Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Data privacy and compliance in the age of AI. (BloombergLaw)
  2. Agentic AI in insurance claims. (FinTechGlobal)
  3. Leading through AI transformation in FinTech. (Forbes)
  4. AI compliance in GCC organizations. (BizToday)
  5. Does healthcare need specialized AI? (HBR)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Great Women in Compliance

Great Women in Compliance – Building Trust at the Speed of Technology

In this episode of Great Women in Compliance, co-host Dr. Hemma Lomax welcomes Shannon Ralich, Vice President of Compliance and Chief Privacy Officer at Machinify, to discuss the evolving landscape of data privacy, cybersecurity, and responsible AI.

Shannon shares her remarkable journey from a curious child taking apart electronics to a seasoned leader blending technology, law, and strategy. She offers insight into how curiosity and creativity can fuel governance excellence and explains what it means to design systems that anticipate risk and enable responsible innovation.

Together, Hemma and Shannon explore:

  • How privacy and cybersecurity intersect in today’s fast-evolving AI environment
  • The most pressing compliance challenges around data governance and global regulation
  • Lessons from the SolarWinds and Uber cases and the growing conversation around individual accountability for CISOs and compliance leaders
  • Practical steps for staying agile—through reliable news sources, cross-functional camaraderie, and professional networks
  • How to translate corporate compliance skills into meaningful community impact through nonprofit leadership and animal rescue advocacy

Shannon’s message is a powerful reminder that the best leaders bring their full selves to the work: technical precision, ethical clarity, and human compassion.

Biography:

Shannon Ralich is the Vice President of Compliance and Chief Privacy Officer at Machinify, a healthcare intelligence company applying AI to improve the efficiency and integrity of healthcare payments. With more than 20 years of experience across legal, compliance, privacy, and cybersecurity roles, Shannon specializes in aligning governance frameworks with business innovation.

She also serves on the Advisory Board of the Privacy Bar Section of the IAPP (International Association of Privacy Professionals). She is widely respected for her strategic, forward-thinking approach to data protection and responsible AI governance.

Beyond her professional expertise, Shannon is a passionate advocate for animal welfare. She sits on the Board of Directors for the Neuse River Golden Retriever Rescue, where she leverages her operational and technological skills to strengthen fundraising, improve systems, and support global rescue missions.

A lifelong learner and self-described “builder,” Shannon finds creativity and grounding through woodworking, outdoor adventures with her family, and contributing to causes that make both workplaces and communities more humane.

Note: The views expressed in this podcast are our own and do not represent the views of our employers, nor should they be taken as legal advice in any circumstances. 

Categories
AI Today in 5

AI Today in 5: October 13, 2025, The Bring Out Your Dead Issue Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest related to AI.

Top AI stories include:

  1. Can Agentic AI lead to security lapses? (Bobs Guide)
  2. Current AI security issues. (Bloomberg Law)
  3. Wolters Kluwer jumps into AI. (CCI)
  4. Agentic AI and data privacy issues. (Security Systems News)
  5. Deepfakes of deceased people. (NBC News)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
10 For 10

10 For 10: Top Compliance Stories For the Week Ending October 11, 2025

Welcome to 10 For 10, the podcast that brings you the week’s Top 10 compliance stories in one podcast each week. Tom Fox, the Voice of Compliance, presents the compliance stories you need to know to end your busy week. Sit back, and in 10 minutes, hear about the stories every compliance professional should be aware of from the prior week. Every Saturday, 10 For 10 highlights the most important news, insights, and analysis for the compliance professional, all curated by the Voice of Compliance, Tom Fox. Get your weekly fill of compliance stories with 10 For 10, a podcast produced by the Compliance Podcast Network.

Top weekly stories include:

  • E-sports and the data privacy maze. (Bloomberg Law)
  • Does Homan have to return the $50K? (NYT)
  • Star witness in Menendez trial to be sentenced. (NYT)
  • Halkbank faces criminal charges. (FT)
  • Saudi mega-construction project under ABC investigation. (Semafor)
  • PE and the Ethics of Drug Research. (NYT)
  • $100MM wine fraud in NYC. (Bloomberg)
  • Crony capitalism and corruption. (NPR)
  • Johnson and Johnson ordered to pay $966MM in talc case. (NYT)
  • Trump is considering pardons for Maxwell and Diddy. (Reuters)

You can check out the Daily Compliance News for four curated compliance and ethics-related stories each day, here.

Connect with Tom 

Instagram

Facebook

YouTube

Twitter

LinkedIn

You can purchase a copy of my new book, Upping Your Game, on Amazon.com.

Categories
Daily Compliance News

Daily Compliance News: October 10, 2025, The Happy Birthday Lou Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day, we consider four stories from the business world, including compliance, ethics, risk management, leadership, or general interest, relevant to the compliance professional.

Top stories include:

  • Corruption in Zimbabwe, I am shocked. (Bloomberg)
  • E-sports and the data privacy maze. (Bloomberg Law)
  • Does Homan have to return the $50K? (NYT)
  • Star witness in Menendez trial to be sentenced. (NYT)
Categories
Life with GDPR

Life With GDPR – Endpoint Security and Data Protection: Uncovering the Hidden Compliance Risks in Printer Security with Jim LaRoe

Jonathan Armstrong remains on assignment. Today, Tom Fox visits with fellow Texan Jim LaRoe, CEO of Symphion, to discuss data privacy, data protection, and compliance related to printer security in one of the most interesting podcasts Tom has done in some time.

Jim provides insight into how 20-30% of network endpoints are printers, and alarmingly, 99% of these are unprotected. Printers, despite being integral to business functions, are typically left vulnerable, making them prime targets for sophisticated phishing and cyber-attacks. Jim shares his journey from a trial lawyer to founding Symphion in 1999 and explains Symphion’s groundbreaking work in developing comprehensive security software for printers. Jim highlights the importance of a culture of compliance in managing endpoint security and the multifaceted challenges that come with securing printers. He emphasizes the collaborative effort needed among GRC compliance teams, IT, and supply chain departments to manage printer security effectively, and offers actionable steps for businesses to mitigate these risks.

Key takeaways:

  • The Hidden Risk of Printers
  • Understanding Endpoint Security
  • Challenges in Printer Security
  • Risk Management Strategies
  • Supply Chain Vulnerabilities

Resources:

Connect with Tom Fox

Connect with Jim LaRoe

Connect with Symphion

The award-winning Life with GDPR was recently honored as a Top Data Security Podcast. This was a sponsored podcast.

Categories
Daily Compliance News

Daily Compliance News: September 18, 2025, The Four Humours Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day, we consider four stories from the business world, including compliance, ethics, risk management, leadership, or general interest, relevant to the compliance professional.

Top stories include:

  • Muzzled Ben and Jerry’s founder resigns. (NYT)
  • Data Privacy Policies: To Be or Not to Be. (Reuters)
  • The 4 personality types. (BBC)
  • DOJ is about to cut loose the Binance monitor. (Bloomberg)
Categories
2 Gurus Talk Compliance

2 Gurus Talk Compliance – Episode 59 – The Foot Fetish Edition

What happens when two top compliance commentators get together? They talk compliance, of course. Join Tom Fox and Kristy Grant-Hart in 2 Gurus Talk Compliance as they discuss the latest compliance issues in this week’s episode!

 Stories this week include:

  • AI vs. AI: The Battle Over Fraudulent Receipts
  • Whistleblower Lessons: Nestlé CEO Dismissal Case
  • Forced Labor Legislation: UK and EU Developments
  • Boeing, DOJ, and the Role of Corporate Monitors
  • Workplace Activism: Managing Political Debate at Work
  • Data Privacy: French Fines Against Google and Shein
  • Corporate Wellness: Innovative Employee Perks
  • Children’s Data Privacy: Disney’s FTC Settlement
  • Florida Man Story: Compliance Lessons from the Absurd

Connect with the hosts:

Resources:

Prove Your Worth

Tom

Instagram

Facebook

YouTube

Twitter

Categories
AI Today in 5

AI Today in 5: August 11, 2025, The ACHILLES Project Episode

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

  • Will the ACHILLES Project simplify AI regs in the EU? (InnovationNewsNetwork)
  • AI – data privacy and governance in pharma. (EPR)
  • Compliance risks with AI integration. (InsuranceBusinessMag)
  • GenAI for tax and customs compliance. (IMF)
  • Will GenAI end ‘check the box’ compliance? (CCI)

For more information on the use of AI in compliance programs, see Tom Fox’s new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.