Categories
Blog

AI Governance and Speak-Up Culture: The Earliest Warning System May Already Be in Your Workforce

There is a hard truth about AI governance that too many companies are still avoiding: the first people to spot an AI problem are usually not board members, not senior executives, and not even the governance committee. It is the employee using the tool, reviewing the output, dealing with the customer, watching the workflow break down, or seeing the machine produce something that feels off. That is why AI governance is not only about policies, models, controls, and oversight structures. It is also about culture. More specifically, it is about a culture of speaking up.

If employees see an AI tool making questionable recommendations, generating inaccurate summaries, mishandling sensitive information, producing biased outcomes, or being used beyond its approved purpose, do they know that this is a reportable issue? Do they know where to raise it? Do they believe someone will listen? Do they trust that raising a concern will help rather than harm their career? Those are not soft questions. They are governance questions.

In anti-corruption compliance, we learned long ago that hotlines, reporting channels, and anti-retaliation protections are not mere ethical ornaments. They are detection mechanisms. They are how organizations surface risks before they become scandals. AI governance now needs the same mindset. If your employees are your earliest warning system, then your speak-up culture may be one of your most important AI controls.

Why Employees See AI Failures First

AI rarely fails in the abstract. It fails in use. A board deck may describe a tool in elegant terms. A vendor demo may look polished. A pilot may be carefully supervised. But once a system enters daily operations, it interacts with real people, real data, real pressures, and real shortcuts. That is when the problems begin to show themselves.

An employee may notice that a tool is confidently wrong. A manager may realize that staff are over-relying on generated summaries without checking the source material. Someone in HR may see that a screening tool is producing odd results. A sales employee may notice that a customer-facing chatbot is inventing answers. A compliance analyst may find that an AI-assisted monitoring process is missing obvious red flags. A procurement professional may discover that a vendor quietly changed a feature set or data practice.

In each of those examples, the problem shows up at the point of use, not at the point of approval. That is why the old compliance lesson still applies: the people closest to the work are often closest to the risk. In AI governance, that means employees are often the first line of detection. But detection is useless if the culture tells them to keep their heads down.

The Governance Blind Spot

Many organizations are investing significant effort in AI principles, governance committees, acceptable-use policies, and risk classification. That is all important. But many of these programs have a blind spot. They are built as if AI risk will reveal itself only through formal testing, audit reviews, or leadership dashboards. It will not.

Some AI failures will surface through monitoring and controls. But many will first appear as employee discomfort, confusion, skepticism, or observation. Someone will notice that a tool is being used in a way that feels wrong. Someone will catch a factual error before it leaves the building. Someone will realize that human review is not actually happening. Someone will see mission creep. Someone will spot a gap between policy and practice.

If the governance model does not actively encourage employees to raise those concerns, the company has built an AI oversight program with one eye closed. That is a dangerous place to be because AI risk is often cumulative. A small issue ignored today becomes a larger issue tomorrow. An inaccurate output tolerated in a low-stakes setting becomes normalized in a higher-stakes one. A quietly expanded use case becomes a de facto business process. Silence is how minor flaws become systemic failures.

Speak-Up Culture as an AI Control

Let us be clear about terms. Speak-up culture is not simply a hotline number posted on the intranet. It is the set of signals an organization sends about whether employees are expected, supported, and protected when they raise concerns.

In the AI context, a healthy speak-up culture means employees understand that reporting concerns about AI outputs, use cases, data handling, or control failures is part of responsible business conduct. It means managers know that AI concerns are not “just tech issues” to be brushed aside. It means investigators and compliance teams are prepared to triage and assess AI-related reports intelligently. It means retaliation protections apply as much to someone challenging a machine-enabled workflow as they do to someone reporting bribery, harassment, or fraud.

This matters because AI can create a special kind of silence. Employees may hesitate to challenge a system that leadership has praised as innovative. They may worry that questioning the tool makes them sound resistant to change or insufficiently sophisticated. They may assume someone more senior has already validated the output. They may think, “Surely the machine knows better than I do.” That is exactly the kind of cultural dynamic compliance should distrust.

Machines do not deserve deference. Controls deserve scrutiny. A mature AI governance program, therefore, needs to treat employee reporting as a formal part of its control environment. Speak-up culture is not adjacent to AI governance. It is part of AI governance.

What CCOs Should Be Asking

If you are a Chief Compliance Officer, there are several questions you should be asking right now.

First, do employees understand that AI-related concerns are reportable? Many organizations have not made this explicit. Staff know they should report harassment, bribery, theft, and retaliation. They may not know whether to report unreliable AI output, a suspicious recommendation, a data input concern, or a business team using a tool outside its approved scope. If you have not told them, do not assume they know.

Second, are your reporting channels equipped to receive AI-related concerns? Hotline categories, case-intake forms, and triage protocols may need to be updated. If an employee reports that an AI tool is generating misleading outputs in a regulated workflow, who receives that report? Compliance? Legal? Security? IT? HR? Some combination? If ownership is unclear, reports will stall, and stalled reports teach employees not to bother.

Third, are managers trained to respond appropriately when AI concerns are raised informally? This is critical. Many concerns will not begin in a hotline. They will begin in a meeting, a hallway conversation, a team chat, or an email to a supervisor. If the manager shrugs, dismisses, or minimizes the issue, the detection system fails before it starts.

Fourth, are anti-retaliation protections being reinforced in the AI context? Employees who challenge AI use may be questioning a high-profile project, a popular vendor, or a senior executive’s initiative. That can create subtle pressure to stay quiet. Compliance should be ahead of that dynamic, not behind it.

Building an AI Speak-Up Framework

What does a practical approach look like?

The first step is to define what types of AI concerns employees should raise. Be concrete. Tell them to report suspected misuse of AI tools, outputs that appear inaccurate or biased, use of AI in sensitive decisions without proper review, input of restricted data into unapproved systems, unauthorized expansion of use cases, missing human oversight, and vendor or system changes that appear to alter risk.

The second step is to build AI examples into training and communication. Employees need realistic scenarios, not vague encouragement. Show them what an AI red flag looks like. Show them what “raising a hand” looks like. Show them where to go and what happens next.

The third step is to update the hotline and investigations protocols. Add intake categories if needed. Develop triage guidance. Decide when AI matters should be handled as compliance cases, operational incidents, model-risk issues, or cross-functional reviews. The goal is not bureaucracy. The goal is clarity.

The fourth step is to train managers as escalation points. In every effective compliance program, middle management is the translation layer between policy and daily operations. AI governance is no different. Managers need to know when a concern can be resolved locally, when it must be escalated, and when the pattern itself suggests a control problem.

The fifth step is to close the feedback loop. Employees are more likely to report concerns when they believe reporting leads to action. That does not mean revealing confidential case details. It does mean communicating that the company takes these issues seriously, investigates them, learns from them, and improves controls as needed. Silence from management breeds silence from employees.

What to Monitor in an AI Speak-Up Program

Here is where compliance can bring its trademark discipline. Track the volume and type of AI-related concerns. Look for concentration by business unit, geography, or tool. Monitor whether concerns are coming in through formal hotlines or informal channels. Review time to triage and time to resolution. Look for patterns involving data handling, output reliability, human review failures, or scope creep. Compare the reported concerns with the company’s list of approved use cases. If you see repeated confusion or repeated exceptions, that tells you something important about your governance design.
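The metrics above lend themselves to straightforward aggregation. As a purely illustrative sketch (the report fields, business units, tool names, and dates below are hypothetical, not drawn from any real program), a compliance analyst might compute concentration, channel mix, and time to triage like this:

```python
from collections import Counter
from datetime import date
from statistics import median

# Hypothetical AI-concern reports; all field names and values are illustrative.
reports = [
    {"unit": "Sales", "tool": "chatbot", "channel": "hotline",
     "received": date(2026, 1, 5), "triaged": date(2026, 1, 7)},
    {"unit": "Sales", "tool": "chatbot", "channel": "manager",
     "received": date(2026, 1, 12), "triaged": date(2026, 1, 20)},
    {"unit": "HR", "tool": "screening", "channel": "hotline",
     "received": date(2026, 2, 2), "triaged": date(2026, 2, 3)},
]

# Concentration by business unit and by tool surfaces hot spots.
by_unit = Counter(r["unit"] for r in reports)
by_tool = Counter(r["tool"] for r in reports)

# Channel mix: a heavy informal share may mean the hotline is not trusted
# for AI issues, or that AI categories are missing from intake forms.
informal_share = sum(r["channel"] != "hotline" for r in reports) / len(reports)

# Time to triage, in days; a rising median is an early governance signal.
median_triage_days = median((r["triaged"] - r["received"]).days for r in reports)

print(by_unit.most_common(1))   # → [('Sales', 2)]
print(round(informal_share, 2)) # → 0.33
print(median_triage_days)       # → 2
```

The point of the sketch is not the arithmetic but the discipline: once concerns are captured as structured records, concentration, channel mix, and cycle times fall out almost for free.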

Just as importantly, look for the absence of reporting. If your company has materially deployed AI tools and no employee has ever raised a concern, I would not automatically celebrate. I would ask whether employees know what to report, trust the channels, or believe leadership wants candor. In compliance, no reports can mean no problems. It can also mean no trust. Wise CCOs know the difference is everything.

Why This Is Good for Business

Some executives still hear “speak-up culture” and think of delay, friction, and complication. I hear something different. I hear early detection, faster correction, and better decision-making.

A workforce that feels empowered to raise AI-related concerns provides the company with a real-time sensing mechanism. It catches problems before they scale. It surfaces control failures before regulators, plaintiffs’ lawyers, journalists, or customers do. It gives management better information. It helps the board exercise real oversight. Most of all, it creates a culture where innovation is more sustainable because people are not afraid to challenge what does not look right. That is not anti-innovation. That is responsible innovation.

Compliance has always been at its best when it helps the business move fast without becoming reckless. Speak-up culture does exactly that. It does not tell employees to fear AI. It tells them to use judgment, raise concerns, and protect the enterprise when the technology does not behave as expected.

Final Thoughts

Every company deploying AI should ask itself a simple question: Who will notice first when something goes wrong? In many cases, the answer is your employees. The next question is even more important: have you built a culture where they will say something?

If the answer is uncertain, then your AI governance program has a serious weakness. You may have policies. You may have committees. You may have training modules and vendor reviews. But if employees do not feel empowered to raise a hand when they see a problem, then one of your most valuable detection controls is missing in action.

Categories
AI Today in 5

AI Today in 5: March 26, 2026, The 1 in 3 Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. Transforming compliance in finance. (FinTechGlobal)
  2. Embedding compliance in AI adoption. (SiliconRepublic)
  3. What the National AI Framework means for employers. (TheEmployerReport)
  4. 1 in 3 patients is using AI in their healthcare. (ModernHealthcare)
  5. Solving RegTech compliance. (VocalMedia)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
GSK in China: 13 Years Later

GSK In China: 13 Years Later – How “Good Fraud” Bypassed Audits, Compliance, and IT Controls

Thirteen years after the GSK China scandal exploded onto the global stage, its lessons remain as urgent as ever for compliance professionals and business leaders. In this podcast series, we revisit the case not simply as corporate history, but as a living cautionary tale about culture, incentives, third parties, investigations, and governance. Each episode explores what went wrong, why it went wrong, and how those failures still echo in today’s compliance and ethics landscape. Join me as we unpack the scandal and draw practical lessons for building stronger, more resilient organizations. In this powerful second episode, we unpack one of the most defining corporate scandals of recent decades: the 2013 GSK China bribery case. More than a headline-making event, it is a masterclass in how sophisticated “good fraud” can slip past audits, evade compliance safeguards, and outmaneuver IT controls.

The episode examines allegations that GSK faced a bribery and corruption scheme in China involving sums reported at up to $500 million, despite extensive compliance resources: more compliance officers in China than anywhere outside the US, up to 20 internal audits annually, and external auditing by PwC. Drawing on Reuters, the Financial Times, the Wall Street Journal, and analysis from The Texas Lawyer, it explains how bribery was described as rampant in China’s healthcare system and how payments were structured through direct cash and vouchers and, more commonly, indirect channels such as travel agencies, hospital “sponsorships,” and conference trips. It outlines how “good fraud” was enabled by collusion and flawless paperwork, how audit materiality thresholds miss fragmented FCPA-risk payments, and how China’s data-export restrictions limit oversight, including a WSJ-reported Botox example in which managers directed staff to use personal email to coordinate rewards for prescriptions. It concludes with compliance program directives emphasizing IT-compliance coordination, data mapping, enforceable policies, employee reporting, and stress testing.

Key highlights:

  • How Bribes Were Paid
  • Good Fraud and Audit Failure
  • Materiality Trap and Fragmentation
  • Data Blockade and External Audits
  • Five Compliance Fixes

Resources:

GSK in China: A Game Changer for Compliance on Amazon.com

GSK in China: Anti-Bribery Enforcement Goes Global on Amazon.com

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Ed. Note: Notebook LM created this episode, including the voices of the hosts, Timothy and Fiona, based upon text written by Tom Fox.

Categories
Daily Compliance News

Daily Compliance News: March 26, 2026, The Mind Blowing Corruption Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you four compliance-related stories to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network.

Top stories include:

  • The Trump Administration is allowing COI-related corruption to continue. (CNBC)
  • Another day, another $1.5bn in insider trading. (TheHill)
  • Marco Rubio testifies in corruption trial. (NYT)
  • The US ban on Anthropic ‘looks like punishment’. (WSJ)
Categories
Hill Country Hustlers

Hill Country Hustlers: Mason County Chamber Momentum: Events, Small Business Growth & Putting Mason on the Digital Map

On Hill Country Hustlers, host Zachary Green interviews Taylor Krull, Executive Director of the Mason County Chamber of Commerce since July 2025, about her background, her move from the Midwest to Texas, and why Mason’s people and tight-knit Hill Country culture drew her in.

Taylor shares how she brought stronger organization and processes to the Chamber with volunteer help, expanded member support through social media marketing and on-site video creation, and built regional collaboration with nearby chambers. She highlights upcoming events including the Mason Arts & Wine Festival (first weekend of April), a Mason–Brady joint mixer at Peters Prairie (April 9), a May mixer (May 7), quarterly Ladies Night Out (next May 15), June’s first Hotdog & Hot Rod Night with a car show and street dance, and July’s Roundup Weekend (July 11) with added festivities, plus ways Chamber membership helps businesses with visibility and promotion.

Key highlights:

  • How Mason Became Home
  • Why Mason Stood Out & Career Before the Chamber
  • Launching New Chamber Events & Hitting the Ground Running
  • Social Media for Small Towns
  • Hill Country Unity and Flood Response
  • Spring & Summer Events Lineup
  • Hill Country Community Spirit & Why Mason Feels Unique

Resources:

Follow Mason County Chamber of Commerce on:

Website

Facebook

Instagram

Categories
Blog

The “Day Two” Problem of AI Governance: What CCOs Must Monitor After the Launch

A scene is playing out in companies across the globe right now. Innovation teams are moving fast. Procurement is signing contracts. Business units are experimenting with copilots, workflow agents, and internal knowledge tools. Marketing is testing generative content. HR is evaluating AI for talent processes. Finance wants forecasting help. Security is watching from the corner. Legal is asking pointed questions. Compliance is handed the bill for governance after the train has already left the station. But the reality is that AI oversight is a board governance issue.

The problem is not that companies are moving too slowly on AI. In many organizations, the opposite is true. AI strategy is moving faster than the governance structure designed to oversee it. When that happens, the gap creates risk in ways boards understand very well: unmanaged decision-making, unclear accountability, inconsistent controls, fragmented reporting, and blind spots around operational resilience, ethics, and trust.

If you are a Chief Compliance Officer (CCO), this is your moment. Not to say no to AI. Not to become the Department of Technological Misery. But to help the board and senior leadership understand that AI governance is about capturing upside without swallowing avoidable downside. That is the central lesson. Strategy without governance is aspiration. Strategy with governance is a business discipline.

Why This Is a Board Issue

Boards are not expected to code models, evaluate vector databases, or decide which prompt library a business unit should use. They are expected to oversee risk, culture, controls, and management accountability. AI now sits squarely in that lane.

Once AI touches business processes, it can affect decision rights, data usage, customer interactions, employee treatment, financial reporting inputs, records management, and reputation. That means the board does not need to manage the machinery, but it must ensure a management system is in place for it.

This is where compliance can bring real value. Ethisphere’s latest work on the Ethics Premium makes a useful point for governance professionals: leading programs improve board reporting practices, including more frequent meetings with directors to ensure they receive the information needed for effective oversight, and they are also preparing documentation for AI-driven assistance so employees can find answers when they need them. In other words, mature governance is not static. It evolves as technology evolves.

That same report also reminds us that strong ethics and compliance systems are associated with higher returns, less downside, and faster recoveries, which is exactly the language boards understand when evaluating strategic risk and resilience.

So let us translate that lesson into the AI context. The board’s task is not to bless every shiny new tool. Its task is to ensure management has built an operating system for responsible AI use.

What a Board Should Do

The first thing a board should do is insist on a clear AI governance architecture. That means management should be able to answer basic questions cleanly and quickly. Who owns the enterprise AI strategy? Who approves high-risk use cases? Who validates controls before deployment? Who monitors incidents, exceptions, and drift? Who reports to the board? If five executives give five different answers, you do not have governance. You have theater.

Second, the board should require a risk-based inventory of AI use cases. I am continually amazed at how many organizations start with policy language before they know where AI is actually being used. That is backwards. Boards should ask for a current inventory of internal, customer-facing, employee-facing, and vendor-enabled AI use cases. The inventory should distinguish between low-risk productivity tools and higher-risk uses involving sensitive data, regulated processes, legal judgments, employment decisions, or customer outcomes. If management cannot map the use cases, it cannot credibly manage the risk.
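To make the tiering idea concrete, here is a minimal, purely hypothetical sketch of a risk-tiered inventory. The tier names, signal list, and use cases are illustrative assumptions for this post, not a standard taxonomy or any company's actual classification:

```python
# Illustrative risk signals that push a use case into the highest tier.
# These names are assumptions, not a regulatory or industry standard.
HIGH_RISK_SIGNALS = {"sensitive_data", "regulated_process",
                     "employment_decision", "customer_outcome"}

def classify(use_case: dict) -> str:
    """Assign a coarse risk tier based on the signals a use case carries."""
    signals = set(use_case.get("signals", []))
    if signals & HIGH_RISK_SIGNALS:
        return "high"       # sensitive data, regulated, or consequential
    if use_case.get("customer_facing"):
        return "medium"     # external exposure without high-risk signals
    return "low"            # internal productivity use

# A toy inventory; real ones would also record owner, vendor, and approvals.
inventory = [
    {"name": "meeting summarizer", "customer_facing": False, "signals": []},
    {"name": "support chatbot", "customer_facing": True, "signals": []},
    {"name": "resume screener", "customer_facing": False,
     "signals": ["employment_decision"]},
]

tiers = {u["name"]: classify(u) for u in inventory}
print(tiers)
# → {'meeting summarizer': 'low', 'support chatbot': 'medium',
#    'resume screener': 'high'}
```

Even a toy like this makes the board's ask testable: if management cannot produce the inventory that such a classification would run over, the risk is not being managed.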

Third, the board should demand decision-use discipline. Not every AI output deserves the same level of trust. Some uses are advisory. Some are operational. Some may influence consequential business judgments. Boards should ask management where AI outputs are being relied upon, who reviews them, and what level of human oversight is required before action is taken. The issue is not whether humans are “in the loop” as a slogan. The issue is whether human review is meaningful, documented, and tied to the use case’s risk.

Fourth, the board should require reporting that is intelligible, not merely technical. Board oversight fails when management delivers either fluff or jargon. Directors need reporting that answers practical questions: What are our top AI use cases? Which ones are classified as high risk? What incidents or near misses have occurred? What controls were tested? What third parties are material to our AI stack? What changed this quarter? What needs escalation? Good board reporting turns AI from mystique into management.

That point is entirely consistent with what Ethisphere identifies in leading ethics and compliance programs: improved board reporting practices that provide directors with the information they need for effective oversight.

Where Compliance Officers Can Help the Board Most

This is where the CCO earns their seat at the table.

First, the compliance function can help management create the classification framework. Compliance professionals know how to tier risk, define escalation paths, and build governance around business reality. You have been doing it for years with third parties, gifts and entertainment, investigations, and training. AI is a new technology, but the governance muscle memory is familiar.

Second, compliance can help build the policy-to-practice bridge. A glossy AI principles statement is not governance. Governance is what happens when procurement uses approved clauses, HR knows what tools it can use, managers understand escalation triggers, training is tailored to real workflows, and documentation supports decision-making. Ethisphere’s report notes that best-in-class programs are investing in clear, compelling documentation and training approaches designed for actual employee use, not simply for formal compliance completion. That is precisely the model AI governance needs.

Third, compliance can help the board by translating operational signals into governance signals. A rejected deployment, a data-permission problem, a hallucinated output in a sensitive workflow, a vendor change notice, a policy exception, or a spike in employee questions may each seem isolated. They are not. They are governance indicators. The CCO can aggregate them into trend lines that the board can actually use.

Fourth, compliance can help define the cadence and content of board reporting. Directors do not need every technical detail. They do need a disciplined dashboard and escalation protocol. Compliance is often the right function to help standardize that process, because it lives at the intersection of risk, policy, training, speak-up culture, investigations, and controls.

The Operational Reality Boards Must Understand

One reason AI governance lags strategy is that AI adoption is not happening in one place. It is happening everywhere. That decentralization is what makes governance hard. The legal team may be reviewing one contract while a business leader is piloting another tool within budget. An employee may paste sensitive information into a system that was never intended to accept it. A vendor may quietly add AI functionality to an existing platform. A manager may begin relying on generated summaries as if they are verified facts. None of this requires malicious intent. It only requires speed, convenience, and a little ambiguity. Corporate history teaches that those ingredients are often enough.

Boards, therefore, need to understand a simple truth: AI risk is not only model risk. It is workflow risk. It is data risk. It is governance risk. It is cultural risk. And culture matters here. Ethisphere found that nearly every honoree equips managers with toolkits and talk tracks to discuss ethical dilemmas with their teams, and 51% require managers to do so. That should be a flashing neon sign for AI governance. If managers are not talking with employees about responsible use, escalation expectations, and when not to trust the machine, the company is relying on hope as a control. Hope is not a control. It is a prayer.

Final Thoughts

When AI strategy outruns governance, the problem is not innovation. The problem is unmanaged innovation. Boards should not respond by slamming on the brakes. They should respond by insisting on lanes, guardrails, dashboards, and accountability.

For compliance officers, the opportunity is enormous. You can help the board ask better questions. You can help management build a governance operating system. You can help the business adopt AI faster, smarter, and more defensibly.

That is the larger point. Compliance is not there to suffocate strategy. Compliance is there to make the strategy sustainable.

Here are the questions I would leave you with:

  • Does your board receive meaningful AI oversight reporting, or only periodic reassurance?
  • Can your company identify its highest-risk AI use cases today, not next quarter?
  • If a director asked tomorrow who owns AI governance end-to-end, would the answer be immediate and credible?

If not, your AI strategy may already be outrunning your governance.
Categories
AI Today in 5

AI Today in 5: March 25, 2026, The AI Risk Handbook Edition

Top AI stories include:

  1. Why your AI factory will fail without compliance and security. (Forbes)
  2. Amazon introduces agentic AI for healthcare. (AHA)
  3. Cisco announces security tools for AI agents. (Yahoo! Finance)
  4. New regulatory mandates for finance risk assessments. (FinTechGlobal)
  5. AI risk handbook for finance. (FinTechGlobal)

Categories
Compliance Into the Weeds

Compliance into the Weeds: Balt and TradeStation: Lessons for the Compliance Professional

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore it more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly look at the Declination awarded to Balt SAS and the OFAC enforcement action involving TradeStation. 

First, they review a Corporate Enforcement Policy declination for French medical-equipment company Balt SAS and its U.S. subsidiary after the company self-disclosed, cooperated, and remediated misconduct involving a U.S. subsidiary executive and a Belgian consultant who allegedly funneled about $600,000 in bribes to a French public hospital official using sham consulting agreements, invoices, and poor documentation. Balt disgorged about $1.21 million in profit on roughly $1.68 million in revenue and disclosed while its internal investigation was still ongoing, raising timing and high-margin red-flag issues.

Second, they cover OFAC’s $1.1 million settlement with TradeStation for accidentally disabling sanctions-screening controls for nearly a year, enabling hundreds of transactions from Iran, Syria, and Crimea; despite having layered tools on paper, IT changes and lapsed subscriptions undermined those controls, underscoring the need for ongoing monitoring, testing, and auditing.

Key highlights:

  • Balt FCPA Case
  • Disclosure Timing
  • Profit Margin Red Flags
  • Controls and France Angle
  • TradeStation Overview
  • How Screening Failed
  • Monitoring and Accountability
  • Costs and OFAC Lessons

Resources:

Matt in Radical Compliance

Tom in the FCPA Compliance Report

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred a Davey, a Communicator Award, and a W3 Award, all for podcast excellence.

Categories
Daily Compliance News

Daily Compliance News: March 25, 2026, The Deadlocked Jury Edition

Top stories include:

  • FirstEnergy bribery jury deadlocked. (Ohio Capital Journal)
  • Trades bet over $500MM on oil immediately before the Trump announcement. (FT)
  • EU nearing decision on Google and EU competition law violation. (WSJ)
  • Lawmakers move to ban sports betting in prediction markets. (WSJ)
Categories
Great Women in Compliance

Great Women in Compliance: Pretty in Pink: Inside the World of Women Fraudsters

Listen in to hear two renowned fraud experts, Kelly Paxton and Professor Kelly Richmond Pope, talk about pink-collar crime:

  • What it is
  • Why it happens
  • How to spot it
  • What we can do to stop it

With our guests, Sarah Hadden and Ellen M. Hunt explore whether there are differences between women and men fraudsters, what tools are available now or may become available to detect fraud before it happens, and whether biases or preconceived stereotypes hinder our ability to prevent and identify pink-collar crime.

Hear from two of the most experienced fraud professionals by tuning in on your favorite podcast platform, on Corporate Compliance Insights, or on the Compliance Podcast Network.