
The Starliner, Culture and Compliance: Leadership Lessons from a NASA Investigation Report

Corporate compliance professionals spend a lot of time talking about controls, training, third parties, and investigations. Yet the hard truth is that the most important control environment sits above all of that: leadership behavior and the culture it creates. That is why this NASA investigation report on the Boeing CST-100 Starliner Crewed Flight Test (CFT) is such a useful case study. It is a technical report, to be sure. But it is also a cultural, leadership, and governance report. NASA’s bottom line is unambiguous: technical excellence and safety require transparent communication and clear roles and responsibilities, not as slogans, but as operating requirements that must be institutionalized so safety is never compromised in pursuit of schedule or cost.

If you are a Chief Compliance Officer, General Counsel, or business leader, you should read this report the way you read an enforcement action. Not to gawk. Not to assign blame. But to harvest lessons for your own organization before you have your own high-visibility close call.

The incident(s) that led to the report

The CFT mission launched June 5, 2024, as a pivotal step toward certifying Starliner to transport astronauts to the International Space Station. It was planned as an 8-to-14-day mission but was extended to 93 days after significant propulsion system anomalies emerged. Ultimately, the Starliner capsule returned uncrewed, while astronauts Barry “Butch” Wilmore and Sunita “Suni” Williams returned aboard SpaceX’s Crew-9 Dragon in March 2025. In February 2025, NASA chartered a Program Investigation Team (PIT) to examine the technical, organizational, and cultural factors contributing to the anomalies.

The report describes four major hardware anomaly areas, including: Service Module RCS thruster fail-offs that temporarily caused a loss of six-degrees-of-freedom (6DOF) control during ISS rendezvous and required in-situ troubleshooting to recover enough capability to dock; a Crew Module thruster failure during descent that reduced fault tolerance; and helium manifold leaks, with seven of eight Service Module helium manifolds leaking during the mission. The PIT further determined that the 6DOF loss during rendezvous met the criteria for a Type A mishap (or at least a high-visibility close call), underscoring how close the program came to a very different ending.

That is the “what.” For compliance professionals, the “so what” is that NASA did not treat this as a purely engineering problem. It treated it as an integrated system failure, in which culture and leadership either reduce risk or magnify it.

Lesson 1: Decision authority is culture, not paperwork

One of the report’s clearest threads is that fragmented roles and responsibilities delayed decision-making and eroded confidence. In the compliance world, unclear decision rights become the breeding ground for “informal governance”: private conversations, end-runs around committees, and decisions that are never fully documented. Over time, that becomes a shadow-control environment that your policies cannot touch.

Compliance action steps

  • Define decision rights for the riskiest calls (high-risk third parties, market entry, major remediation, critical incidents).
  • Require a short, written record of: facts reviewed, options considered, dissent captured, decision made, and owner accountable.
  • Separate “recommendation authority” from “approval authority” so everyone knows where they sit.
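If your decision records live in a GRC system or even a shared repository, those requirements can be made concrete in a schema. Below is a minimal sketch of such a record; the `DecisionRecord` structure and its field names are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    """Minimal written record for a high-risk call (illustrative sketch)."""
    decision_id: str
    topic: str                     # e.g., a high-risk third-party approval
    facts_reviewed: list[str]      # the facts actually considered
    options_considered: list[str]  # the alternatives on the table
    dissent_captured: list[str]    # dissenting views, logged rather than lost
    decision_made: str             # the call itself
    accountable_owner: str         # one named owner, not a committee
    recommended_by: str            # recommendation authority
    approved_by: str               # approval authority, kept separate
    decided_on: date = field(default_factory=date.today)
```

Keeping `recommended_by` and `approved_by` as separate fields encodes the third action step directly into the record: everyone can see where they sit.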

Lesson 2: Transparency is a control, and selective data sharing destroys trust

The report explicitly flags that the lack of data access fueled concerns about selective information sharing. Interviewees described frustration that information could be filtered, selectively chosen, or sanitized, which eroded confidence in the process and people. It also notes reports of questions being labeled “too detailed” or “out of scope” without mechanisms to ensure concerns were addressed. That is the compliance danger zone. When teams believe the narrative matters more than the data, they stop escalating early. They start documenting defensively. They seek safety in silence.

Compliance action steps

  • Build “open data” expectations into your incident response and investigative protocols.
  • Create a defined pathway for technical or subject-matter dissent to be logged, reviewed, and dispositioned.
  • Treat meeting notes and decisions as governed records, not optional artifacts.

Lesson 3: Risk acceptance without rigor becomes “unexplained anomaly tolerance”

NASA calls out “anomaly resolution discipline” and warns that repeated acceptance of unexplained anomalies without root cause can lead to recurrence. That single lesson belongs on a poster in every compliance office. In corporate terms, “unexplained anomalies” are recurring control exceptions, repeat hotline themes, repeated third-party red flags, and audit findings that are “managed” rather than fixed. If leadership normalizes that pattern, it teaches the organization that closure is more important than correction.

Compliance action steps

  • Require root cause analysis for repeat issues, not just incident closure.
  • Set escalation thresholds for “repeat with no root cause” findings.
  • Audit remediation quality, not only remediation completion.

Lesson 4: Partnerships fail when “shared accountability” is not operationalized

The report emphasizes that shared accountability in the commercial model was inconsistently understood and applied. It also notes that historical relationships and private conversations outside formal forums created perceptions of blurred boundaries, favoritism, and lack of objectivity, whether or not those perceptions were accurate. Compliance teams have seen this movie. Think distributors, joint ventures, outsourced compliance support, and major technology partners. If accountability is shared in theory but siloed in practice, something will fall through the cracks. Usually, it falls right into your lap when regulators arrive.

Compliance action steps

  • Define “shared accountability” in contracts, governance charters, and escalation protocols.
  • Ensure independence and objectivity are protected by design, not by personality.
  • Create joint forums where data is shared broadly, dissent is recorded, and decisions are made openly.

Lesson 5: Burnout is a risk factor, and meeting chaos is a governance failure

The report’s recommendations recognize the operational reality: high-pressure environments can degrade decision quality. It calls for “pulse checks,” rotation of high-pressure responsibilities, contingency staffing, and time protection for deep work to proactively address burnout and improve decision-making under mission conditions. Compliance professionals should take that to heart. Crisis cadence is sometimes unavoidable. Permanent crisis cadence is a leadership choice. And it carries predictable consequences: shortcuts, missed details, weakened documentation, and poor judgment.

Compliance action steps

  • Build surge staffing plans for investigations and incident response.
  • Rotate incident commander roles when events extend beyond days.
  • Protect time for analysis, not just meetings and status updates.

Lesson 6: Accountability must be visible, not performative

NASA does not bury the human dimension. The report contains leadership recommendations to speak openly with the joint team about leadership accountability, including concurrence with the report and reclassification as a mishap, and to hold a leadership-led stand-down day focused on reflection, accountability concerns, and rebuilding trust. For corporate leaders, this is where trust is won or lost after a crisis. Employees can tolerate a hard outcome. They struggle to tolerate spin. If your organization communicates externally with confidence but internally with vagueness, your culture learns the wrong lesson: optics first, truth second.

Compliance action steps

  • After a major incident, publish an internal accountability and remediation plan with owners and timelines.
  • Provide regular updates on what has been completed, what is delayed, and why.
  • Make it safe for the workforce to ask questions in interactive forums, as NASA recommends.

Lesson 7: Trust repair requires a plan, not a pep talk

One of the most useful artifacts in the report is a sample Organizational Trust Plan. It sets a goal to rebuild trust by establishing clear expectations, open accountability, and shared commitment to safety and mission success. It includes objectives around transparent communication, acknowledging past challenges, reinforcing shared values, and structured engagement. It then lays out action steps: leadership engagement, facilitated sessions, outward expressions of accountability, teamwide rollout, training and coaching, and communication through a written plan and regular updates.

That is exactly the kind of operational discipline compliance leaders should bring to culture work. Culture does not change because someone gives a speech. Culture changes when the organization changes how it makes decisions, treats dissent, and follows through.

Five key takeaways for the compliance professional

  1. Clarify decision rights before the crisis. Ambiguity becomes politics under pressure.
  2. Make transparency non-negotiable. Perceived filtering of data destroys credibility.
  3. Do not normalize unexplained anomalies. Repeat issues without a root cause are future failures.
  4. Operationalize shared accountability with partners. Otherwise, it is a slogan.
  5. Rebuild trust with a written plan and visible accountability. Trust repair is a managed process.

In the end, the Starliner lesson for compliance is simple: controls matter, but culture decides whether controls work when it counts. If leadership cannot run disagreements well, cannot share data broadly, and cannot demonstrate accountability after the fact, the best-written compliance program in the world will fail the moment the pressure rises.


5 Strategic Board Playbooks for AI Risk (and a Bootcamp)

Artificial intelligence is no longer a future-state technology risk. It is a current-state governance issue. If AI is being deployed inside governance, risk, and compliance functions, then it is already shaping how your company detects misconduct, prioritizes investigations, manages regulatory obligations, and measures program effectiveness. That makes AI risk a board agenda item, not a management footnote.

In an innovation-forward organization, the goal is not to slow AI adoption. The goal is to professionalize it. Boards of Directors and Chief Compliance Officers (CCOs) should approach AI the way they approached cybersecurity a decade ago: move it from “interesting updates” to a structured reporting cadence with measurable controls, clear accountability, and director education that raises the collective literacy of the room.

Today, we consider five strategic playbooks designed for a Board of Directors and a CCO operating in an industry-agnostic environment: building AI in-house, without a model registry yet, and with a cross-functional AI governance committee chaired and owned by Compliance. The program must also work across multiple regulatory regimes, including the DOJ Evaluation of Corporate Compliance Programs (ECCP), the EU AI Act, and a growing patchwork of state laws. We end with a proposal for a Board of Directors boot camp on directors’ responsibilities to oversee AI in their organizations.

Playbook 1: Put AI Risk on the Calendar, Not on the Wish List

If AI risk is always “important,” it becomes perpetually postponed. The first play is procedural: create a standing quarterly agenda item with a consistent structure.

Quarterly board agenda structure (20–30 minutes):

  1. What changed since last quarter? New use cases, material model changes, new regulations, and major control exceptions.
  2. Full AI risk dashboard, with 8–10 board KPIs, trends, and thresholds.
  3. Top risks and mitigations, including three headline risks with actions, owners, and dates.
  4. Assurance and testing, including internal audit coverage, red-teaming results, and remediation progress.
  5. Decisions required, including policy approvals, risk appetite adjustments, and resourcing.

This cadence does two things. First, it forces repeatability. Second, it creates institutional memory. Boards govern better when they can compare quarter-over-quarter progress, not when they receive one-off deep dives that cannot be benchmarked.

Playbook 2: Build the AI Governance Operating Model Around Compliance Ownership

In this design, Compliance owns AI governance and its use throughout the organization, supported by a cross-functional AI governance committee. That is a strong model, but only if it is explicit about responsibilities.

Three lines of accountability:

  • Compliance (Owner): policy, risk framework, controls, training, and board reporting.
  • AI Governance Committee (Integrator): cross-functional prioritization, approvals, escalation, and issue resolution.
  • Build Teams (Operators): documentation, testing, change control, and implementation evidence.

Boards should ask one simple question each quarter: Who is accountable for AI governance, and how do we know it is working? If the answer is “everyone,” then the real answer is “no one.” This model makes the answer clear: Compliance owns it, and the committee operationalizes it.

Playbook 3: Create the AI Registry Before You Argue About Controls

If you have no model registry yet, that is the first operational gap to close, because you cannot govern what you cannot inventory. In a GRC context, this is not a “nice to have.” Without an inventory, you cannot prove coverage, you cannot scope an audit, you cannot define reporting, and you cannot explain to regulators how you know where AI is influencing decisions.

Minimum viable AI registry fields (start simple):

  • Use case name and business owner;
  • Purpose and decision impact (advisory vs. automated);
  • Data sources and data sensitivity classification;
  • Model type and version, with change log;
  • Key risks (bias, privacy, explainability, security, reliability);
  • Controls mapped to the risk (testing, monitoring, approvals);
  • Deployment status (pilot, production, retired); and
  • Incident history and open issues.
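To make the registry tangible, here is a minimal sketch of what one entry might look like if captured in code rather than a spreadsheet. The structure and field names are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class DecisionImpact(Enum):
    ADVISORY = "advisory"    # a human makes the final call
    AUTOMATED = "automated"  # the system acts without human review

class Status(Enum):
    PILOT = "pilot"
    PRODUCTION = "production"
    RETIRED = "retired"

@dataclass
class AIRegistryEntry:
    """One row in a minimum viable AI registry (illustrative sketch)."""
    use_case_name: str
    business_owner: str
    purpose: str
    decision_impact: DecisionImpact
    data_sources: list[str]
    data_sensitivity: str                    # e.g., "public", "confidential", "regulated"
    model_type: str
    model_version: str
    change_log: list[str] = field(default_factory=list)
    key_risks: list[str] = field(default_factory=list)        # bias, privacy, explainability...
    mapped_controls: list[str] = field(default_factory=list)  # testing, monitoring, approvals
    status: Status = Status.PILOT
    incident_history: list[str] = field(default_factory=list)
    open_issues: list[str] = field(default_factory=list)
```

Even a structure this simple is enough to compute coverage metrics, which is what the board actually needs.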

Boards do not need the registry details. They need the coverage metric and the assurance that the registry is complete enough to support governance.

Playbook 4: Align to the ECCP, EU AI Act, and State Laws Without Creating a Paper Program

Many organizations make a predictable mistake: they respond to multiple frameworks by producing multiple binders. That creates activity, not effectiveness. A better approach is to use a single control architecture to map to multiple requirements. The board should see one integrated story:

  • DOJ ECCP lens: effectiveness, testing, continuous improvement, accountability, and resourcing;
  • EU AI Act lens: risk classification, transparency, human oversight, quality management, and post-market monitoring; and
  • State law lens: privacy, consumer protection concepts, discrimination prohibitions, and notice requirements where applicable.

This mapping becomes powerful when it ties back to the board dashboard. The board is not there to read statutes. The board is there to govern outcomes.

Playbook 5: Use a Board Dashboard That Measures Coverage, Control Health, and Outcomes

A board dashboard should pair a short narrative with a small set of KPIs. Here is a board-level set of 8–10 KPIs designed for AI in governance, risk, and compliance functions, with an in-house build, internal audit, and red teaming for assurance.

Board AI Governance KPIs (8–10)

1. AI Inventory Coverage Rate

Percentage of AI use cases captured in the registry versus estimated footprint.

2. Risk Classification Completion Rate

Percentage of registered use cases risk-classified (EU AI Act-style tiers or internal tiers).

3. Pre-Deployment Review Pass Rate

Percentage of deployments that cleared required testing and approvals on first submission.

4. Model Change Control Compliance

Percentage of model changes executed with documented approvals, testing evidence, and rollback plans.

5. Explainability and Documentation Score

Percentage of in-scope use cases with complete documentation, rationale, and user guidance.

6. Monitoring Coverage

Percentage of production use cases with active monitoring for drift, anomalies, and performance degradation.

7. Issue Closure Velocity

Median days to close AI governance issues, by severity.

8. Internal Audit Coverage and Findings Trend

Number of audits completed, rating distribution, repeat findings, and remediation status.

9. Red Team Findings and Remediation Rate

Number of material vulnerabilities identified and percentage remediated within the target time.

10. Escalations and Incident Rate

Number of AI-related incidents or escalations (including near-misses), with severity and lessons learned.

These KPIs do not require vendor controls and align with an in-house build model. They also support both board oversight and compliance management.
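Several of these KPIs fall straight out of the registry once it exists. As a minimal sketch, assuming registry entries carry a hypothetical `risk_tier` field and the estimated AI footprint comes from a periodic survey, the first two KPIs might be computed like this:

```python
def inventory_coverage_rate(registered_count: int, estimated_footprint: int) -> float:
    """KPI 1: percentage of the estimated AI footprint captured in the registry."""
    if estimated_footprint == 0:
        return 0.0
    return 100.0 * registered_count / estimated_footprint

def classification_completion_rate(entries: list[dict]) -> float:
    """KPI 2: percentage of registered use cases carrying a risk tier."""
    if not entries:
        return 0.0
    classified = sum(1 for e in entries if e.get("risk_tier") is not None)
    return 100.0 * classified / len(entries)

# Example: 42 registered use cases against an estimated footprint of 60,
# of which 35 have been risk-classified.
entries = [{"risk_tier": "high"}] * 35 + [{"risk_tier": None}] * 7
print(inventory_coverage_rate(len(entries), 60))  # 70.0
print(classification_completion_rate(entries))    # 83.3...
```

The point is not the code. It is that every KPI on the dashboard should be mechanically derivable from governed data, not assembled by hand each quarter.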

AI Director Boot Camp

For a board with a medium level of AI literacy, a boot camp is the right move. Directors do not need to become engineers. They need a common vocabulary and a governance frame. The recommended design is a practical half-day session. It should include the following.

  1. AI in the company’s operating model: where it touches decisions, risk, and compliance outcomes.
  2. AI risk taxonomy: bias, privacy, security, explainability, reliability, third-party, and more.
  3. Regulatory landscape overview: the DOJ ECCP approach to effectiveness, the EU AI Act risk framing, and emerging state law themes.
  4. Governance model walkthrough, so the board understands the registry, risk classification, controls, monitoring, and escalation.
  5. Tabletop exercise: an AI incident in a GRC context, such as false negatives in monitoring or biased triage.
  6. Board oversight duties: which questions to ask quarterly, which thresholds trigger escalation, and similar insights.

The deliverable from the boot camp should be a one-page “Director AI Oversight Guide” with the KPIs, escalation triggers, and the quarterly agenda structure.

The Bottom Line for Boards and CCOs

This is the moment to treat AI risk like a board-governed discipline. The organizations that get it right will not be the ones with the longest AI policy. They will be the ones with the clearest operating model, the most reliable reporting cadence, and the strongest evidence of control effectiveness.

If Compliance owns AI governance, then Compliance must also own the proof. That proof is delivered through a registry, a quarterly board agenda item, a balanced KPI dashboard, and assurance through internal audit and red teaming. Add a director boot camp to create shared understanding, and you have the beginnings of a program that is innovation-forward and regulator-ready.

That is the strategic playbook: not fear, not hype, but governance.


When Your AI Chat Becomes Exhibit A: What United States v. Heppner Means for Compliance Professionals

There are court rulings that quietly shape doctrine, and others that detonate assumptions. The recent decision of Judge Jed Rakoff of the Southern District of New York in United States v. Heppner falls into the latter category. In a February 10, 2026, ruling, the Court made clear that neither the attorney-client privilege nor the work-product doctrine protected materials generated through a third-party generative AI platform. In plain English, what a defendant typed into a public AI system was discoverable.

For compliance professionals, this is not a narrow litigation footnote. It is a flashing red warning light. The era of casual AI experimentation inside corporations is over. Governance now must catch up with adoption. Today, we will consider the Court’s ruling and why it matters to a Chief Compliance Officer.

The Court’s Core Holding

The defendant in Heppner had used a third-party generative AI tool to draft and refine materials that were later shared with counsel. When prosecutors sought production, the defense argued that these materials were protected by privilege and work-product protections. The court disagreed.

The reasoning was straightforward and, frankly, predictable:

  • The AI tool was not an attorney.
  • The terms of service did not guarantee confidentiality and allowed retention or potential disclosure of inputs.
  • The materials were not prepared at the direction of counsel for the purpose of obtaining legal advice.
  • Simply sending AI-generated drafts to counsel after the fact did not, by itself, retroactively cloak them in privilege.

This is a fundamental point: privilege attaches to communications made in confidence for the purpose of seeking legal advice. When an employee enters sensitive facts into a third-party AI platform that disclaims confidentiality, that “confidence” is at best questionable. When those drafts are created independently of counsel’s direction, work-product arguments grow thin. The court did not create a new doctrine. It applied existing principles to new technology. That is precisely why this ruling is so important.

The Illusion of Confidentiality

Many business users treat AI platforms like a digital notebook. They assume that because the interaction occurs on a screen and feels private, it is private. That assumption is dangerous. Public and consumer AI platforms often reserve the right to store, analyze, or use inputs for service improvement. Even when vendors promise limited retention, those commitments may not meet the strict confidentiality standards necessary to preserve privilege. From a legal perspective, once you introduce a third party without adequate confidentiality protections, you risk waiving your rights.

The compliance lesson is blunt: generative AI is not your lawyer, and it is not your secure internal memo system. This is where governance intersects with culture. If employees are entering investigative summaries, draft responses to regulators, internal audit findings, or potential misconduct narratives into public AI tools, you are manufacturing discoverable evidence. That is not a hypothetical risk. That is now a litigated reality.

Why This Is a Board-Level Issue

The Department of Justice has made clear through the Evaluation of Corporate Compliance Programs (ECCP) that companies must identify and manage emerging risks. Artificial intelligence is no longer emerging. It is embedded in operations, marketing, finance, and legal workflows. The Heppner ruling converts AI usage from a technology convenience into a legal risk category. Boards of Directors should be asking:

  • Do we have an inventory of AI tools used across the enterprise?
  • Are employees permitted to input confidential, regulated, or legally sensitive information into third-party platforms?
  • Have we reviewed the vendor’s terms of service regarding confidentiality, retention, and data ownership?
  • Are legal and compliance functions involved in approving AI deployments?

If the answer to any of these questions is uncertain, there is a governance gap. AI governance is no longer solely about bias, explainability, or regulatory compliance. It is also about preserving privilege, managing litigation risk, and managing evidence.

Privilege Cannot Be Recreated After the Fact

One of the most significant aspects of the ruling is the rejection of “retroactive privilege.” Sending AI-generated content to counsel after it is created does not transform it into protected communication. This matters for compliance investigations. Consider the following scenario:

An internal report of potential misconduct surfaces. An employee uses a public AI tool to summarize the facts and generate possible legal arguments before reaching out to in-house counsel. That summary now exists outside any protected legal channel. The vendor may retain it. It may be discoverable.

By the time counsel becomes involved, the privilege damage may already be done. The message for compliance teams is clear: legal engagement must precede, or at least direct, sensitive analysis, not follow it.

Work Product Is Not a Safety Net

Some may argue that AI-assisted drafting in anticipation of litigation should fall under the work-product doctrine. The court in Heppner was not persuaded. Work-product protection generally applies to materials prepared by or for an attorney in anticipation of litigation. When individuals independently generate content using AI tools without counsel’s direction, that protection is far from guaranteed. Compliance professionals should not assume that labeling a document “prepared in anticipation of litigation” will insulate AI-generated material. Courts will look at substance over form.

Practical Steps for Compliance Leaders

This ruling demands an operational response from every CCO. Here are some steps every compliance program should consider.

1. Treat Third-Party AI as Non-Confidential by Default

Unless you have a contractual, enterprise-level arrangement with robust confidentiality provisions and clear data controls, assume that information entered into a third-party AI platform is not protected. This default posture should be reflected in policy language.

2. Update Acceptable Use Policies

Your code of conduct and IT policies should explicitly address the use of generative AI. Prohibit the entry of:

  • Privileged communications.
  • Investigation details.
  • Personally identifiable information.
  • Trade secrets.
  • Sensitive regulatory communications.

Policy must move from general warnings to specific examples.

3. Involve Legal in AI Governance

AI procurement should not be a purely IT function. Legal and compliance must review vendor terms, especially around:

  • Data retention.
  • Subprocessor use.
  • Confidentiality obligations.
  • Audit rights.
  • Breach notification.

If you cannot articulate how your AI vendor protects inputs, you cannot defend privilege claims.

4. Implement Training That Reflects Real Risk

Annual compliance training should now include explicit guidance on AI usage. Employees should understand that entering confidential information into public AI tools can waive privilege and render it discoverable. Training should include practical scenarios. The objective is behavioral change, not abstract awareness.

5. Establish Secure AI Environments for Legal Work

If your organization intends to use AI in legal or investigative contexts, consider enterprise solutions that:

  • Operate within your controlled environment.
  • Restrict data sharing.
  • Provide contractual confidentiality.
  • Maintain clear audit logs.

Even then, legal oversight is essential. Secure does not automatically mean privileged.

6. Align with Litigation Hold Procedures

AI interaction logs may constitute discoverable material. Ensure that your litigation hold processes account for AI-generated content. If your organization logs prompts and outputs, those logs may fall within the scope of preservation obligations. Ignoring this dimension creates spoliation risk.

The Cultural Dimension

Technology adoption inside companies often outruns governance. Employees experiment. Business units optimize. Productivity improves. Compliance arrives later. That sequencing is no longer sustainable. The Heppner ruling should catalyze a shift from reactive to proactive governance. AI usage must be mapped, risk-ranked, and monitored, just as third-party intermediaries, high-risk markets, and financial controls are. If your risk assessment does not explicitly include generative AI, it is incomplete.

Connecting to the DOJ’s Expectations

The DOJ has repeatedly emphasized dynamic risk assessment. Artificial intelligence now clearly falls within the scope of corporate compliance evaluation. Prosecutors will not be sympathetic to arguments that “everyone was using it” or that policies were silent. They will ask:

  • Did the company identify AI as a risk area?
  • Did it implement controls?
  • Did it train employees?
  • Did it monitor usage?
  • Did it respond to incidents?

The answers to those questions will influence charging decisions, resolutions, and penalty calculations.

A Final Word: Convenience Versus Control

Generative AI is transformative. It enhances drafting, analysis, and research. It can elevate compliance operations if deployed thoughtfully. However, convenience without control is exposure. The lesson of United States v. Heppner is not that AI should be avoided. It is that AI must be governed with the same rigor as any other high-impact enterprise tool.

Privilege is fragile. Once waived, it cannot be restored. In a world where a chat prompt can become an exhibit, compliance professionals must lead the charge in redefining responsible AI use. If you are a chief compliance officer, this is your moment. Update your policies. Engage your board. Coordinate with legal and IT. Embed AI governance into your compliance framework. Because the next time an AI conversation surfaces in discovery, you do not want to explain why your program treated it like a harmless experiment.


AI and Work Intensification – The Compliance Response

There is a comforting myth circulating in corporate hallways and boardrooms: if we deploy AI across governance, risk, and compliance, the work will shrink. Investigations will move faster. Monitoring will get smarter. Policies will draft themselves. Third-party diligence will become push-button. The compliance function will finally “do more with less.” That myth was challenged in a recent Harvard Business Review article, “AI Doesn’t Reduce Work—It Intensifies It,” by Aruna Ranganathan and Xingqi Maggie Ye.

The authors argue that what actually happens is work intensification. AI expands throughput, increases expectations, and generates more outputs that still require human judgment, verification, and accountability. Instead of fewer tasks, you get more tasks. Instead of simpler work, you get faster cycles, more iterations, and new forms of quality risk. For the Chief Compliance Officer (CCO) leading AI governance, this is not a side effect. It is a core operating model issue.

If compliance owns AI governance across the enterprise, compliance must also own the discipline of how humans and AI work together. I call that discipline an AI practice standard: management guidance that sets expectations for pace, quality, verification, escalation, and sustainable workload.

Today, we consider this issue as a compliance operating model challenge across all GRC workflows: policy management, investigations, hotline intake, monitoring and surveillance, third-party due diligence, regulatory change management, audit planning, training, and reporting. The tone is cautionary because the risk is real: a compliance function that mistakes AI output volume for compliance effectiveness.

The Compliance Operating Model Problem: More Output, More Review, More Risk

Compliance work is not manufacturing. It is judgment work. It requires discretion, context, and defensible decisions. AI can accelerate inputs and draft outputs, but it does not accept responsibility. The CCO does. The business does. The board does. When AI enters GRC workflows, it tends to create four pressure points:

1. Compression of timelines. If a draft can be produced in five minutes, someone will ask why it cannot be finalized in five more.

2. Explosion of options. AI generates multiple versions, scenarios, and recommendations, which expands decision load and review cycles.

3. Higher volume of “signals.” AI-enabled monitoring produces more alerts, more pattern matches, and more anomalies. Much will be noise. All require triage.

4. Illusion of completion. Teams begin to treat a plausible AI answer as a finished work product. That is how quality defects are born.

The result is a compliance function that looks “faster” while becoming more fragile. Burnout rises. Rework increases. Errors creep into documentation. Controls become less reliable because the humans operating them are overwhelmed by the sheer volume AI makes possible.

All this means the question for the CCO is not, “How do we roll out AI?” The question is, “How do we govern the human work that AI intensifies?”

Five KPIs for Work Intensification Risk

Next, we consider five KPIs specifically designed to measure work intensification. These are board-credible, compliance-owned, and operationally measurable.

1. After-Hours Compliance Work Index

Percentage of compliance work activity occurring outside standard business hours (for example, 6 p.m. to 7 a.m.), measured across key systems (case management, GRC platform activity logs, email metadata, collaboration tool usage). This matters because AI compresses timelines and pushes work into nights and weekends. This index serves as an early warning for burnout and quality failures.

2. AI Rework Rate

Percentage of AI-assisted work products requiring material revision after human review (policies, investigation summaries, risk narratives, diligence reports). This matters because if AI increases speed but doubles rework, you are not gaining productivity. You are shifting effort downstream.

3. Cycle Time Compression vs. Quality Defect Ratio

Track cycle time reductions alongside quality defects (corrections, escalations, documentation gaps, audit findings). You can express this KPI as Cycle Time Improvement / Defect Increase.

This matters because faster is not better if defects rise. This ratio keeps leadership honest.
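A quick worked example, with hypothetical numbers: if AI assistance cuts cycle time by 30% while quality defects rise by 20%, the ratio is 30 / 20 = 1.5. A ratio drifting toward 1.0 or below signals that speed is being bought with downstream quality. A minimal sketch:

```python
def compression_vs_defect_ratio(cycle_time_improvement_pct: float,
                                defect_increase_pct: float) -> float:
    """Cycle Time Improvement / Defect Increase (sketch of the KPI above)."""
    if defect_increase_pct <= 0:
        return float("inf")  # faster with no added defects: unambiguously good
    return cycle_time_improvement_pct / defect_increase_pct

print(compression_vs_defect_ratio(30.0, 20.0))  # 1.5
```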

4. Alert-to-Action Conversion Rate

Percentage of AI-generated alerts that result in a confirmed issue, investigation, remediation, or control enhancement. This matters because AI intensifies monitoring. This KPI exposes whether you are drowning in noise or generating actionable intelligence.

5. Burnout Signal Composite

A quarterly composite score built from pulse survey measures (fatigue, workload, autonomy), attrition in compliance roles, sick leave usage trends, and employee assistance program utilization patterns. This matters because compliance effectiveness depends on people. Burnout is a control failure risk.

These five metrics give the CCO and board a shared view of whether AI is improving the compliance function or simply accelerating it toward exhaustion.

How to Measure the Leading Indicators

Measuring after-hours work, cycle time, quality defects, and burnout indicators requires an approach that is realistic and defensible. Here is one.

After-Hours Work

  • Use system log data from the case management, GRC, and document management platforms to track timestamped activity.
  • Supplement with email and collaboration metadata to measure volume outside standard hours.
  • Report trends by team and workflow, not individuals. This is about operating model health, not surveillance.
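As a minimal sketch of how the index could be computed from timestamped activity logs, assuming the 6 p.m. to 7 a.m. window from the KPI definition above and a simple list of event timestamps:

```python
from datetime import datetime

def after_hours_index(timestamps: list[datetime],
                      day_start_hour: int = 7,
                      day_end_hour: int = 18) -> float:
    """Percentage of activity events outside standard business hours.

    Weekends count as after-hours, as do weekday events before 7 a.m.
    or at 6 p.m. and later.
    """
    if not timestamps:
        return 0.0

    def is_after_hours(ts: datetime) -> bool:
        if ts.weekday() >= 5:  # Saturday or Sunday
            return True
        return ts.hour < day_start_hour or ts.hour >= day_end_hour

    after = sum(1 for ts in timestamps if is_after_hours(ts))
    return 100.0 * after / len(timestamps)

# Example: four case-management events, reported at team level only
events = [datetime(2025, 3, 3, 22, 15), datetime(2025, 3, 4, 10, 0),
          datetime(2025, 3, 8, 9, 30), datetime(2025, 3, 5, 14, 45)]
print(after_hours_index(events))  # 50.0 (one late night plus one Saturday)
```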

Cycle Time

  • Establish “start” and “stop” definitions for each workflow:
    • Investigations: intake date to closure date
    • Due diligence: request date to clearance date
    • Policy updates: drafting start date to publication date
    • Regulatory change: trigger identification to implementation
  • Track AI-assisted versus non-AI-assisted cycle times to isolate the impact.
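To isolate the AI impact, a sketch along these lines could compare median cycle times for AI-assisted versus non-AI-assisted items; the `days` and `ai_assisted` fields are hypothetical:

```python
from statistics import median

def median_cycle_times(items: list[dict]) -> tuple[float, float]:
    """Median cycle time in days: (AI-assisted, non-AI-assisted).

    Each item carries 'days' (stop minus start, per the workflow
    definitions above) and an 'ai_assisted' flag.
    """
    ai = [i["days"] for i in items if i["ai_assisted"]]
    manual = [i["days"] for i in items if not i["ai_assisted"]]
    return (median(ai) if ai else float("nan"),
            median(manual) if manual else float("nan"))

items = [{"days": 12, "ai_assisted": True}, {"days": 9, "ai_assisted": True},
         {"days": 21, "ai_assisted": False}, {"days": 18, "ai_assisted": False}]
print(median_cycle_times(items))  # (10.5, 19.5)
```

Read this alongside the rework rate: a shorter AI-assisted cycle time means little if those items bounce back for material revision.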

Quality Defects

  • Define defects as “items requiring material correction after initial completion,” including:
    • Incomplete documentation
    • Wrong risk rating or missing rationale
    • Incorrect regulatory mapping
    • Reopened cases due to insufficient analysis
    • Audit findings tied to workflow execution
  • Capture defects through QA sampling, supervisor review logs, audit results, and post-incident reviews.

Burnout Indicators

  • Run a quarterly pulse survey with 5–7 questions on workload, pace, clarity, and ability to disconnect.
  • Track voluntary attrition and vacancy duration for compliance roles.
  • Include aggregate HR indicators such as overtime trends or sick leave usage, where available.
  • Use a composite score and trend it. The trend line is what matters.
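One way to assemble the composite, sketched under the assumption that each input has already been normalized to a 0–100 scale; the weights are illustrative and should be tuned to the organization:

```python
def burnout_composite(pulse_score: float,
                      attrition_score: float,
                      sick_leave_score: float,
                      eap_usage_score: float) -> float:
    """Weighted quarterly burnout composite; higher means more strain.

    All inputs are assumed pre-normalized to a 0-100 scale.
    """
    weights = {"pulse": 0.4, "attrition": 0.3, "sick": 0.2, "eap": 0.1}
    return (weights["pulse"] * pulse_score
            + weights["attrition"] * attrition_score
            + weights["sick"] * sick_leave_score
            + weights["eap"] * eap_usage_score)

# Trend the composite quarter over quarter; the direction matters more
# than any single absolute value.
print(burnout_composite(62, 40, 55, 30))  # 50.8
```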

The key is to build instrumentation without creating a culture of monitoring employees. Your goal is not to watch people. Your goal is to protect the control environment.

Adopt an Enterprise AI Practice Standard Now

For an innovation-forward company, the right move is not to slow down. The right move is to govern how you speed up. The call to action is simple and strong: adopt an enterprise AI practice standard as management guidance, owned by Compliance, implemented across all GRC workflows, measured by the five work-intensification KPIs above, and tested by internal audit and red teaming.

If you do that, you gain three things immediately:

1. A sustainable operating model

2. Defensible governance for regulators and boards

3. A compliance function that remains credible under pressure

AI can make compliance better. But only if the humans who run compliance can still breathe.


From Principle to Proof: Operationalizing AI Governance Through the ECCP and NIST

Artificial intelligence governance has officially crossed the threshold from theory to expectation. The Department of Justice has not issued a standalone “AI rulebook,” but it has provided a framework for compliance professionals to consider the issue: the 2024 Evaluation of Corporate Compliance Programs (ECCP). In this version of the ECCP, the DOJ laid out guidance that any technology capable of creating material business risk must be governed, monitored, and improved like any other compliance risk. That includes artificial intelligence.

Too many organizations still treat AI governance as an ethics exercise, a technical problem, or a future concern. That posture is not defensible. The DOJ does not ask whether your program is fashionable or aspirational. It asks three very old-fashioned questions: Is your compliance program well designed? Is it applied in good faith? Does it work in practice? Those questions apply with full force to AI.

In this post, I want to move the discussion from abstract frameworks to operational reality. I will show how compliance professionals can use the ECCP to structure AI governance, select board-grade KPIs, and demonstrate effectiveness in a way regulators understand. I will also show how the NIST AI Risk Management Framework (NIST Framework) fits neatly underneath this structure as an operating model, not a competing philosophy.

AI Governance Is Already an ECCP Issue

The DOJ has repeatedly emphasized that compliance programs must evolve as business risks evolve. Artificial intelligence is not a future risk. It is already embedded in pricing, hiring, credit decisions, customer interactions, fraud detection, and third-party screening. If an AI model can influence revenue, customer outcomes, or regulatory exposure, it is a compliance risk. Period.

The ECCP does not require companies to eliminate risk. It requires them to identify, assess, manage, and learn from it. AI governance, therefore, belongs squarely inside the compliance program, not off to the side in an innovation lab or technology committee.

The ECCP as an AI Governance Blueprint

The power of the ECCP is its simplicity. Every enforcement action ultimately traces back to the same three questions. Let us apply them directly to AI.

Is the Program Well Designed?

Design begins with risk assessment. If your organization cannot answer a basic question such as “What AI systems do we have, who owns them, and what decisions do they influence?”, you do not have a program. You have hope. A well-designed AI compliance program starts with an AI asset inventory that identifies models, tools, vendors, and use cases. Each asset must be risk-classified based on business impact, regulatory exposure, and potential harm.

Board-level KPIs here are coverage metrics. How many AI assets have been identified? What percentage has been risk-classified? How many high-impact models have completed an impact assessment before deployment? If your dashboard does not show near-full coverage, the design is incomplete.

Policies and procedures come next. The DOJ does not care how many policies you have. It cares whether they provide clear guidance for real decisions. AI policies should cover the full lifecycle, from design and data sourcing through deployment, monitoring, and retirement. A practical KPI is policy coverage. What percentage of AI assets operate under current, approved procedures? How often are those procedures refreshed? Annual updates are a reasonable baseline in a rapidly changing risk environment.

Is the Program Applied Earnestly and in Good Faith?

Good faith is demonstrated through action, not intent. Training is a central indicator. The DOJ expects role-based training tailored to actual risk. A generic AI awareness course does not meet this standard. Developers, model owners, compliance reviewers, and business leaders all require different training. Completion rates matter, but so does comprehension. Measuring post-training proficiency improvement is one of the clearest signals that training is more than a box-checking exercise.

Third-party risk management is another critical area. Many organizations rely on external models, data providers, or AI-enabled vendors. If you do not understand how those tools are built, governed, and updated, you are importing risk without controls. Strong programs use standardized AI diligence questionnaires, assign assurance scores, and require contractual safeguards for high-risk vendors. A board-ready KPI here is the percentage of high-risk AI vendors subject to enhanced diligence and contractual controls.

Mergers and acquisitions deserve special attention. AI risk does not wait for post-close integration. The DOJ has been explicit that pre-acquisition diligence matters. A defensible KPI is simple and unforgiving: 100% of acquisition targets with material AI usage must undergo AI due diligence before closing. Anything less invites inherited risk.

Does the Program Work in Practice?

This is where many programs fail. Paper controls do not impress regulators. Outcomes do. Incident reporting is a critical signal. A low number of reported AI issues may indicate fear, confusion, or a lack of psychological safety rather than an absence of problems. What matters is whether issues are identified, investigated, and resolved promptly. Mean time to investigate is a powerful metric. If AI-related concerns take months to resolve, the program is not working. Clear escalation paths, defined investigation playbooks, and documented root cause analysis are essential.

Continuous monitoring is equally important. High-risk AI systems must be monitored for performance drift, data changes, and unintended outcomes. The DOJ expects companies to use data analytics to test whether controls are functioning. KPIs here include validation pass rates before deployment, drift-detection coverage for critical models, and corrective action closure rates. These are not technical vanity metrics. They are evidence of effectiveness.

Where NIST Fits and Why It Matters

The NIST AI Risk Management Framework does not compete with the ECCP. It operationalizes it. The ECCP tells you what regulators expect. NIST helps you implement those expectations across governance, mapping, measurement, and management. For example, ECCP risk assessment aligns with NIST’s mapping function. ECCP’s continuous improvement aligns with NIST’s measurement and management functions. Using NIST terminology creates a shared language across compliance, legal, security, and data science teams. That shared language is governance in action.

Reporting AI Risk to the Board

Boards do not want technical detail. They want assurance. The most effective AI governance dashboards focus on a small set of indicators that answer the DOJ’s three questions: coverage, quality, responsiveness, and learning. Examples include the percentage of AI assets risk-classified, validation pass rates, investigation cycle times, and corrective action closure rates. When these metrics move in the right direction, they tell a credible story of control. More importantly, they show that compliance is not reacting to AI. It is governing it.

Five Key Takeaways for Compliance Professionals

  1. AI as Risk. Artificial intelligence is already within the scope of the ECCP. If AI can influence business outcomes, it must be governed like any other compliance risk.
  2. Risk Management Program. A well-designed AI compliance program begins with complete asset identification and risk classification. Coverage metrics are the first signal regulators will examine.
  3. Implementation. Good faith implementation is demonstrated through role-based training, disciplined third-party oversight, and pre-acquisition AI diligence. Intent without execution does not count.
  4. Outcomes, not Inputs. Effectiveness is proven through outcomes. Investigation speed, monitoring coverage, and corrective action closure rates matter more than policy volume.
  5. Complementary. The NIST Framework complements the ECCP by providing an operating model that compliance, legal, and technical teams can share. Together, they turn principles into proof.

Final Thoughts

AI governance is not about predicting the future. It is about demonstrating discipline in the present. The DOJ is not asking compliance professionals to become data scientists. It is asking us to do what we have always done well: identify risk, establish controls, test effectiveness, and improve continuously. The ECCP already gives you the framework. The only question is whether you will apply it.


From the Editor’s Desk – Aaron Nicodemus on the CW AI Conference Insights: Navigating the Practical Use of AI in Compliance

In this episode of ‘From the Editor’s Desk,’ Tom Fox visits with Aaron Nicodemus to discuss highlights from the recent Compliance Week AI Conference. Key takeaways include the importance of understanding the purpose and practical use of AI tools before implementation, the pressures from C-suite and boards to adopt AI, and the necessity of a human-in-the-loop approach. The conversation also touches on integrating trust and integrity into AI adoption, the evolving role of compliance as a trusted partner in AI initiatives, and the collective willingness to learn and apply AI across compliance operations.

Key highlights:

  • Importance of Understanding AI Implementation
  • Pressure from the Top: Compliance and AI
  • Human Oversight in AI Processes
  • Trust and Integrity in AI
  • Compliance as a Competitive Advantage
  • Real-World Examples: Robinhood and DocuSign
  • The Evolving Role of Compliance in AI
  • Conference Vibes and Final Thoughts

Resources:

Aaron Nicodemus on LinkedIn

Compliance Week


Returning to Venezuela: Why “Yes, If” Is the Only Defensible Compliance Answer

Most of you readers know that sometimes when I get going on a project, it (the project, not me) just keeps on growing. What started as a podcast with Matt Ellis on the risks of going back into Venezuela expanded into a series of podcasts on the FCPA Compliance Report and with Mike DeBernardis on All Things Investigations. The podcasts led to a five-part blog post series on the same topic in the FCPA Compliance and Ethics Blog. I then needed to expand the blogs into a book and provide forms, checklists, frameworks, and deployment packs for compliance professionals to help them think through the issues presented in Venezuela and in other similarly high-risk jurisdictions.

All of that has led to the only book on how to return to Venezuela, Returning to Venezuela: The Compliance Guide to Yes, If (Title inspired by Mike DeBernardis). It is available in both print and eBook versions on Amazon.com.

When companies talk about returning to Venezuela, the conversation almost always begins with opportunity. Oil reserves. Market access. First-mover advantage. What the book Returning to Venezuela does is effectively reset that conversation where it belongs for compliance professionals: with reality. It is a disciplined, compliance-first analysis of what it actually means to operate in one of the world’s highest-risk jurisdictions.

The core message is uncompromising but straightforward: Venezuela is not a place for optimism, informal controls, or siloed compliance. It is a stress test. If your compliance program can function there, it can function anywhere. If it cannot, no license, policy, or assurance letter will save you. The book is not a warning label about Venezuela. It is a working manual for how a compliance function should assess risk, design controls, and govern decision-making before commercial momentum takes over.

Step One: Reframing the Risk Assessment

The first way a compliance professional should use Returning to Venezuela is to recalibrate how risk assessments are performed. Traditional country risk assessments often ask abstract questions: corruption perception scores, sanctions status, and enforcement history. Those inputs are necessary, but insufficient. Returning to Venezuela pushes compliance professionals to replace abstract scoring with operational mapping.

Instead of asking whether Venezuela is high risk, the framework asks:

  • Where will government discretion arise?
  • Where can delay be monetized?
  • Where does the business depend on intermediaries?
  • Where does value move, pause, or change form?

This is a critical shift. Risk is no longer treated as a country attribute. It becomes a process attribute. Compliance professionals can use Returning to Venezuela’s structure to redesign their risk assessment around real business steps: procurement, logistics, payment, security, licensing, and dispute resolution.

Step Two: Identifying Pressure Points Before They Become Incidents

Returning to Venezuela is especially useful in helping compliance professionals identify pressure points, not just risk categories. Pressure points are moments where the business is most likely to face demands for improper value, shortcuts, or exceptions. Procurement is one. Customs clearance is another. Security access, utilities, labor approvals, and payment routing are others.

Using Returning to Venezuela, compliance professionals can document:

  • Where pressure is expected;
  • Who owns the decision at that point;
  • What escalation looks like; and
  • When refusal or exit becomes mandatory.

This transforms compliance from a reactive role into a proactive role in designing decision architecture.

Step Three: Using the Checklists as Control Gates, Not Paper Artifacts

A common compliance failure is treating red flags as documentation exercises rather than control mechanisms. One of the strengths of Returning to Venezuela is that its red flags are designed as gates, not records. Each checklist answers a single question: Is this activity governable under our current assumptions?

Compliance professionals can deploy these checklists at defined moments:

  • Market entry discussions
  • Vendor and JV selection
  • Transaction structuring
  • Payment and banking design
  • Security and logistics planning

If a red flag cannot be cleared, the activity cannot proceed. That discipline is what makes the framework defensible. It also protects compliance officers personally, because decisions are anchored in documented governance rather than informal judgment.

Step Four: Integrating Risk Domains Instead of Managing Them in Silos

Another way compliance professionals should use Returning to Venezuela is as a blueprint for breaking down internal silos. The book makes clear that in Venezuela, corruption, export controls, AML, sanctions, security, and extortion are not separate risks. They are interconnected expressions of the same operating pressure. Treating them separately guarantees blind spots.

Practically, this means compliance can use the book to justify:

  • Integrated risk reviews instead of sequential sign-offs;
  • Shared escalation forums across functions;
  • Unified monitoring rather than separate dashboards; and
  • Common exit triggers across risk domains.

This is particularly important for AML. Returning to Venezuela positions money laundering risk not as a standalone compliance obligation, but as the capstone test of whether the entire framework works.

Step Five: Structuring Board Oversight Around Decisions, Not Updates

Too often, boards receive high-level compliance updates that provide comfort but not clarity. Returning to Venezuela gives compliance professionals a way to reframe board oversight around decisions, not reports. Using the board materials and decision templates, compliance can:

  • Force explicit risk acceptance;
  • Document assumptions that underpin approvals;
  • Secure delegated authority to pause or exit operations; and
  • Establish clear revisit and escalation triggers.

This protects both the organization and the compliance function. When conditions change, the discussion is no longer “Why did this happen?” but “Which assumption failed, and what decision does that trigger?” That is governance functioning as intended.

Step Six: Building a Repeatable Risk Management Framework

The final and most important way to use Returning to Venezuela is as a template, not a one-off Venezuela playbook. While the facts are Venezuela-specific, the framework is portable. Compliance professionals can lift this framework and apply it to:

  • Other high-risk markets;
  • Post-merger integration;
  • Sanctions-heavy environments; and
  • Complex third-party ecosystems.

The Appendices: The Operational Backbone of Returning to Venezuela: Yes, If

One of the defining features of Returning to Venezuela: The Compliance Guide to Yes, If is that it does not stop at analysis. The appendices convert risk identification into governance, decision-making, and operational control. They are not academic supplements. They are the machinery that makes a “yes, if” decision possible in practice.

Taken together, the appendices form an integrated compliance control stack designed for one purpose: to govern decision-making in an environment where corruption, coercion, sanctions, AML exposure, and weak rule of law are not edge cases but daily conditions.

Appendix A: One-Page Operational Checklists

Appendix A contains a series of one-page checklists, each focused on a distinct but interconnected risk domain. These are not policy summaries. They are operational gating tools meant to be used before decisions are made, not after problems occur.

Appendix B: The CCO Deployment Pack

Appendix B is written from the perspective of the Chief Compliance Officer and is explicitly operational. It is designed to be deployed internally to executive leadership, business sponsors, and control functions.

Appendix C: Board of Directors Materials

Appendix C is aimed squarely at directors and audit or compliance committees. Its function is not to educate boards on Venezuela generally but to structure how boards make, record, and revisit risk acceptance decisions.

Appendix D: Decision-Making Frameworks

Appendix D pulls together the logic underlying the entire book. It provides decision-making frameworks that force organizations to confront uncomfortable realities before committing resources.

How the Appendices Work Together

Individually, each appendix addresses a specific audience or function. Collectively, they form an integrated control system that aligns:

  • Operational decision-making.
  • Compliance authority.
  • Board oversight.
  • Exit discipline.

The appendices are designed to prevent the most common failure pattern in high-risk jurisdictions: waiting until conditions deteriorate before asking hard questions. By then, leverage is gone.

Final Thought

The most important contribution of Returning to Venezuela is that it does not merely describe risk. It shows compliance professionals how to operate in the real world without surrendering control.

Used correctly, the book becomes a working tool:

  • To assess risk honestly;
  • To design controls that hold under pressure;
  • To align management and the board; and finally,
  • To decide when “yes” becomes “no.”

For compliance professionals, that is not just risk management. It is about meeting the business in an operational setting with a risk management strategy for one of the highest-risk markets on earth.

You can purchase Returning to Venezuela: The Compliance Guide to Yes, If on Amazon.com.

Categories
Blog

How Compliance Should Show Up Before the Crisis

Recently, my colleague Matt Kelly wrote a blog post about retaliation against Chief Compliance Officers (CCOs). Matt and I explored it in an episode of the podcast Compliance into the Weeds. Matt’s post and our discussion crystallized one of the frustrations of the CCO role: compliance is often experienced by senior management solely as a late-arriving messenger of bad news. When compliance walks into the room, something has already gone wrong. The tone changes. Defenses go up. Trust narrows.

Yet the most consequential moments for a CCO are precisely those situations where the stakes are highest. A potential regulatory disclosure. A decision about whether to notify a government agency. A moment where delay, missteps, or poor coordination can turn a manageable issue into an enterprise-level crisis. If compliance is only visible in those moments, the relationship with the CEO and executive leadership team is already at a disadvantage.

Interestingly, in our podcast we explored a technique that might be termed “coaching management ahead of time.” Matt suggested borrowing from the cyber world, where organizations run tabletop incident-response exercises to rehearse a cyber-attack before one occurs. This is a powerful way not only to communicate what compliance does but also to train senior management on the specific issues they will face if a reportable compliance incident occurs. Walking the executive leadership team through such hypotheticals helps them understand the process while also preparing them for the decisions they will be asked to make.

I think this approach offers practical, repeatable ways to build trust with senior management before a crisis, so that when compliance raises a serious issue, the function is seen as a stabilizing force, not a source of panic.

The Core Problem: Compliance as the Bearer of Bad News

Many compliance officers do excellent technical work but still struggle to earn executive trust. The reason is not competence. It is timing and framing. Senior leaders often experience compliance in three narrow contexts:

  • An investigation has begun.
  • A whistleblower allegation has escalated.
  • A regulator may need to be notified.

In those moments, compliance is necessarily directive. The CCO must slow decisions down, insist on process, and sometimes recommend outcomes executives would prefer to avoid. Without a foundation of trust, those recommendations can feel punitive or overly conservative. The solution is not softer messaging during crises. The solution is familiarity with the compliance process long before the crisis arrives.

Process Transparency as a Trust-Building Strategy

Trust is built through predictability. Senior executives are far more comfortable with difficult outcomes when they understand the process that leads there. This is where scenario-based training becomes one of the most underused tools in the compliance arsenal. Instead of waiting for a live issue, the CCO can walk the executive leadership team through realistic hypotheticals:

  • A fact pattern that suggests regulatory notification may be required
  • How compliance evaluates credibility and materiality
  • Who is involved at each stage and why
  • What decisions management will be asked to make
  • What actions help, and what actions make things worse

These sessions are not about assigning blame or rehearsing fear. They are about demystifying how compliance operates when the stakes are high.

Why Scenario-Based Training Works With Executives

Scenario-based discussions resonate with executive teams for several reasons. First, they are practical. Executives do not need another policy overview. They want to know what actually happens when something goes wrong. Second, they are respectful of executive time and intelligence. A well-designed hypothetical treats leadership as decision-makers, not students. Third, they normalize compliance involvement.

When executives have already walked through a compliance-led process in a low-pressure setting, that process feels familiar rather than threatening during a real event. Most importantly, scenario-based training reframes compliance from a reactive function to a preparedness function.

The Strategic Role of Informal Engagement

These conversations do not need to occur only in formal training sessions. In fact, some of the most effective trust-building happens outside structured settings.

  • A short walkthrough during an executive offsite.
  • A tabletop discussion over lunch.
  • A casual conversation that begins with, “Let me show you how we would handle this if it ever happened.”

These informal touchpoints matter because they remove fear from the equation. They allow executives to ask questions they might not ask during a live issue. They also allow compliance to show judgment, nuance, and business awareness. This is not a charm offensive. It is a deliberate relationship strategy.

Training on What Not to Do

One of the most valuable elements of scenario-based transparency is the ability to explain mistakes before they occur. Executives often want to help in a crisis. That instinct, while well-intentioned, can create problems. Premature document reviews. Side conversations. Incomplete recollections. Overconfident assurances.

Scenario training allows the CCO to say, in advance, “Here is what helps us protect the company,” and just as importantly, “Here is what can unintentionally make things worse.” When executives understand these boundaries ahead of time, compliance interventions during a real issue feel protective rather than restrictive.

From Messenger of Doom to Stabilizing Force

When compliance has invested in transparency and education, something important shifts. When the CCO later says, “We believe this may require regulatory notification,” that recommendation is no longer heard in isolation. It is understood as part of a known, previously discussed process.

Executives may not like the conclusion, but they trust the path that led there. That trust allows compliance to do its job effectively. It reduces friction. It shortens response time. It improves decision quality. Most importantly, it positions compliance as an advisor whose presence brings structure and clarity to uncertainty.

What Compliance Officers Should Take Away

For compliance officers, the lesson is not about presentation skills or tone management. It is about timing and familiarity. If senior management only experiences compliance during moments of stress, compliance will always feel adversarial. If senior management understands the compliance process before the stress arrives, compliance becomes a stabilizing influence.

Scenario-based training, informal engagement, and process transparency are not “nice to have” activities. They are strategic tools for relationship-building at the highest levels of the organization. The most trusted CCOs are not those who avoid bringing bad news. They are the ones who ensure that when bad news arrives, it is delivered within a framework everyone already understands. That is how compliance earns trust before the crisis and credibility during it.

Categories
31 Days to More Effective Compliance Programs

31 Days to a More Effective Compliance Program: Day 26 – Elevating the Role and Independence of the Chief Compliance Officer

Welcome to 31 Days to a More Effective Compliance Program. Over this 31-day series in January 2026, Tom Fox will post a key component of a best-practice compliance program each day. By the end of January, you will have enough information to create, design, or enhance a compliance program. Each podcast will be short, at 6-8 minutes, with three key takeaways that you can implement at little or no cost to help update your compliance program. I hope you will join each day in January for this exploration of best practices in compliance. In today’s Day 26 episode, we ponder the evolving stature and authority of the CCO within organizations, as highlighted by recent guidelines and regulations.

Key highlights:

  • Key Inquiries Around the CCO and Compliance Function
  • Importance of CCO Certification and Court Decisions
  • Critical Takeaways for Compliance Professionals

Resources:

Listeners to this podcast can receive a 20% discount on The Compliance Handbook, 6th edition, by clicking here.

Categories
Compliance Into the Weeds

Compliance into the Weeds: Addressing Retaliation Against Compliance Officers: Strategies and Insights

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore it more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly look at the challenges of retaliation against Chief Compliance Officers (CCOs).

They highlight the need for ongoing communication between compliance officers and senior management and share strategies for CCOs to mitigate personal risk. The discussion includes real-world examples, the role of senior management in fostering a compliant culture, and the importance of scenario planning and training to prepare for potential issues. The episode emphasizes proactive measures such as charm offensives and preemptive remediation plans to navigate and defuse potential retaliatory scenarios.

Key highlights:

  • Real-Life Examples of Retaliation
  • Management’s Perception and Compliance Challenges
  • Building Relationships with Senior Management
  • Proactive Compliance Strategies to Prevent Retaliation
  • Framing Compliance Training Like Cybersecurity Drills

Resources:

Matt in Radical Compliance

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred a Davey, a Communicator Award, and a W3 Award, all for podcast excellence.