Categories
Blog

AI Compliance as a Competitive Advantage: Turning Governance Into ROI

In too many organizations, “AI compliance” is treated like a speed bump. Something to route around, manage after launch, or outsource to a vendor deck and a policy that nobody reads. That mindset is not only outdated but also expensive. In 2026, mature AI governance is becoming a commercial differentiator because customers, regulators, employees, and business partners increasingly ask the same question: Can you prove your system is trustworthy?

The most underappreciated truth is that AI risk is not “an AI team problem.” It is a business-process problem, expressed through data, decisions, third parties, and change control. The Department of Justice Evaluation of Corporate Compliance Programs (ECCP) has never been about perfect paperwork; it has been about whether a program is designed, implemented, resourced, tested, and improved. If you can translate that posture into AI, you can convert “compliance cost” into “credibility capital.”

A cautionary backdrop shows why. The EEOC’s 2023 settlement with iTutorGroup is instructive: automated hiring screening that disadvantages older workers can lead to legal exposure, remediation costs, and reputational damage. The details matter less than the pattern: when algorithmic decisions are not governed, the business eventually pays the bill. Compliance professionals should see the pivot clearly: governance is the mechanism that lets you move fast without becoming reckless.

From a build-from-scratch, low-to-medium maturity posture, the win is not sophistication. The win is repeatability. If you build an AI governance framework aligned to NIST AI RMF (govern, map, measure, manage), structured through ISO/IEC 42001’s management-system discipline, and cognizant of EU AI Act risk tiering, you get something the business loves: a predictable path from idea to deployment. Today, I will explore five ways mature AI compliance can become a competitive advantage, each with a practical view of how a compliance-focused GenAI assistant can support business processes.

1) Sales and Customer Trust

Trust is a sales feature now, even when marketing refuses to call it that. Customers increasingly ask about data use, model behavior, security controls, and human oversight, and they are doing it in procurement questionnaires and contract negotiations. A mature governance framework lets you answer quickly, consistently, and with evidence, thereby shortening sales cycles and reducing late-stage deal friction. A compliance GenAI can support this by drafting standardized responses from approved trust artifacts such as policies, model cards, DPIAs, and audit summaries; flagging gaps; and routing exceptions to Legal and Compliance before the business overpromises.

For compliance professionals, this lesson is even more stark, as the ‘customers’ of a corporate compliance program are your employees. Some key KPIs you can track are average time to complete AI security and compliance questionnaires; percentage of deals requiring AI-related contractual concessions; number of customer-facing AI disclosures issued with approved templates; and percentage of AI systems with current model documentation and ownership attestations.

2) Regulatory Credibility

Regulators are not impressed by ambition; controls persuade them. NIST AI RMF provides a common language to demonstrate that you mapped use cases, measured risks, and managed them over time, while ISO/IEC 42001 imposes discipline on accountability, documentation, and continual improvement. The EU AI Act’s risk-based approach adds an organizing principle: classify systems, apply controls proportionate to risk, and prove that you did it. A compliance GenAI can help by maintaining a living inventory, prompting owners to complete quarterly attestations, drafting control narratives aligned with the frameworks, and assembling regulator-ready “evidence packs” that demonstrate governance in operation rather than on paper.

For compliance professionals, this lesson is about gap analysis. If your current internal controls have not yet been mapped to GenAI and AI governance requirements, close that gap now. Some key KPIs you can track are percentage of AI systems risk-tiered and documented; time to produce an evidence pack for a high-impact system; number of material control exceptions and time-to-remediation; and frequency of risk reviews for high-impact systems.

3) Faster Product Approvals and Safer Deployment

Speed comes from clarity, not from cutting corners. When decision rights, review thresholds, and required artifacts are defined up front, product teams stop guessing what Compliance will require at the end. That is the management-system advantage: ISO/IEC 42001 treats AI governance like a repeatable operational process with gates, owners, and records, rather than a series of one-off debates. A compliance GenAI can support the workflow by pre-screening new use-case intake forms, recommending the correct risk tier under EU AI Act concepts, suggesting required testing (bias, privacy, safety), and generating the first draft of a launch checklist that the product team can execute.

For compliance professionals, this lesson is that you must run compliance at the speed of your business operations. Some key KPIs you can track are: cycle time from AI intake to approval; percent of launches that pass on first review; number of post-launch “surprise” issues tied to missing pre-launch controls; and percentage of models with human-in-the-loop controls when required.
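The intake pre-screening and risk-tier recommendation described above can be sketched as a simple rule-based triage step. This is a minimal illustration, not a legal determination under the EU AI Act: the field names, domain list, and tier-to-artifact mappings are all assumptions for the sake of example.

```python
# Illustrative, rule-based pre-screen for AI use-case intake forms.
# Field names and tier rules are simplified assumptions, not legal
# determinations under the EU AI Act.

HIGH_RISK_DOMAINS = {"hiring", "credit", "education", "essential_services"}

def recommend_tier(intake: dict) -> str:
    """Suggest a risk tier for a new AI use case from its intake form."""
    if intake.get("domain") in HIGH_RISK_DOMAINS and intake.get("automated_decision"):
        return "high"
    if intake.get("processes_personal_data"):
        return "limited"
    return "minimal"

def required_artifacts(tier: str) -> list:
    """Map a recommended risk tier to the pre-launch checklist it triggers."""
    base = ["model documentation", "owner attestation"]
    if tier == "limited":
        base += ["privacy review", "disclosure template"]
    if tier == "high":
        base += ["privacy review", "bias testing", "human-in-the-loop design",
                 "disclosure template", "logging plan"]
    return base
```

Even a toy rule set like this makes the management-system point: when the tiering logic and required artifacts are written down, product teams can self-serve the first draft of their launch checklist instead of guessing.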

4) Talent, Recruiting, and Internal Confidence

Top performers do not want to work in a company that treats AI like a toy and compliance like a nuisance. Mature governance creates psychological safety inside the organization: employees know what is permitted, what is prohibited, and how to raise concerns. It also improves recruiting because candidates, especially in technical roles, ask about responsible AI practices, data governance, and ethical guardrails. A compliance GenAI can support internal confidence by serving as the first-line “policy concierge,” answering questions with approved guidance, directing employees to the correct procedures, and logging common questions so Compliance can improve training and communications.

For compliance professionals, this fits squarely within the DOJ mandate for compliance to lead efforts in institutional justice and fairness. Some key KPIs you can track include training completion and comprehension metrics for AI use; the number of AI-related helpline inquiries and their resolution times; employee survey results on comfort raising AI concerns; and the percentage of AI use cases with documented business-owner accountability.

5) Lower Cost of Incidents and More Resilient Operations

AI incidents are rarely just “bad outputs.” They are process failures: poor data lineage, uncontrolled model changes, vendor opacity, missing logs, weak access controls, or no escalation path when harm appears. NIST AI RMF’s “measure” and “manage” functions emphasize monitoring, drift detection, incident response, and continuous improvement, which is precisely how you reduce the frequency and severity of failures. A compliance GenAI can support incident resilience by guiding teams through an AI incident response playbook, helping triage severity, ensuring evidence is preserved (audit logs, prompts, outputs, approvals), and generating lessons-learned reports that connect root cause to control enhancements.

For compliance professionals, this lesson is about resilience: incident readiness is itself a control, and it must be designed before the incident arrives. Some key KPIs you can track include the number of AI incidents by severity tier; mean time to detect and mean time to remediate; the percentage of high-impact models with drift-monitoring and alert thresholds; and the percentage of third-party AI providers subject to change-control notification requirements.

What “Mature Governance” Looks Like When You Are Building From Scratch

Do not start with a 60-page policy. Start with a few non-negotiables that scale:

  • Inventory and classification: Create a single inventory of GenAI assistants, ML models, and automated decision systems. Classify them by impact using EU AI Act concepts (e.g., high-risk versus minimal-risk) and your own business context.
  • Accountability and decision rights: Assign an owner for each system and require periodic attestations for the highest-risk categories.
  • Standard artifacts: Use lightweight model documentation, data lineage notes, and disclosure templates. If it is not documented, it does not exist for governance.
  • Human oversight and logging: Define when human-in-the-loop is mandatory and ensure logs capture who approved what, when, and why.
  • Third-party AI controls: Contract for transparency, audit support, change notification, and security requirements. Vendor opacity is not a strategy.

This is where ECCP thinking helps. The question is not whether you have a policy. The question is whether the policy is operationalized, tested, and improved. That is the bridge from compliance to competitive advantage.

If you want AI compliance to be a competitive advantage, treat it like a management system that produces evidence, not like a policy library that produces comfort. When governance becomes repeatable, the business can move faster, regulators become more confident, and customers see the difference. That is not a cost center. That is credibility you can take to the bank.


5 Strategic Board Playbooks for AI Risk (and a Bootcamp)

Artificial intelligence is no longer a future-state technology risk. It is a current-state governance issue. If AI is being deployed inside governance, risk, and compliance functions, then it is already shaping how your company detects misconduct, prioritizes investigations, manages regulatory obligations, and measures program effectiveness. That makes AI risk a board agenda item, not a management footnote.

In an innovation-forward organization, the goal is not to slow AI adoption. The goal is to professionalize it. Boards of Directors and Chief Compliance Officers (CCOs) should approach AI the way they approached cybersecurity a decade ago: move it from “interesting updates” to a structured reporting cadence with measurable controls, clear accountability, and director education that raises the collective literacy of the room.

Today, we consider 5 strategic playbooks designed for a Board of Directors and a CCO operating in an industry-agnostic environment, building AI in-house, without a model registry yet, and with a cross-functional AI governance committee chaired and owned by Compliance. The program must also work across multiple regulatory regimes, including the DOJ Evaluation of Corporate Compliance Programs (ECCP), the EU AI Act, and a growing patchwork of state laws. We end with a proposal for a Board of Directors Boot Camp on their responsibilities to oversee AI in their organization.

Playbook 1: Put AI Risk on the Calendar, Not on the Wish List

If AI risk is always “important,” it becomes perpetually postponed. The first play is procedural: create a standing quarterly agenda item with a consistent structure.

Quarterly board agenda structure (20–30 minutes):

  1. What changed since last quarter? Items such as new use cases, material model changes, new regulations, and major control exceptions.
  2. Full AI risk dashboard, with 8–10 board KPIs, trends, and thresholds.
  3. Top risks and mitigations, including three headline risks with actions, owners, and dates.
  4. Assurance and testing, which would include internal audit coverage, red-teaming results, and remediation progress.
  5. Decisions required, such as policy approvals, risk appetite adjustments, and resourcing.

This cadence does two things. First, it forces repeatability. Second, it creates institutional memory. Boards govern better when they can compare quarter-over-quarter progress, not when they receive one-off deep dives that cannot be benchmarked.

Playbook 2: Build the AI Governance Operating Model Around Compliance Ownership

In this design, Compliance owns AI governance and its use throughout the organization, supported by a cross-functional AI governance committee. That is a strong model, but only if it is explicit about responsibilities.

Three lines of accountability:

  • Compliance (Owner): policy, risk framework, controls, training, and board reporting.
  • AI Governance Committee (Integrator): cross-functional prioritization, approvals, escalation, and issue resolution.
  • Build Teams (Operators): documentation, testing, change control, and implementation evidence.

Boards should ask one simple question each quarter: Who is accountable for AI governance, and how do we know it is working? If the answer is “everyone,” then the real answer is “no one.” Your model makes the answer clear: Compliance owns it, and the committee operationalizes it.

Playbook 3: Create the AI Registry Before You Argue About Controls

If you do not yet have a model registry, that is the first operational gap to close, because you cannot govern what you cannot inventory. In a GRC context, this is not a “nice to have.” Without an inventory, you cannot prove coverage, you cannot scope an audit, you cannot define reporting, and you cannot explain to regulators how you know where AI is influencing decisions.

Minimum viable AI registry fields (start simple):

  • Use case name and business owner;
  • Purpose and decision impact (advisory vs. automated);
  • Data sources and data sensitivity classification;
  • Model type and version, with change log;
  • Key risks (bias, privacy, explainability, security, reliability);
  • Controls mapped to the risk (testing, monitoring, approvals);
  • Deployment status (pilot, production, retired); and
  • Incident history and open issues.

Boards do not need the registry details. They need the coverage metric and the assurance that the registry is complete enough to support governance.
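The registry fields above, plus the coverage metric the board actually consumes, can be sketched in a few lines. The record shape and field names here are illustrative assumptions drawn from the list, not a prescribed schema.

```python
from dataclasses import dataclass, field

# Minimal sketch of a registry record and two board-level metrics.
# Field names are illustrative assumptions based on the list above.

@dataclass
class RegistryEntry:
    use_case: str
    owner: str
    decision_impact: str                 # "advisory" or "automated"
    risk_tier: str = ""                  # empty until risk-classified
    status: str = "pilot"                # pilot, production, or retired
    open_issues: list = field(default_factory=list)

def coverage_rate(registered: int, estimated_footprint: int) -> float:
    """Registry coverage: registered use cases vs. estimated total footprint."""
    if estimated_footprint == 0:
        return 0.0
    return round(100 * registered / estimated_footprint, 1)

def classification_rate(entries: list) -> float:
    """Share of registered use cases that have been risk-classified."""
    if not entries:
        return 0.0
    classified = sum(1 for e in entries if e.risk_tier)
    return round(100 * classified / len(entries), 1)
```

The point is not the code; it is that a registry this simple already supports the two numbers the board needs first: how much of the AI footprint is inventoried, and how much of the inventory is classified.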

Playbook 4: Align to the ECCP, EU AI Act, and State Laws Without Creating a Paper Program

Many organizations make a predictable mistake: they respond to multiple frameworks by producing multiple binders. That creates activity, not effectiveness. A better approach is to use a single control architecture to map to multiple requirements. The board should see one integrated story:

  • DOJ ECCP lens: effectiveness, testing, continuous improvement, accountability, and resourcing;
  • EU AI Act lens: risk classification, transparency, human oversight, quality management, and post-market monitoring; and
  • State law lens: privacy, consumer protection concepts, discrimination prohibitions, and notice requirements where applicable.

This mapping becomes powerful when it ties back to the board dashboard. The board is not there to read statutes. The board is there to govern outcomes.

Playbook 5: Use a Board Dashboard That Measures Coverage, Control Health, and Outcomes

A board dashboard works best as a combined scorecard and narrative built around 8–10 KPIs. Here is a board-level set designed for AI in governance, risk, and compliance functions, with an in-house build and assurance from internal audit and red teaming.

Board AI Governance KPIs (8–10)

1. AI Inventory Coverage Rate

Percentage of AI use cases captured in the registry versus estimated footprint.

2. Risk Classification Completion Rate

Percentage of registered use cases risk-classified (EU AI Act style tiers or internal tiers).

3. Pre-Deployment Review Pass Rate

Percentage of deployments that cleared required testing and approvals on first submission.

4. Model Change Control Compliance

Percentage of model changes executed with documented approvals, testing evidence, and rollback plans.

5. Explainability and Documentation Score

Percentage of in-scope use cases with complete documentation, rationale, and user guidance.

6. Monitoring Coverage

Percentage of production use cases with active monitoring for drift, anomalies, and performance degradation.

7. Issue Closure Velocity

Median days to close AI governance issues, by severity.

8. Internal Audit Coverage and Findings Trend

Number of audits completed, rating distribution, repeat findings, and remediation status.

9. Red Team Findings and Remediation Rate

Number of material vulnerabilities identified and percentage remediated within the target time.

10. Escalations and Incident Rate

Number of AI-related incidents or escalations (including near-misses), with severity and lessons learned.

These KPIs do not require vendor controls and align with an in-house build model. They also support both board oversight and compliance management.
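Two of the KPIs above lend themselves to a concrete sketch: the pre-deployment first-pass rate (KPI 3) and issue closure velocity by severity (KPI 7). The record shapes are assumptions for illustration; any real GRC tooling would supply its own.

```python
from statistics import median
from collections import defaultdict

# Sketch of two dashboard KPIs computed from review and issue records.
# Record shapes (dict keys) are assumptions for illustration.

def first_pass_rate(reviews: list) -> float:
    """KPI 3: share of deployments approved on first submission."""
    if not reviews:
        return 0.0
    passed = sum(1 for r in reviews if r["submissions"] == 1 and r["approved"])
    return round(100 * passed / len(reviews), 1)

def closure_velocity(issues: list) -> dict:
    """KPI 7: median days to close AI governance issues, by severity."""
    closed = defaultdict(list)
    for issue in issues:
        if issue.get("days_to_close") is not None:  # only count closed issues
            closed[issue["severity"]].append(issue["days_to_close"])
    return {sev: median(days) for sev, days in closed.items()}
```

Metrics this simple are deliberately boring: the value is in the quarter-over-quarter trend line, not in any single number.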

AI Director Boot Camp

Most boards today have a medium level of AI literacy, and a boot camp is the fastest way to raise it. Directors do not need to become engineers. They need a common vocabulary and a governance frame. The recommended design is a highly practical half-day session covering the following.

  1. AI in the company’s operating model. This means where it touches decisions, risk, and compliance outcomes.
  2. AI risk taxonomy, such as bias, privacy, security, explainability, reliability, and third-party risk.
  3. Regulatory landscape overview, covering the DOJ ECCP approach to effectiveness, the EU AI Act risk framing, and key state law themes.
  4. Governance model walkthrough to ensure the BOD understands the registry, risk classification, controls, monitoring, and escalation.
  5. Tabletop exercises, such as an AI incident in a GRC context with false negatives in monitoring or biased triage.
  6. Board oversight duties. Teach the BOD how they can meet their obligations, including which questions to ask quarterly, which thresholds trigger escalation, and similar insights.

The deliverable from the boot camp should be a one-page “Director AI Oversight Guide” with the KPIs, escalation triggers, and the quarterly agenda structure.

The Bottom Line for Boards and CCOs

This is the moment to treat AI risk like a board-governed discipline. The organizations that get it right will not be the ones with the longest AI policy. They will be the ones with the clearest operating model, the most reliable reporting cadence, and the strongest evidence of control effectiveness.

If Compliance owns AI governance, then Compliance must also own the proof. That proof is delivered through a registry, a quarterly board agenda item, a balanced KPI dashboard, and assurance through internal audit and red teaming. Add a director boot camp to create shared understanding, and you have the beginnings of a program that is innovation-forward and regulator-ready.

That is the strategic playbook: not fear, not hype, but governance.


When Your AI Chat Becomes Exhibit A: What United States v. Heppner Means for Compliance Professionals

There are court rulings that quietly shape doctrine, and others that detonate assumptions. The recent decision of Judge Jed Rakoff of the Southern District of New York in United States v. Heppner falls into the latter category. In a February 10, 2026, ruling, the Court made clear that neither the attorney-client privilege nor the work-product doctrine protected materials generated through a third-party generative AI platform. In plain English, what a defendant typed into a public AI system was discoverable.

For compliance professionals, this is not a narrow litigation footnote. It is a flashing red warning light. The era of casual AI experimentation inside corporations is over. Governance now must catch up with adoption. Today, we will consider the Court’s ruling and why it matters to a Chief Compliance Officer.

The Court’s Core Holding

The defendant in Heppner had used a third-party generative AI tool to draft and refine materials that were later shared with counsel. When prosecutors sought production, the defense argued that these materials were protected by privilege and work-product protections. The court disagreed.

The reasoning was straightforward and, frankly, predictable:

  • The AI tool was not an attorney.
  • The terms of service did not guarantee confidentiality and allowed retention or potential disclosure of inputs.
  • The materials were not prepared at the direction of counsel for the purpose of obtaining legal advice.
  • Simply sending AI-generated drafts to counsel after the fact did not, by itself, retroactively cloak them in privilege.

This is a fundamental point: privilege attaches to communications made in confidence for the purpose of seeking legal advice. When an employee enters sensitive facts into a third-party AI platform that disclaims confidentiality, that “confidence” is at best questionable. When those drafts are created independently of counsel’s direction, work-product arguments grow thin. The court did not create a new doctrine. It applied existing principles to new technology. That is precisely why this ruling is so important.

The Illusion of Confidentiality

Many business users treat AI platforms like a digital notebook. They assume that because the interaction occurs on a screen and feels private, it is private. That assumption is dangerous. Public and consumer AI platforms often reserve the right to store, analyze, or use inputs for service improvement. Even when vendors promise limited retention, those commitments may not meet the strict confidentiality standards necessary to preserve privilege. From a legal perspective, once you introduce a third party without adequate confidentiality protections, you risk waiving your rights.

The compliance lesson is blunt: generative AI is not your lawyer, and it is not your secure internal memo system. This is where governance intersects with culture. If employees are entering investigative summaries, draft responses to regulators, internal audit findings, or potential misconduct narratives into public AI tools, you are manufacturing discoverable evidence. That is not a hypothetical risk. That is now a litigated reality.

Why This Is a Board-Level Issue

The Department of Justice has made clear through the Evaluation of Corporate Compliance Programs (ECCP) that companies must identify and manage emerging risks. Artificial intelligence is no longer emerging. It is embedded in operations, marketing, finance, and legal workflows. The Heppner ruling converts AI usage from a technology convenience into a legal risk category. Boards of Directors should be asking:

  • Do we have an inventory of AI tools used across the enterprise?
  • Are employees permitted to input confidential, regulated, or legally sensitive information into third-party platforms?
  • Have we reviewed the vendor’s terms of service regarding confidentiality, retention, and data ownership?
  • Are legal and compliance functions involved in approving AI deployments?

If the answer to any of these questions is uncertain, there is a governance gap. AI governance is no longer solely about bias, explainability, or regulatory compliance. It is also about preserving privilege, managing litigation risk, and safeguarding evidence.

Privilege Cannot Be Recreated After the Fact

One of the most significant aspects of the ruling is the rejection of “retroactive privilege.” Sending AI-generated content to counsel after it is created does not transform it into protected communication. This matters for compliance investigations. Consider the following scenario:

An internal report of potential misconduct surfaces. An employee uses a public AI tool to summarize the facts and generate possible legal arguments before reaching out to in-house counsel. That summary now exists outside any protected legal channel. The vendor may retain it. It may be discoverable.

By the time counsel becomes involved, the privilege damage may already be done. The message for compliance teams is clear: legal engagement must precede, or at least direct, sensitive analysis, not follow it.

Work Product Is Not a Safety Net

Some may argue that AI-assisted drafting in anticipation of litigation should fall under the work-product doctrine. The court in Heppner was not persuaded. Work-product protection generally applies to materials prepared by or for an attorney in anticipation of litigation. When individuals independently generate content using AI tools without counsel’s direction, that protection is far from guaranteed. Compliance professionals should not assume that labeling a document “prepared in anticipation of litigation” will insulate AI-generated material. Courts will look at substance over form.

Practical Steps for Compliance Leaders

This ruling demands operational response from every CCO. Here are some steps every compliance program should consider.

1. Treat Third-Party AI as Non-Confidential by Default

Unless you have a contractual, enterprise-level arrangement with robust confidentiality provisions and clear data controls, assume that information entered into a third-party AI platform is not protected. This default posture should be reflected in policy language.

2. Update Acceptable Use Policies

Your code of conduct and IT policies should explicitly address the use of generative AI. Prohibit the entry of:

  • Privileged communications.
  • Investigation details.
  • Personally identifiable information.
  • Trade secrets.
  • Sensitive regulatory communications.

Policy must move from general warnings to specific examples.
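One way to turn those specific examples into an operational control is a pre-submission screen that checks prompts against prohibited categories before they leave the corporate boundary. The sketch below is a deliberately simplified illustration; the category names and patterns are assumptions, and a production DLP control would be far more robust.

```python
import re

# Illustrative keyword/pattern screen applied before a prompt is sent to a
# third-party AI tool. Categories and patterns are simplified assumptions;
# a real data-loss-prevention control would be far more sophisticated.

BLOCKED_PATTERNS = {
    "privileged": re.compile(r"attorney[- ]client|privileged", re.I),
    "investigation": re.compile(r"investigation|whistleblower|misconduct", re.I),
    "pii": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-style pattern
}

def screen_prompt(prompt: str) -> list:
    """Return the policy categories a prompt appears to violate."""
    return [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Block submission when any prohibited category is detected."""
    return not screen_prompt(prompt)
```

A screen like this will never catch everything, which is exactly why the policy examples and training still matter; the control's real value is forcing a pause before sensitive text leaves the building.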

3. Involve Legal in AI Governance

AI procurement should not be a purely IT function. Legal and compliance must review vendor terms, especially around:

  • Data retention.
  • Subprocessor use.
  • Confidentiality obligations.
  • Audit rights.
  • Breach notification.

If you cannot articulate how your AI vendor protects inputs, you cannot defend privilege claims.

4. Implement Training That Reflects Real Risk

Annual compliance training should now include explicit guidance on AI usage. Employees should understand that entering confidential information into public AI tools can waive privilege and render it discoverable. Training should include practical scenarios. The objective is behavioral change, not abstract awareness.

5. Establish Secure AI Environments for Legal Work

If your organization intends to use AI in legal or investigative contexts, consider enterprise solutions that:

  • Operate within your controlled environment.
  • Restrict data sharing.
  • Provide contractual confidentiality.
  • Maintain clear audit logs.

Even then, legal oversight is essential. Secure does not automatically mean privileged.

6. Align with Litigation Hold Procedures

AI interaction logs may constitute discoverable material. Ensure that your litigation hold processes account for AI-generated content. If your organization logs prompts and outputs, those logs may fall within the scope of preservation obligations. Ignoring this dimension creates spoliation risk.
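If your organization does log prompts and outputs, preservation is easier when each interaction is written as an append-only, content-hashed record. The sketch below illustrates the idea under stated assumptions: the field names, file format, and hashing choice are illustrative, not a prescribed preservation standard.

```python
import hashlib
import json
from datetime import datetime, timezone

# Sketch of an append-only log for AI interactions so prompts and outputs
# can be preserved under a litigation hold. Field names are illustrative.

def log_interaction(log_path: str, user: str, prompt: str, output: str) -> dict:
    """Append one AI interaction as a JSON line with a content hash."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "output": output,
    }
    # The hash ties the record to its content, supporting later integrity checks.
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

The design choice worth noting is append-only plus a per-record hash: it makes later claims about completeness and integrity defensible, which is what a preservation obligation ultimately demands.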

The Cultural Dimension

Technology adoption inside companies often outruns governance. Employees experiment. Business units optimize. Productivity improves. Compliance arrives later. That sequencing is no longer sustainable. The Heppner ruling should catalyze a shift from reactive to proactive governance. AI usage must be mapped, risk-ranked, and monitored, just as third-party intermediaries, high-risk markets, and financial controls are. If your risk assessment does not explicitly include generative AI, it is incomplete.

Connecting to the DOJ’s Expectations

The DOJ has repeatedly emphasized dynamic risk assessment. Artificial intelligence now clearly falls within the scope of corporate compliance evaluation. Prosecutors will not be sympathetic to arguments that “everyone was using it” or that policies were silent. They will ask:

  • Did the company identify AI as a risk area?
  • Did it implement controls?
  • Did it train employees?
  • Did it monitor usage?
  • Did it respond to incidents?

The answers to those questions will influence charging decisions, resolutions, and penalty calculations.

A Final Word: Convenience Versus Control

Generative AI is transformative. It enhances drafting, analysis, and research. It can elevate compliance operations if deployed thoughtfully. However, convenience without control is exposure. The lesson of United States v. Heppner is not that AI should be avoided. It is that AI must be governed with the same rigor as any other high-impact enterprise tool.

Privilege is fragile. Once waived, it cannot be restored. In a world where a chat prompt can become an exhibit, compliance professionals must lead the charge in redefining responsible AI use. If you are a chief compliance officer, this is your moment. Update your policies. Engage your board. Coordinate with legal and IT. Embed AI governance into your compliance framework. Because the next time an AI conversation surfaces in discovery, you do not want to explain why your program treated it like a harmless experiment.


FCPA Compliance Report – Navigating Compliance in 2026: Trends and Transformations

Welcome to the award-winning FCPA Compliance Report, the longest-running podcast in compliance. In this episode, we replay a recent webinar Tom Fox participated in, hosted by EQS. The panel moderator was Steph Holmes, and the panelists were Tom Fox, Mary Shirley, and Matt Kelly.

The session focuses on six key 2026 trends for ethics and compliance programs:

(1) AI moving from experimentation to operational use, emphasizing deliberate scaling, human-in-the-loop oversight, governance frameworks, monitoring, and managing “shadow AI,” with practical use cases such as policy chatbots, gift/travel/entertainment reviews, and AI-enabled third-party risk lifecycle management;

(2) enforcement “volatility” and unpredictable regulatory signals, with emphasis on returning to fundamentals such as documenting program inputs and outcomes, and noting continued activity, including record FCA resolutions and a DOJ whistleblower program award leading to a rapid antitrust settlement;

(3) shifting employer–employee dynamics, including Gartner survey findings that 40% of employees would intentionally miss a compliance requirement to harm their organization, discussion of trust, employee sentiment, multi-generational communication differences, and the need to partner with HR while staying within organizational lanes;

(4) heightened third-party and supply chain risk expectations, including cybersecurity, tariffs/tariff evasion, export controls, and the need to unify siloed risk views into a holistic third-party risk assessment;

(5) anticipated increases in whistleblowing and investigation demands amid volatility, highlighting the importance of preventing retaliation, keeping reporters feeling heard through responsive communications, triage protocols, and anonymized case examples to build trust; and

(6) measuring program effectiveness through a shift from outputs to outcomes, including reviewing KPIs and key risk indicators, peer review of investigations, hotline “mystery shopping,” and gap analyses against the DOJ’s ECCP and compliance program hallmarks, with special emphasis on third-party documentation and ongoing monitoring.

Resources:

Mary Shirley on LinkedIn

Steph Holmes on LinkedIn

Matt Kelly at Radical Compliance

EQS

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Returning to Venezuela on Amazon.com


From Enforcement-Driven to Purpose-Driven Compliance

For more than two decades, corporate compliance programs have been built around one central organizing principle: enforcement. Where regulators go, compliance resources follow. When the Department of Justice prioritizes anticorruption, companies invest in FCPA controls. When regulators turn to privacy, cybersecurity, or AML, compliance budgets pivot accordingly. This enforcement-driven approach has shaped the modern compliance profession.

Yet, as Veronica Root Martinez persuasively argues in her recent working paper, Purpose-Driven Compliance, this dominant model may be fundamentally flawed, certainly in the era of Trump. Despite unprecedented investments in compliance infrastructure, corporate misconduct persists. Repeat offenders remain common. Penalties have grown larger, but behavior has not meaningfully improved. For compliance professionals, this raises an uncomfortable question: are we optimizing for the wrong objective?

Martinez’s answer is both challenging and clarifying. Compliance programs should not be primarily designed to satisfy enforcement authorities or to maximize mitigation credit after failure. Instead, they should be anchored in the organization’s own purpose, business risks, and ethical standards. In short, it is time to move from enforcement-driven compliance to purpose-driven compliance.

The Limits of Enforcement-Driven Compliance

The enforcement-driven model rests on two assumptions. First, that enforcement priorities reflect a company’s most significant risks. Second, that imperfect compliance is inevitable and acceptable so long as the organization can demonstrate good-faith efforts. Martinez brings both under scrutiny.

Regulatory priorities often lag behind real business risks. Enforcement agencies focus on certain categories of misconduct because they are visible, politically salient, or historically entrenched. But the risks that most threaten an organization’s mission may lie elsewhere. Martinez highlights how firms can become over-invested in compliance areas that attract enforcement attention while under-investing in mission-critical risks to their operations.

The second assumption, that some level of misconduct is acceptable, is even more troubling. Behavioral ethics research suggests that tolerating small violations creates conditions for larger ones. When leaders frame misconduct as statistically insignificant or “within expectations,” they risk normalizing behavior that undermines culture, trust, and ultimately performance. Wells Fargo’s infamous “1% problem” illustrates this danger. Senior leadership took comfort in the idea that only a small fraction of employees were engaging in misconduct, failing to appreciate that those numbers reflected only the misconduct that had been detected.

An enforcement-driven mindset encourages this type of thinking. If the organization is never sanctioned, low detection rates look like success. But if the question is whether the organization is living up to its own purpose and values, the same data tell a very different story. This is not the broken windows theory of enforcement; it is something else entirely.

The Cost of Treating Compliance as a Cost of Doing Business

Another weakness of enforcement-driven compliance is that it can turn sanctions into a predictable line item. As firms grow larger and penalties are discounted through cooperation credit, fines risk being internalized as a cost of doing business. Empirical work cited by Martinez suggests that large, repeat offenders often pay penalties that are small relative to their assets and revenues. In that environment, enforcement loses much of its deterrent effect.

For compliance professionals, this dynamic creates a structural tension. Programs may be technically “effective” under DOJ guidance while still failing to prevent misconduct that harms customers, employees, and communities. The distinction between standards of review and standards of conduct becomes critical. Meeting the government’s expectations for leniency is not the same as meeting the organization’s ethical obligations to itself and its stakeholders.

What Is Purpose-Driven Compliance?

Purpose-driven compliance begins with a simple but powerful shift in perspective. Instead of asking, “What does the regulator expect?” the organization asks, “What risks threaten our ability to achieve our purpose and what standards of conduct are required to address them?” Martinez defines purpose-driven compliance as programs directed by three elements: the firm’s purpose, the inherent risks associated with pursuing that purpose, and the ethical standards the organization sets for itself. This approach does not reject enforcement frameworks; rather, it treats them as a floor, not a ceiling.

In practical terms, purpose-driven compliance requires leadership to articulate why the organization exists and how misconduct undermines that mission. For a bank, this may mean focusing on customer trust and market integrity. For a pharmaceutical company, it may mean prioritizing patient safety and scientific integrity. For a university, it may mean safeguarding academic freedom and institutional trust. For a summer camp, it means protecting the campers from flash floods and other storms.

Once the purpose is clearly defined, compliance risk assessments become more meaningful. Risks are evaluated not only by enforcement exposure but by their potential to compromise the organization’s core objectives. This reframing helps compliance leaders resist the temptation to chase regulatory trends at the expense of mission-critical risks.

Moving Beyond Mitigation to Aspirational Standards

A key insight in Martinez’s work is that firms often confuse mitigation with excellence. Compliance programs are designed to minimize penalties rather than to maximize ethical performance. Purpose-driven compliance challenges that mindset by encouraging organizations to adopt high, ethical, and aspirational standards of conduct.

This does not mean pursuing perfection through draconian controls or internal criminalization. Martinez rightly warns against overdeterrence and strict liability regimes that incentivize concealment rather than transparency. Instead, purpose-driven compliance emphasizes ethical framing, employee voice, and organizational learning. Compliance should never be Dr. No, sitting in the Department of Business Non-Development.

The examples of Wells Fargo and Novartis are instructive. Both organizations suffered repeated compliance failures under enforcement-driven regimes. Their subsequent reforms went beyond addressing the specific violations that triggered enforcement. They re-examined culture, leadership incentives, and ethical expectations. In Novartis’s case, tying bonuses to ethical performance and co-creating a new code of ethics signaled a shift from box-checking to values anchored in purpose.

Why Purpose-Driven Compliance Matters for the Modern CCO

For today’s chief compliance officer, Martinez believes purpose-driven compliance offers three critical benefits.

First, it creates durability. Enforcement priorities shift with administrations. Indeed, this Administration has signaled a cutback in white-collar enforcement by offering essentially get-out-of-jail-free cards to companies that self-disclose early. Far from diminishing the need for compliance, that pullback underscores it. A compliance program anchored solely in regulatory expectations will always be reactive. Purpose-driven programs are more stable because they are tied to the organization’s identity rather than external politics.

Second, it improves the quality of compliance metrics. Measuring effectiveness against internal standards allows organizations to ask harder questions about culture, decision-making, and root causes. Not every initiative will succeed, but a willingness to acknowledge failure is itself a sign of program maturity.

Third, it enhances credibility with boards and senior leadership. When compliance is framed as a strategic partner in achieving the organization’s mission, rather than as a defensive function, it earns a more meaningful seat at the table.

Conclusion

Compliance has never been more sophisticated, expensive, or visible. Yet sophistication alone does not guarantee effectiveness. Martinez’s Purpose-Driven Compliance challenges compliance professionals to rethink the foundations of their programs. Enforcement-driven compliance has taken us far, but it cannot take us far enough.

The next evolution of compliance requires organizations to define their own standards of conduct, grounded in purpose, risk, and ethics. That shift is not easy. It requires courage from compliance leaders and commitment from boards and executives. But if compliance is truly about preventing harm and sustaining trust, purpose-driven compliance is not optional. It is essential.

Categories
AI Today in 5

AI Today in 5: February 9, 2026, The AI Agents Doing Compliance Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI, drawn from the business world, compliance, ethics, risk management, leadership, or general interest, to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. What to do when AI is forced on compliance. (CW)
  2. Napier AI/AML report is out. (FinTechGlobal)
  3. AI and the accountability gap. (FinTechGlobal)
  4. Where AI is tearing through corporate America. (WSJ)
  5. Goldman is letting AI Agents do compliance. (PYMNTS)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Blog

From Principle to Proof: Operationalizing AI Governance Through the ECCP and NIST

Artificial intelligence governance has officially crossed the threshold from theory to expectation. The Department of Justice has not issued a standalone “AI rulebook,” but it has provided a framework for compliance professionals to consider the issue: the 2024 Evaluation of Corporate Compliance Programs (ECCP). In this version of the ECCP, the DOJ laid out guidance that any technology capable of creating material business risk must be governed, monitored, and improved like any other compliance risk. That includes artificial intelligence.

Too many organizations still treat AI governance as an ethics exercise, a technical problem, or a future concern. That posture is not defensible. The DOJ does not ask whether your program is fashionable or aspirational. It asks three very old-fashioned questions: Is your compliance program well designed? Is it applied in good faith? Does it work in practice? Those questions apply with full force to AI.

In this post, I want to move the discussion from abstract frameworks to operational reality. I will show how compliance professionals can use the ECCP to structure AI governance, select board-grade KPIs, and demonstrate effectiveness in a way regulators understand. I will also show how the NIST AI Risk Management Framework (NIST Framework) fits neatly underneath this structure as an operating model, not a competing philosophy.

AI Governance Is Already an ECCP Issue

The DOJ has repeatedly emphasized that compliance programs must evolve as business risks evolve. Artificial intelligence is not a future risk. It is already embedded in pricing, hiring, credit decisions, customer interactions, fraud detection, and third-party screening. If an AI model can influence revenue, customer outcomes, or regulatory exposure, it is a compliance risk. Period.

The ECCP does not require companies to eliminate risk. It requires them to identify, assess, manage, and learn from it. AI governance, therefore, belongs squarely inside the compliance program, not off to the side in an innovation lab or technology committee.

The ECCP as an AI Governance Blueprint

The power of the ECCP is its simplicity. Every enforcement action ultimately traces back to the same three questions. Let us apply them directly to AI.

Is the Program Well Designed?

Design begins with risk assessment. If your organization cannot answer a basic question such as “What AI systems do we have, who owns them, and what decisions do they influence?”, you do not have a program. You have hope. A well-designed AI compliance program starts with an AI asset inventory that identifies models, tools, vendors, and use cases. Each asset must be risk-classified based on business impact, regulatory exposure, and potential harm.

Board-level KPIs here are coverage metrics. How many AI assets have been identified? What percentage has been risk-classified? How many high-impact models have completed an impact assessment before deployment? If your dashboard does not show near-full coverage, the design is incomplete.

Policies and procedures come next. The DOJ does not care how many policies you have. It cares whether they provide clear guidance for real decisions. AI policies should cover the full lifecycle, from design and data sourcing through deployment, monitoring, and retirement. A practical KPI is policy coverage. What percentage of AI assets operate under current, approved procedures? How often are those procedures refreshed? Annual updates are a reasonable baseline in a rapidly changing risk environment.
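To make these coverage KPIs concrete, here is a minimal sketch of how a compliance team might compute them from an AI asset inventory. The `AIAsset` fields, the sample inventory, and the rollup below are hypothetical illustrations, not a standard schema.

```python
from dataclasses import dataclass

# Hypothetical inventory record; field names are illustrative, not a standard schema.
@dataclass
class AIAsset:
    name: str
    owner: str
    risk_classified: bool    # has the asset been risk-tiered?
    impact_assessed: bool    # impact assessment completed before deployment?
    policy_current: bool     # operates under a current, approved procedure?

def coverage_kpis(inventory):
    """Board-level coverage metrics: the percentage of the inventory that is
    risk-classified, impact-assessed, and covered by current policy."""
    total = len(inventory)
    if total == 0:
        return {"risk_classified": 0.0, "impact_assessed": 0.0, "policy_coverage": 0.0}
    pct = lambda pred: round(100 * sum(1 for a in inventory if pred(a)) / total, 1)
    return {
        "risk_classified": pct(lambda a: a.risk_classified),
        "impact_assessed": pct(lambda a: a.impact_assessed),
        "policy_coverage": pct(lambda a: a.policy_current),
    }

# Illustrative inventory — three assets at different stages of governance.
inventory = [
    AIAsset("pricing-model", "Finance", True, True, True),
    AIAsset("resume-screener", "HR", True, False, True),
    AIAsset("chat-assistant", "Support", False, False, False),
]
print(coverage_kpis(inventory))
# → {'risk_classified': 66.7, 'impact_assessed': 33.3, 'policy_coverage': 66.7}
```

A dashboard showing anything short of near-full coverage on these three numbers is itself the finding: the design stage is incomplete.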

Is the Program Applied Earnestly and in Good Faith?

Good faith is demonstrated through action, not intent. Training is a central indicator. The DOJ expects role-based training tailored to actual risk. A generic AI awareness course does not meet this standard. Developers, model owners, compliance reviewers, and business leaders all require different training. Completion rates matter, but so does comprehension. Measuring post-training proficiency improvement is one of the clearest signals that training is more than a box-checking exercise.

Third-party risk management is another critical area. Many organizations rely on external models, data providers, or AI-enabled vendors. If you do not understand how those tools are built, governed, and updated, you are importing risk without controls. Strong programs use standardized AI diligence questionnaires, assign assurance scores, and require contractual safeguards for high-risk vendors. A board-ready KPI here is the percentage of high-risk AI vendors subject to enhanced diligence and contractual controls.
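As a rough sketch of how vendor assurance scoring and the enhanced-diligence KPI might work in practice, the example below assigns weighted scores to a diligence questionnaire and rolls up coverage of high-risk vendors. The control areas, weights, and vendor records are illustrative assumptions, not an established methodology.

```python
# Hypothetical diligence scoring; control areas and weights are illustrative.
def assurance_score(answers):
    """answers: dict of control area -> True if the vendor evidenced the control.
    Returns a score from 0.0 (no assurance) to 1.0 (full assurance)."""
    weights = {"model_documentation": 3, "data_provenance": 3,
               "change_notification": 2, "incident_reporting": 2}
    earned = sum(w for area, w in weights.items() if answers.get(area))
    return earned / sum(weights.values())

def kpi_enhanced_diligence(vendors):
    """Board-ready KPI: percentage of high-risk AI vendors subject to both
    enhanced diligence and contractual controls."""
    high = [v for v in vendors if v["risk"] == "high"]
    if not high:
        return 100.0  # nothing high-risk to cover
    covered = sum(1 for v in high if v["enhanced_diligence"] and v["contract_controls"])
    return round(100 * covered / len(high), 1)

# Illustrative vendor register.
vendors = [
    {"name": "ScreenCo", "risk": "high", "enhanced_diligence": True,  "contract_controls": True},
    {"name": "ChatCo",   "risk": "high", "enhanced_diligence": True,  "contract_controls": False},
    {"name": "OCRCo",    "risk": "low",  "enhanced_diligence": False, "contract_controls": False},
]
print(kpi_enhanced_diligence(vendors))  # → 50.0 (one of two high-risk vendors fully covered)
```

The point of the sketch is the shape of the control, not the numbers: a standardized questionnaire produces a comparable score, and the KPI exposes any high-risk vendor operating without contractual safeguards.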

Mergers and acquisitions deserve special attention. AI risk does not wait for post-close integration. The DOJ has been explicit that pre-acquisition diligence matters. A defensible KPI is simple and unforgiving. 100% of acquisition targets with material AI usage must undergo AI due diligence before closing. Anything less invites inherited risk.

Does the Program Work in Practice?

This is where many programs fail. Paper controls do not impress regulators. Outcomes do. Incident reporting is a critical signal. A low number of reported AI issues may indicate fear, confusion, or a lack of awareness rather than genuine safety. What matters is whether issues are identified, investigated, and resolved promptly. Mean time to investigate is a powerful metric. If AI-related concerns take months to resolve, the program is not working. Clear escalation paths, defined investigation playbooks, and documented root cause analysis are essential.

Continuous monitoring is equally important. High-risk AI systems must be monitored for performance drift, data changes, and unintended outcomes. The DOJ expects companies to use data analytics to test whether controls are functioning. KPIs here include validation pass rates before deployment, drift-detection coverage for critical models, and corrective action closure rates. These are not technical vanity metrics. They are evidence of effectiveness.
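The monitoring KPIs above can be sketched in a few lines. The drift rule, tolerance, and model flags below are illustrative assumptions; real drift detection would apply statistical tests to production telemetry rather than a fixed accuracy threshold.

```python
import statistics

# Illustrative drift check: flag a model when its recent accuracy falls more than
# a tolerance below its validated baseline. Threshold and window are assumptions.
def drift_alert(baseline_accuracy, recent_accuracies, tolerance=0.05):
    """Return True when mean recent accuracy drops below baseline by more than tolerance."""
    if not recent_accuracies:
        return True  # no monitoring data is itself a finding
    return statistics.mean(recent_accuracies) < baseline_accuracy - tolerance

def monitoring_kpis(models):
    """models: list of dicts with 'critical', 'monitored', and 'validated' flags.
    Returns drift-detection coverage for critical models and the validation pass rate."""
    critical = [m for m in models if m["critical"]]
    cov = 100.0 if not critical else round(
        100 * sum(1 for m in critical if m["monitored"]) / len(critical), 1)
    total = len(models)
    val = 0.0 if total == 0 else round(
        100 * sum(1 for m in models if m["validated"]) / total, 1)
    return {"drift_coverage_critical": cov, "validation_pass_rate": val}

# Illustrative model register: two critical models, one of which lacks monitoring.
models = [
    {"name": "credit-scorer", "critical": True,  "monitored": True,  "validated": True},
    {"name": "fraud-flagger", "critical": True,  "monitored": False, "validated": True},
    {"name": "doc-tagger",    "critical": False, "monitored": False, "validated": False},
]
print(monitoring_kpis(models))
# → {'drift_coverage_critical': 50.0, 'validation_pass_rate': 66.7}
```

These are exactly the "evidence of effectiveness" numbers a board dashboard should surface: not how the models work, but whether the controls around them are operating.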

Where NIST Fits and Why It Matters

The NIST AI Risk Management Framework does not compete with the ECCP. It operationalizes it. The ECCP tells you what regulators expect. NIST helps you implement those expectations across governance, mapping, measurement, and management. For example, ECCP risk assessment aligns with NIST’s mapping function. ECCP’s continuous improvement aligns with NIST’s measurement and management functions. Using NIST terminology creates a shared language across compliance, legal, security, and data science teams. That shared language is governance in action.

Reporting AI Risk to the Board

Boards do not want technical detail. They want assurance. The most effective AI governance dashboards focus on a small set of indicators that answer the DOJ’s three questions: coverage, quality, responsiveness, and learning. Examples include the percentage of AI assets risk-classified, validation pass rates, investigation cycle times, and corrective action closure rates. When these metrics move in the right direction, they tell a credible story of control. More importantly, they show that compliance is not reacting to AI. It is governing it.

Five Key Takeaways for Compliance Professionals

  1. AI as Risk. Artificial intelligence is already within the scope of the ECCP. If AI can influence business outcomes, it must be governed like any other compliance risk.
  2. Risk Management Program. A well-designed AI compliance program begins with complete asset identification and risk classification. Coverage metrics are the first signal regulators will examine.
  3. Implementation. Good faith implementation is demonstrated through role-based training, disciplined third-party oversight, and pre-acquisition AI diligence. Intent without execution does not count.
  4. Outcomes, not Inputs. Effectiveness is proven through outcomes. Investigation speed, monitoring coverage, and corrective action closure rates matter more than policy volume.
  5. Complementary. The NIST Framework complements the ECCP by providing an operating model that compliance, legal, and technical teams can share. Together, they turn principles into proof.

Final Thoughts

AI governance is not about predicting the future. It is about demonstrating discipline in the present. The DOJ is not asking compliance professionals to become data scientists. It is asking us to do what we have always done well: identify risk, establish controls, test effectiveness, and improve continuously. The ECCP already gives you the framework. The only question is whether you will apply it.

Categories
Blog

Roman Philosophers and the Foundations of a Modern Compliance Program: Part 5 – Lucretius, Rationality, and Continuous Improvement in Compliance

Welcome to our concluding blog post on notable Roman Philosophers and the philosophical underpinnings of modern corporate compliance programs and compliance professionals, focusing on five philosophers from Rome spanning the end of the Roman Republic to the Roman Empire.

We have considered Cicero on duty, law, and the moral limits of business; Seneca on power, pressure, and ethical decision-making under stress; Varro on corporate governance; and Marcus Aurelius on ethical leadership and tone at the top. Today, we conclude with Lucretius, exploring rationality, fear, and risk perception.

I. Lucretius in Context: Seeing the World Clearly

Titus Lucretius Carus is the outlier in the Roman philosophical tradition, and that is precisely why he matters to compliance professionals. In De Rerum Natura (On the Nature of Things), Lucretius set out to explain the world as it actually is, stripped of superstition, fear, and comforting illusions. He believed that human suffering and bad decision-making were driven less by malice than by misunderstanding.

Lucretius lived in a Roman world gripped by fear of divine punishment, fate, and unseen forces. He argued that when people attribute events to superstition or rumor rather than observation and evidence, they lose the ability to respond rationally. Fear, in his view, was the enemy of clear judgment. Only through disciplined observation and reason could individuals and institutions act wisely.

For modern compliance professionals, Lucretius offers a final and essential lesson. Even the best-designed compliance program, staffed by accountable individuals and supported by ethical leadership, will fail if it cannot see itself clearly. Programs that rely on assumptions, anecdotes, or reputation rather than evidence inevitably drift. Lucretius teaches that rational observation is not merely a scientific virtue. It is an ethical one.

II. The Compliance Problem Lucretius Illuminates: Blind Spots and Compliance Theater

Many compliance programs operate on belief rather than proof. Leaders believe the culture is strong. Boards believe controls are effective. Compliance teams believe training is working. Yet enforcement actions routinely reveal blind spots that persisted for years, unnoticed or unchallenged. This gap between belief and reality is what Lucretius would have called superstition. In compliance, it takes the form of compliance theater: dashboards that look reassuring, certifications that go unquestioned, and metrics that measure activity rather than effectiveness.

The DOJ Evaluation of Corporate Compliance Programs (ECCP) repeatedly asks whether companies test, monitor, and improve their programs. Prosecutors are explicit that assumptions are insufficient. They want evidence that the program detects misconduct, adapts to change, and evolves based on lessons learned. Fear plays a central role here. Organizations fear discovering problems. They fear bad news reaching the board. They fear regulatory scrutiny. Lucretius warned that fear distorts perception. In compliance terms, fear leads to underreporting, superficial audits, and avoidance of uncomfortable data.

A compliance program that cannot tolerate evidence of weakness cannot improve. Lucretius insists that rational systems must prefer truth over comfort.

III. Modern Corporate Application: Lucretius, DOJ Expectations, and Evidence-Based Compliance

Applying Lucretius to modern compliance highlights the central role of monitoring, testing, and continuous improvement.

First, compliance monitoring must focus on effectiveness, not volume. Counting training completions or hotline calls says little about whether the program works. Lucretius would insist on asking harder questions. Are issues detected early? Are repeat risks declining? Are controls changing behavior?

Second, data must be interpreted without fear. DOJ guidance emphasizes learning from misconduct and near misses. Yet many organizations treat incidents as anomalies rather than signals. Lucretius teaches that patterns matter more than isolated events. Compliance teams should analyze trends across regions, functions, and time, even when results are uncomfortable.

Third, programs must adapt to changing risk. Lucretius rejected static explanations of the world. The DOJ similarly asks whether compliance programs evolve as business models, markets, and technologies change. A program designed for yesterday’s risks becomes a liability when conditions shift.

Fourth, monitoring must include culture and behavior, not just transactions. Culture surveys, exit interviews, and speak-up analytics provide insight into employees’ trust in the system. Lucretius would caution against ignoring qualitative data simply because it is harder to measure.

Fifth, continuous improvement must be documented and demonstrable. The DOJ evaluates whether companies close the loop by updating controls, training, and governance in response to findings. Rational compliance requires not only seeing clearly but acting on what is seen.

Finally, compliance leaders must resist narrative-driven assurance. Statements such as “this has never happened before” or “we trust our people” are not evidence. Lucretius reminds us that trust is strengthened, not weakened, by verification.

IV. Key Takeaways for Compliance Professionals

1. Father of CM/CI. Compliance professionals should view Lucretius as the philosophical foundation of monitoring and continuous improvement. Lucretius grounds compliance in disciplined observation rather than comfort or tradition. He reminds compliance professionals that a program cannot improve what it refuses to examine honestly. Monitoring and continuous improvement are not technical exercises but ethical commitments to see the organization as it truly operates.

2. Fact-based. Compliance should privilege evidence over assumption. Assumptions about culture, control effectiveness, or employee behavior create blind spots that persist until a failure forces attention. Lucretius warns that belief without verification is a form of self-deception. An effective compliance program insists on data, testing, and validation rather than reassurance.

3. Measure outcomes, not activity. Compliance should design metrics that measure effectiveness, not activity. Counting trainings delivered or policies acknowledged does not demonstrate that misconduct is being prevented or detected. Lucretius would reject metrics that comfort leadership without revealing reality. Compliance metrics must answer whether controls change behavior and reduce risk, not merely whether processes occurred.

4. Information is data. Compliance should treat incidents and near misses as data, not embarrassment. Organizations often hide or minimize incidents out of fear of reputational harm or internal scrutiny. Lucretius teaches that fear distorts judgment and delays learning. A mature compliance program uses incidents and near misses as signals for improvement rather than reasons for denial.

5. Risks Change. Compliance should evolve as risks, markets, and technologies change. Static compliance programs assume the world remains stable, an assumption Lucretius would view as fundamentally irrational. This is certainly not true in the age of Trump. Business models, geopolitical risks, and technologies shift faster than policy cycles. Continuous adaptation is the only rational response to an environment in constant motion.

6. Embrace Observation. Compliance should embrace rational observation as an ethical obligation. Seeing clearly is not morally neutral; it is a responsibility owed to stakeholders and institutions. Lucretius argued that ignorance sustained by fear causes harm. In compliance, choosing not to look is itself an ethical failure.

7. Evidence-based. Finally, Lucretius teaches that organizations fail not because reality is unknowable, but because they choose not to look. This is the capstone lesson of the compliance lifecycle. Organizations that avoid uncomfortable facts drift into compliance theater and false confidence. Rational, evidence-based compliance treats truth as an asset, even when it reveals weakness.

V. Conclusion: Roman Philosophy and the Compliance Program That Actually Works

Taken together, these five Roman philosophers describe the full lifecycle of a modern compliance program as it exists in the real world, not as it appears in policy manuals. Cicero establishes why compliance must exist at all, grounding the program in duty rather than expediency and reminding organizations that law is only the starting point. Seneca then confronts the reality that ethical commitments are tested under pressure, exposing how fear, ambition, and rationalization undermine even well-designed systems. Epictetus moves the analysis to the individual, insisting that ethical responsibility does not disappear inside hierarchy and that compliance ultimately depends on personal agency. Marcus Aurelius elevates that responsibility to leadership, showing how culture is formed through example and how ethical expectations live or die by the behavior of executives. Finally, Lucretius closes the loop, demanding rational observation, evidence, and continuous improvement so that compliance programs do not drift into assumption, superstition, or complacency.

What makes the Roman philosophers uniquely valuable to compliance professionals is their focus on institutions, power, and human behavior under constraint. The Greeks gave us ethical ideals. The Romans showed us how those ideals survive, or fail, inside complex systems. This mirrors the Department of Justice’s modern approach to compliance, which increasingly evaluates not whether a program exists, but whether it operates, adapts, and functions under real-world conditions.

For the compliance professional, the lesson of this series is both sobering and empowering. No single control, policy, or training module is sufficient. Effective compliance requires ethical foundations, behavioral awareness, individual accountability, principled leadership, and disciplined monitoring working together as an integrated system. Remove any one of these elements, and the program weakens. Align them, and compliance becomes not a defensive function, but a durable governance capability.

In combining these Roman insights with the earlier Greek philosophical foundations, the compliance professional gains more than historical perspective. They gain a framework for building programs that withstand pressure, earn trust, and evolve. In the end, that is the measure of a compliance program that actually works.

Categories
Blog

Roman Philosophers and the Foundations of a Modern Compliance Program: Part 4 – Marcus Aurelius and Ethical Leadership

I recently wrote a series on the direct link between ancient Greek Philosophers and modern corporate compliance programs and compliance professionals. It was so much fun and so well-received that I decided to follow up with a similar series on notable Roman Philosophers. This week, we will continue our exploration of the philosophical underpinnings of modern corporate compliance programs and compliance professionals by looking at five philosophers from Rome, spanning both the BCE and CE eras.

We have considered Cicero on duty, law, and the moral limits of business; Seneca on power, pressure, and ethical decision-making under stress; and Varro on corporate governance. Today, we continue with Marcus Aurelius, exploring ethical leadership, tone at the top, and culture as a compliance control. Tomorrow, we will conclude with Lucretius on rationality, fear, and risk perception.

I. Marcus Aurelius in Context: Power with Restraint

Imagine you are the single most powerful person on earth. Are you going to be an unrepentant narcissist in the manner of Donald Trump, who believes he should govern on his own twisted morality based simply on ‘gut instinct’? Or are you going to take a different approach, set out your reasoned approach to governing in a book, and then govern with the moral authority of thousands of years of philosophy?

Marcus Aurelius is often remembered as the philosopher-king, but that description understates the difficulty of his position. He ruled the Roman Empire during a period of war, plague, economic strain, and political instability. Unlike many philosophers, Marcus Aurelius did not write for an audience. His Meditations were private reflections, written to discipline his own thinking while exercising absolute power.

This matters for compliance professionals. Marcus Aurelius did not theorize about ethical leadership from a distance. He lived inside it. He understood that power magnifies temptation, insulates leaders from feedback, and creates opportunities for self-deception. His philosophy is therefore preoccupied with restraint, humility, consistency, and responsibility.

Marcus repeatedly reminded himself that leadership is not a privilege but a burden. Authority did not entitle him to indulgence; it imposed higher expectations. He believed that leaders set moral boundaries through conduct long before they issue instructions. In modern terms, Marcus Aurelius understood that culture flows downward from leadership behavior rather than upward from policy documents.

II. The Compliance Problem Marcus Aurelius Illuminates: Culture Eats Controls

One of the central lessons of modern compliance enforcement is that formal controls cannot compensate for poor culture. Organizations with detailed policies and sophisticated monitoring still fail when leadership behavior signals that results matter more than integrity. The DOJ Evaluation of Corporate Compliance Programs (ECCP) explicitly asks whether senior leaders demonstrate commitment to compliance through actions, not words. Regulators assess whether ethical behavior is encouraged, whether misconduct is addressed consistently, and whether leaders tolerate or reward problematic conduct.

Marcus Aurelius would recognize this dynamic immediately. He believed that people learn how to behave by observing those in power. When leaders act inconsistently with stated values, cynicism follows. When leaders rationalize misconduct, that rationalization spreads. Compliance programs often falter when leadership treats ethics as a communication exercise rather than a lived expectation. Codes of conduct and training sessions cannot overcome the daily signals sent by executive decisions, incentive structures, and responses to failure.

Marcus teaches that culture is not accidental. It is created continuously by leadership choices, especially under pressure.

III. Modern Corporate Application: Marcus Aurelius, DOJ Expectations, and Leadership Accountability

Applying Marcus Aurelius to modern compliance reveals several concrete expectations that closely align with DOJ guidance.

First, leadership behavior must be consistent. Marcus believed hypocrisy was corrosive to authority. The DOJ similarly evaluates whether leaders follow the same rules they impose on others. Exceptions for senior executives undermine program credibility and weaken deterrence.

Second, leadership must respond to misconduct with moral clarity. Marcus wrote that anger and denial cloud judgment. In compliance terms, this means addressing issues promptly, transparently, and proportionately. Delayed or defensive responses signal tolerance, even when discipline eventually occurs.

Third, middle management matters. Marcus understood that culture is transmitted through layers of authority. DOJ guidance emphasizes the role of middle managers as culture carriers. Compliance programs should equip managers with the tools and incentives to reinforce ethical behavior, not merely deliver targets.

Fourth, incentives must reflect values. Marcus warned against leaders who chase reputation or reward at the expense of principle. Modern compliance programs must ensure compensation structures do not reward outcomes achieved through questionable means. The DOJ has repeatedly cited incentive misalignment as a root cause of misconduct.

Finally, leadership must create psychological safety. Marcus believed leaders should listen more than they speak. In compliance terms, this translates into openness to bad news, encouragement of dissent, and protection for those who raise concerns. A culture that punishes truth-telling cannot sustain compliance.

IV. Key Takeaways for Compliance Professionals

1. The Blueprint. Compliance professionals should view Marcus Aurelius and his writings as the blueprint for culture-based compliance. You can draw a direct line from the Meditations to both your compliance program and the leadership skills a CCO needs. Compliance should evaluate leadership behavior as a primary control, not a soft factor. This means not only reviewing employees who are promoted to management but also conducting a deep dive into their backgrounds, along with thorough due diligence on any senior management hires from outside your organization.

2. Higher Standards. Compliance should hold senior leaders to higher standards of consistency and accountability.

3. Institutional Justice. Compliance should focus on how leaders respond to misconduct, not just how they prevent it. This is the CCO’s charge, and it requires building an institutional-fairness component into your compliance program.

  • Compliance should ensure incentives reinforce ethical behavior at every level. The DOJ has consistently discussed the role of incentives in any compliance program, as far back as the first edition of the FCPA Resource Guide in 2012.
  • Compliance should treat culture as an operational risk area subject to oversight and testing. Culture should be assessed, monitored, and improved. That culture is seen as a ‘soft’ part of an organization does not mean it should escape that rigor.

4. Walk the Walk. Finally, Marcus Aurelius reminds us that ethical leadership is not performative. It is visible, daily, and decisive. In organizations, culture follows leadership long before it follows policy.

V. Conclusion

Marcus Aurelius brings the compliance lifecycle to its cultural apex. He shows that leadership behavior is not merely influential but determinative, shaping whether ethical expectations are taken seriously or quietly dismissed. Yet even the strongest ethical culture is not self-sustaining. Leaders are human, memory fades, and good intentions erode without reinforcement. This is where culture must be supported by systems that observe, test, and correct.

Marcus Aurelius teaches us how leaders should behave; Lucretius challenges us to examine how organizations think. If Marcus focuses on moral example, Lucretius turns our attention to rational observation, warning against fear, superstition, and self-deception. The transition from Marcus Aurelius to Lucretius mirrors the shift from cultural leadership to continuous improvement, from ethical intent to empirical verification. In compliance terms, it is the move from assuming the program works to proving that it does, using data, monitoring, and clear-eyed analysis rather than hope or habit.

Join us tomorrow for our concluding article on Lucretius and Rationality in Monitoring and Continuous Improvement. We will consider where culture gives way to systems, data, and the discipline of seeing risk clearly rather than through fear or superstition.


Roman Philosophers and the Foundations of a Modern Compliance Program: Part 2 — Seneca on Pressure and Compliance

I recently wrote a series on the direct link between ancient Greek Philosophers and modern corporate compliance programs and compliance professionals. It was so much fun and so well-received that I decided to follow up with a similar series on notable Roman Philosophers. This week, we will continue our exploration of the philosophical underpinnings of modern corporate compliance programs and compliance professionals by looking at five philosophers from Rome, both from the Roman Republic and the Roman Empire.

Yesterday, we considered Cicero on duty, law, and the moral limits of business. Today, we look at Seneca on power, pressure, and ethical decision-making under stress, and on when compliance matters most. Upcoming blog posts will take up Marcus Aurelius on ethical leadership and tone at the top; Varro on corporate governance; and Lucretius on rationality, fear, and risk perception.

I. Seneca in Context: Ethics from Inside Power

Lucius Annaeus Seneca did not write philosophy from a safe distance. He lived at the center of Roman power, wealth, and danger. As tutor and later advisor to Emperor Nero, Seneca understood how quickly ethical intentions could be compromised by fear, ambition, loyalty, and survival. He also understood how people justify those compromises to themselves.

Seneca’s writings, particularly Letters from a Stoic and On Anger, are not abstract moral treatises. They are practical examinations of how human beings behave when placed under stress. He was deeply concerned with emotional excess, not because emotions were immoral, but because unchecked emotion distorts judgment. Anger, fear, greed, and the desire for approval all lead otherwise rational people to make decisions they later defend as necessary.

For Seneca, ethical failure was rarely sudden. It was incremental. People crossed lines not because they intended to be corrupt, but because they convinced themselves that circumstances demanded flexibility. This insight makes Seneca indispensable to the modern compliance professional, whose greatest challenge is not policy design, but behavior under pressure.

II. The Compliance Problem Seneca Illuminates: Rationalization Under Stress

Most compliance programs are designed around rules, controls, and reporting structures. Far fewer are designed with human psychology in mind. Seneca would argue that this is a critical oversight. Modern compliance failures often occur in high-pressure environments: aggressive sales targets, looming deadlines, competitive markets, political instability, or financial distress. In these moments, individuals do not typically reject ethical norms outright. Instead, they rationalize deviations as temporary, necessary, or harmless.

Common rationalizations include:

  • “This is how business is done here.”
  • “We will fix it later.”
  • “No one is really harmed.”
  • “Leadership expects results.”
  • (and my personal favorite) “We’ve always done it this way.”

Seneca warned that these internal narratives are more dangerous than ignorance. Once people justify unethical conduct to themselves, external controls become less effective. A policy cannot compete with a story someone tells themselves to preserve status, income, or safety. The DOJ, particularly in its various iterations of the Evaluation of Corporate Compliance Programs (ECCP), has increasingly focused on this dynamic. In recent enforcement actions, regulators have emphasized root-cause analysis, asking not only what rule was broken but also why individuals felt compelled to break it. Pressure, incentives, and cultural signals consistently appear as contributing factors.

Seneca teaches that compliance programs must anticipate rationalization. It is not enough to say “do not do this.” Organizations must understand when and why people will convince themselves that doing it is acceptable.

III. Modern Corporate Application: Seneca, DOJ Expectations, and Behavioral Compliance

The ECCP explicitly asks whether a company’s risk assessment and controls account for “the types of misconduct most likely to occur” and whether the company has “addressed the root causes of misconduct.” These questions align directly with Seneca’s insights. Consider major enforcement actions involving systemic bribery, fraud, or manipulation of controls, such as the Wells Fargo fraudulent-accounts scandal or the Volkswagen emissions-testing scandal, both of which involved employees operating under intense performance pressure. While not all wrongdoing can be excused by culture, regulators repeatedly noted environments where employees felt trapped between expectations and ethics.

A Seneca-informed compliance program would focus on several practical measures.

First, risk assessments should explicitly identify pressure points. Compliance should map where incentives, deadlines, or market conditions increase the likelihood of rationalization. This includes sales functions, third-party relationships, emerging markets, and crises.

Second, training should move beyond rules into scenario-based discussions. Seneca believed self-awareness was an ethical discipline. Modern compliance training should confront common rationalizations directly, helping employees recognize them before they take hold. DOJ guidance increasingly favors practical, tailored training over generic training.

Third, escalation pathways must be realistic under stress. A hotline that exists only on paper will not be used when fear of retaliation or failure dominates. Seneca understood that fear silences conscience. Effective compliance programs must demonstrate that speaking up under pressure is protected, valued, and acted upon.

Fourth, leadership messaging matters most during crises. Seneca warned that leaders set moral boundaries through behavior, not speeches. The DOJ has emphasized that how management responds to misconduct is a key indicator of program effectiveness. When leaders excuse results achieved through questionable means, rationalization spreads quickly.

Finally, compliance must be present before the crisis, not introduced afterward. Seneca would view reactive compliance as inherently weak. Ethical resilience must be built in advance, when judgment is clear and stakes are lower.

IV. Key Takeaways for Compliance Professionals

1. Behavioral Risk. Compliance professionals should view Seneca as a guide to behavioral risk, not philosophical pessimism. Seneca focuses on how real people behave under pressure rather than on abstract ethical ideals. He recognizes that stress, fear, ambition, and loyalty distort judgment long before formal rules are broken. For compliance professionals, Seneca provides a framework for understanding why misconduct occurs even in organizations with well-designed programs.

2. Pressure Points. Compliance should identify and manage pressure points where rationalization thrives. High-performance targets, crises, and competitive markets create environments where ethical shortcuts are easily justified. Seneca teaches that rationalization flourishes when people feel trapped between expectations and consequences. Compliance programs must proactively map and mitigate these pressure points rather than react after misconduct occurs.

3. Training Design. Compliance should design training that addresses how people actually make decisions under stress. Traditional rule-based training assumes calm, rational decision-making, which rarely occurs in real-world situations. Seneca reminds us that ethical failure often occurs in moments of emotional intensity rather than in deliberation. Effective compliance training should use scenarios and realistic dilemmas that reflect pressure, ambiguity, and competing incentives.

4. Escalation Mechanisms. Compliance should ensure escalation mechanisms work when fear and incentives collide. A hotline or reporting channel is ineffective if employees do not trust it during high-risk moments. Seneca understood that fear silences conscience and discourages disclosure. Compliance programs must test whether escalation pathways function when the personal cost of speaking up feels high.

5. Leadership Engagement. Compliance should engage leadership on how their responses to pressure shape ethical behavior. Leaders signal ethical boundaries most clearly when responding to setbacks, failures, or missed targets. Seneca warned that inconsistent or emotionally driven leadership responses accelerate ethical decay. Compliance professionals must ensure leaders understand that their reactions under pressure become cultural instruction.

  • Compliance should focus on prevention through awareness, not punishment after failure. Seneca emphasized self-awareness as the first defense against moral error. Compliance messaging that only appears after misconduct reinforces fear rather than learning. Ongoing communication about pressure, rationalization, and ethical expectations strengthens resilience before problems arise.
  • Finally, Seneca instructs us that ethical systems fail not because people abandon values, but because they convince themselves that those values can wait. A compliance program that ignores pressure is a program designed to fail when it matters most. Rationalization is the quiet mechanism through which ethical erosion occurs. Seneca shows that delay, exception-making, and “temporary” compromises accumulate into systemic failure. Compliance programs that do not confront rationalization directly leave themselves exposed at their most vulnerable moments.

V. Conclusion

Seneca exposes the internal dynamics that cause compliance programs to fail under pressure. He shows us how fear, ambition, and rationalization erode ethical judgment, even when rules are clear and controls are in place. But Seneca largely examines the problem from the inside out, focusing on how individuals respond to external forces. That analysis leads directly to the next question in the compliance lifecycle: what structures of governance and accountability keep individuals from facing that pressure alone? This is where Seneca gives way to Varro.

Join us tomorrow as we explore Varro and corporate governance for your compliance regime.