Categories
Great Women in Compliance

Great Women in Compliance: Culture Check: Are Your Speak Up Channels Effective?

Ever wish you could benchmark your Speak Up channels against more than just volume, issue types, and time to close? 

The Speak Up Self-Assessment (SUSA) was designed to help you go deeper by assessing organizational infrastructure—including reporting channels, confidentiality safeguards, follow-up processes, and governance of whistleblowing systems.

In this roundtable episode, we speak with guests: 

  • Professor Jessica McManus Warnell
  • Dr. Mary Gentile 
  • Allison Narmi 

about their work to bring compliance professionals a free, anonymous diagnostic tool for self-assessing speak-up channels. Building on the work done in the EU, our guests today are members of the project team that has developed an American version of the tool, with support from the Notre Dame Deloitte Center for Ethical Leadership. Link to the EU version here – https://edhec.az1.qualtrics.com/jfe/form/SV_eleMjkHraHzw6Hk

U.S. version coming soon.  

Categories
Daily Compliance News

Daily Compliance News: April 16, 2026, The Bribery is Legal in Illinois Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories drawn from the business world: compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories
Blog

Culture, Speak-Up, and Human Judgment: The Human Side of AI Governance

Artificial intelligence may be built on data, models, and code, but governance ultimately rests on people. For boards and Chief Compliance Officers, one of the most important questions is not only whether the organization has responsibly approved AI tools, but also whether employees are prepared to challenge them, report concerns, and apply human judgment when something does not look right. In many organizations, the earliest warning system for AI failure is not a dashboard. It is the workforce.

Over the course of this series, I have explored four critical governance challenges in AI: board oversight and accountability, strategy outrunning governance, data governance and privacy, and ongoing monitoring. This final blog post turns to the fifth and most underappreciated challenge of all: culture, speak-up, and human judgment.

Underappreciated because organizations often begin AI governance with structure in mind. They build committees, draft policies, classify risks, and establish approval gates. All of that is necessary. But structure alone is not sufficient. If the human beings closest to the work do not understand their role in AI governance, do not feel empowered to raise concerns, or begin to defer too readily to machine-generated outputs, the governance framework will be weaker than it appears on paper.

This is the point many companies miss. AI governance is not only about the technology. It is about whether the organization’s culture supports the responsible use of technology.

Employees Will See AI Failures First

In many companies, the first person to notice an AI problem will not be a board member, a Chief Executive Officer, or even a member of the governance committee. It will be an employee interacting with the tool in daily operations. It may be the customer service representative who sees the system generating inaccurate responses. It may be the HR professional who notices troubling patterns from an AI-supported screening tool. It may be the sales employee who sees a generative tool overstating product claims. It may be the finance professional who questions an automated summary that does not match underlying records. It may be the compliance analyst who sees a tool being used for an unapproved purpose.

That matters because early visibility is one of the most valuable protections a company can have. But visibility only becomes a control if employees know what to do with what they see. That is why culture is a governance issue. A workforce may spot the problem, but if employees do not understand that AI-related concerns are reportable, are unsure where to raise them, or believe management will ignore them, the warning system fails.

For boards and CCOs, that means AI governance cannot stop at policy creation. It must extend into behavior, reporting norms, and organizational trust.

Speak-Up Culture Is an AI Governance Control

Compliance professionals have long known that a speak-up culture is a control. It is often the first way a company learns of misconduct, process breakdowns, weak supervision, retaliation, harassment, fraud, or control evasion. The same principle now applies with equal force to AI.

Employees may observe biased outputs, inaccurate recommendations, privacy concerns, unexplained model behavior, misuse of tools, inappropriate reliance on machine-generated content, or efforts to bypass required human review. If they do not report those concerns, management may have no timely way to know what is happening.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) remains highly instructive. The ECCP places substantial emphasis on whether employees are comfortable raising concerns, whether the company investigates them appropriately, and whether retaliation is prohibited in practice. Those same questions should now be asked in the context of AI. Does the company’s reporting framework explicitly include AI-related concerns? Are managers trained to recognize and escalate those concerns? Are reports investigated with the same seriousness as other compliance issues? Are employees protected if they raise uncomfortable questions about a tool the business wants to use?

If the answer is no, the company may have AI procedures, but it has not yet embedded AI governance in its culture.

Human Judgment Cannot Be Optional

One of the most significant risks in AI governance is not simply that a model will be wrong. It is that people will stop questioning it. AI systems can produce outputs quickly, fluently, and with apparent confidence. That creates a powerful temptation for users to over-trust the tool. When a system sounds polished, appears efficient, and reduces workload, people may assume that its conclusions deserve deference. This is precisely where governance needs the corrective force of human judgment.

Human judgment cannot be treated as a ceremonial step or a paper requirement. It must be meaningful. That means the people reviewing AI outputs must have the authority, time, training, and confidence to challenge those outputs when needed. A human review requirement that exists only on paper is not much of a safeguard. If reviewers are overloaded, insufficiently trained, or culturally discouraged from slowing the process, the control may be largely illusory.

Boards should care about this because one of the easiest mistakes management can make is to describe human oversight in governance documents without testing whether it is functioning in practice. CCOs should care because this is a classic compliance problem. A control may be designed elegantly but fail in daily operations because the supporting culture is too weak to sustain it.

Training Must Change with AI

A company cannot expect good judgment around AI if it has not trained people on what good judgment looks like. That means AI training should go beyond technical usage instructions. Employees need to understand what risks may arise, what concerns are reportable, what approved use looks like, what prohibited use looks like, and why human challenge matters. Managers need additional training because they are often the first informal escalation point when an employee raises a concern. If managers dismiss AI concerns as overreactions, inconveniences, or resistance to innovation, the speak-up system will quickly lose credibility.

Training should also be role-based. The risks faced by a customer-facing team may differ from those faced by teams in HR, legal, procurement, marketing, finance, or internal audit. A generic AI training module may create awareness, but it will not create the operational judgment needed in high-risk areas.

This is where the NIST AI Risk Management Framework provides practical value. NIST’s emphasis on governance is not limited to formal structures. It contemplates culture, accountability, and the need for organizations to support informed decision-making across the enterprise. ISO/IEC 42001 similarly reinforces the importance of organizational competence, awareness, and defined responsibilities. Both frameworks point to a critical truth: responsible AI use depends not only on controls over the technology, but also on the capabilities of the people who use and oversee it.

Managers Matter More Than Companies Often Realize

If culture is the operating environment of governance, managers are often its most important local translators. An employee may not begin by filing a formal report. More often, an employee may raise a concern informally with a supervisor or colleague. “This output does not seem right.” “I do not think we should be using it this way.” “This seems to be pulling in sensitive information.” “This recommendation may be biased.” “The human review is not really happening anymore.”

The manager’s response in that moment matters enormously. Does the manager take the concern seriously? Does the manager know it should be escalated? Does the manager see it as a governance issue or as resistance to efficiency? Does the manager understand the difference between a minor usability complaint and a potentially significant compliance concern?

This is why boards and CCOs should not think about speak-up solely in hotline terms. AI governance depends on the broader management culture. If supervisors are not equipped to receive and escalate AI concerns appropriately, many issues will die in the middle of the organization before they ever reach a formal channel.

Anti-Retaliation Must Be Real in the AI Context

There is another dimension that cannot be overlooked: the risk of retaliation. In some organizations, employees may hesitate to raise AI concerns because they fear being labeled anti-innovation, obstructionist, or not commercially minded. That creates a subtle but serious governance risk. If the corporate atmosphere celebrates rapid AI adoption without equally celebrating responsible challenge, then employees may conclude that silence is safer than candor.

This is why anti-retaliation messaging must be explicit in the AI context. The company should make clear that raising concerns about inaccurate outputs, misuse, privacy risks, unfairness, or control breakdowns is part of responsible business conduct. It is not a failure to embrace innovation. It is a contribution to the effective governance of innovation.

The CCO should ensure that AI-related concerns are incorporated into existing anti-retaliation frameworks, investigations protocols, and communications. Boards should ask whether employee sentiment data, hotline trends, and internal investigations provide any signal that people are reluctant to question AI initiatives. If the organization is moving aggressively on AI, it should be equally serious about protecting those who raise governance concerns about it.

Documentation and Escalation Still Matter

As with every other aspect of AI governance, culture and judgment must be integrated into the process. A company should document how AI-related concerns can be reported, how they are triaged, who reviews them, what escalation triggers apply, and how resolutions are tracked. Concerns about AI should not be dismissed as vague general complaints. They should be reviewable and analyzable over time.

This is essential not only for accountability but for learning. Patterns in employee concerns may reveal weaknesses in training, design, vendor management, access controls, or oversight. A single report may be an isolated event. Repeated concerns within a single function may point to a systemic governance problem. That is why speak-up is not just about receiving reports. It is about turning those reports into organizational intelligence.
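The idea of turning speak-up reports into organizational intelligence can be made concrete with very simple tooling. The sketch below is a minimal illustration, not a description of any actual hotline system: the report fields, function names, themes, and the three-report threshold are all hypothetical assumptions chosen for the example.

```python
from collections import Counter

def flag_repeat_concerns(reports, threshold=3):
    """Group concern reports by (function, theme) and flag combinations
    that recur at or above the threshold -- a repeated theme within one
    function may signal a systemic issue rather than an isolated event."""
    counts = Counter((r["function"], r["theme"]) for r in reports)
    return sorted(
        (func, theme, n)
        for (func, theme), n in counts.items()
        if n >= threshold
    )

# Hypothetical hotline data, for illustration only.
reports = [
    {"function": "HR", "theme": "biased screening output"},
    {"function": "HR", "theme": "biased screening output"},
    {"function": "HR", "theme": "biased screening output"},
    {"function": "Sales", "theme": "overstated product claims"},
]

print(flag_repeat_concerns(reports))
# → [('HR', 'biased screening output', 3)]
```

Even a basic aggregation like this shifts the conversation from "we received a complaint" to "this function has a pattern," which is the analytical step the blog's point about organizational intelligence depends on.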

The ECCP again offers a useful framework. It asks whether investigations are timely, whether root causes are examined, and whether lessons learned are fed back into the compliance program. AI governance should work the same way. A reported concern should not end with a narrow answer to the immediate complaint. It should prompt management to ask what the issue reveals about the broader governance environment.

Boards Must Model the Right Tone

This final point may be the most important. Culture is shaped by what leadership rewards, tolerates, and asks about. If the board only asks about AI efficiency, adoption, and speed, management will take the signal. If the board asks whether employees are raising concerns, whether human oversight is meaningful, whether managers are trained, and whether retaliation protections are working, management will take that signal as well.

For CCOs, this is a vital opportunity. The compliance function can help boards understand that governance is not only about structure and controls, but also about whether the organization has preserved the human capacity to question, escalate, and correct. In the AI context, that may be the most important governance capability of all.

Because in the end, even the most advanced system will not govern itself. An enterprise must govern it. That requires culture. It requires trust. It requires the courage to speak up. And it requires strong human judgment to look at an impressive output and still ask, “Is this right?”

The Human Side of Governance Is the Decisive Side

This final article brings the series back to a simple truth. AI governance is not only about what the company builds. It is about how the company behaves.

Boards may establish oversight. Management may create structures. Compliance may build controls. But if employees are not prepared to report concerns or exercise judgment, the organization will remain vulnerable. A strong AI governance program does not merely control the system. It empowers the people around the system to challenge it responsibly.

That is the human side of governance, and in many ways it is the decisive side. 

Categories
Daily Compliance News

Daily Compliance News: March 31, 2026, The Why Did She Leave Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories drawn from the business world: compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • HR leaders say we are misusing the term ‘Agentic AI.’ (WSJ)
  • Senator wants to know why the SEC Director of Enforcement left. (Reuters)
  • UK fines Apple sub for breaching Russia sanctions. (FT)
  • Better get your culture right. (NYT)
Categories
Great Women in Compliance

Great Women in Compliance: Reflections on Investigations, Culture and the Future

In this episode of Great Women in Compliance, Lisa Fine speaks with Becky Rohr, Chief Compliance Officer and Head of Investigations at Ericsson. Becky talks about how her career journey led her to join Ericsson during a monitorship to strengthen their investigations function.

To do that, she focused on conducting fair, thorough, and efficient investigations, enhancing investigator training, and improving processes for collecting and reviewing digital evidence within a global organization. This work led to her being named Chief Compliance Officer at Ericsson and demonstrated the benefits of integrating investigations and compliance. Not only did this lead to the continued evolution of their compliance function, but it also connected hotline reports, investigations, and remediation through creative approaches to reinforcing ethics at Ericsson.

Lisa and Becky also discuss how the Ericsson team has addressed workplace misconduct globally, how to sustain compliance improvements after a monitorship ends, and the importance of leadership communication in maintaining a strong ethical culture.

The conversation also touches on culture change, addressing workplace misconduct globally, and how organizations can sustain strong compliance programs even after regulatory oversight ends.

Finally, Becky reflects on her decision to leave Ericsson and take a “power of the pause” moment before deciding on her next chapter—an approach that highlights the value of reflection and intentional career choices.

Categories
Innovation in Compliance

Innovation in Compliance – Healthcare Compliance: Fraud, Waste & Abuse, Culture, and Data-Driven Risk Management with Evan Sampson

Innovation occurs across many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox welcomes Evan Sampson, a noted healthcare compliance attorney.

Sampson traces his path from commercial litigation to representing healthcare practices on HIPAA/privacy and reimbursement matters, then moving in-house at a network of plastic surgery centers, where he managed compliance focused on fraud, waste, and abuse, and on evolving out-of-network billing rules leading into the No Surprises Act. Sampson explains how compliance programs can create business value beyond risk mitigation by uncovering inefficiencies and opportunities, such as identifying downcoding in medical billing and using complaint investigations to spot growth areas. He describes how his litigation background helps him anticipate how issues will unfold over time in investigations and litigation, thereby improving his credibility with business leaders. They discuss building a culture of compliance in fast-growing healthcare organizations, tracking regulatory changes across primary and secondary sources, and leveraging AI and data analytics to detect claim outliers and strengthen compliance.

Key highlights:

  • Healthcare Compliance Shift
  • Fraud, Waste, and Abuse
  • Compliance Creates Value
  • Building Compliance Culture
  • Tracking Regulatory Changes
  • AI in Compliance Analytics

Resources:

Evan Sampson on LinkedIn

Post & Schell

Innovation in Compliance was recently honored as the Number 4 podcast in Risk Management by 1,000,000 Podcasts.

Categories
All Things Investigations

ATI Podcast: Inhouse Insights – Building and Benefiting from a Culture of Compliance

Welcome to the inaugural episode of the newly rebranded ATI Podcast: Inhouse Insights—formerly known as All Things Investigations.

Presented by the Hughes Hubbard & Reed LLP Anti-Corruption & Internal Investigations Practice Group, this premiere episode sets the tone for a bold new chapter—bringing practical, in-house perspectives to today’s most pressing compliance challenges.

Host Michael DeBernardis welcomes Darryl Cyphers Jr., Senior Director of Legal Compliance at Klaviyo, for a candid and forward-looking conversation on how organizations can build—and sustain—a culture of compliance that actually works.

Together, they explore how compliance leaders can move beyond policies on paper to create real organizational impact—through measurable culture metrics, smarter use of AI to drive policy engagement, authentic tone at the top, and meaningful collaboration with HR and business partners. Darryl also shares practical guidance for navigating compliance gray areas and strengthening trust through continuous employee engagement and feedback.

Highlights include:

  • Defining a modern culture of compliance
  • Metrics and tools for measuring cultural effectiveness
  • Employee engagement and feedback that drive results
  • Building partnerships across HR and business teams
  • Innovative and engaging compliance training approaches
  • Navigating gray areas with confidence and credibility

Resources:

Hughes Hubbard & Reed Website

Klaviyo

Darryl Cyphers Jr. on LinkedIn

Categories
Blog

The Starliner, Culture and Compliance: Leadership Lessons from a NASA Investigation Report

Corporate compliance professionals spend a lot of time talking about controls, training, third parties, and investigations. Yet the hard truth is that the most important control environment sits above all of that: leadership behavior and the culture it creates. That is why this NASA investigation report on the Boeing CST-100 Starliner Crewed Flight Test (CFT) is such a useful case study. It is a technical report, to be sure. But it is also a cultural, leadership, and governance report. NASA’s bottom line is unambiguous: technical excellence and safety require transparent communication and clear roles and responsibilities, not as slogans, but as operating requirements that must be institutionalized so safety is never compromised in pursuit of schedule or cost.

If you are a Chief Compliance Officer, General Counsel, or business leader, you should read this report the way you read an enforcement action. Not to gawk. Not to assign blame. But to harvest lessons for your own organization before you have your own high-visibility close call.

The incident(s) that led to the report

The CFT mission launched June 5, 2024, as a pivotal step toward certifying Starliner to transport astronauts to the International Space Station. It was planned as an 8-to-14-day mission but was extended to 93 days after significant propulsion system anomalies emerged. Ultimately, the Starliner capsule returned uncrewed, while astronauts Barry “Butch” Wilmore and Sunita “Suni” Williams returned aboard SpaceX’s Crew-9 Dragon in March 2025. In February 2025, NASA chartered a Program Investigation Team (PIT) to examine the technical, organizational, and cultural factors contributing to the anomalies.

The report describes four major hardware anomaly areas. These include Service Module RCS thruster fail-offs that temporarily caused a loss of 6 Degrees of Freedom (6DOF) control during ISS rendezvous and required in-situ troubleshooting to recover enough capability to dock; a Crew Module thruster failure during descent that reduced fault tolerance; and helium manifold leaks, with seven of eight Service Module helium manifolds leaking during the mission. The PIT further determined that the 6DOF loss during rendezvous met the criteria for a Type A mishap (or at least a high-visibility close call), underscoring how close the program came to a very different ending.

That is the “what.” For compliance professionals, the “so what” is that NASA did not treat this as a purely engineering problem. It treated it as an integrated system failure, in which culture and leadership either reduce risk or magnify it.

Lesson 1: Decision authority is culture, not paperwork

One of the report’s clearest threads is that fragmented roles and responsibilities delayed decision-making and eroded confidence. In the compliance world, unclear decision rights become the breeding ground for “informal governance”: private conversations, end-runs around committees, and decisions that are never fully documented. Over time, that becomes a shadow-control environment that your policies cannot touch.

Compliance action steps

  • Define decision rights for the riskiest calls (high-risk third parties, market entry, major remediation, critical incidents).
  • Require a short, written record of: facts reviewed, options considered, dissent captured, decision made, and owner accountable.
  • Separate “recommendation authority” from “approval authority” so everyone knows where they sit.

Lesson 2: Transparency is a control, and selective data sharing destroys trust

The report explicitly flags that the lack of data access fueled concerns about selective information sharing. Interviewees described frustration that information could be filtered, selectively chosen, or sanitized, which eroded confidence in the process and people. It also notes reports of questions being labeled “too detailed” or “out of scope” without mechanisms to ensure concerns were addressed. That is the compliance danger zone. When teams believe the narrative matters more than the data, they stop escalating early. They start documenting defensively. They seek safety in silence.

Compliance action steps

  • Build “open data” expectations into your incident response and investigative protocols.
  • Create a defined pathway for technical or subject-matter dissent to be logged, reviewed, and dispositioned.
  • Treat meeting notes and decisions as governed records, not optional artifacts.

Lesson 3: Risk acceptance without rigor becomes “unexplained anomaly tolerance”

NASA calls out “anomaly resolution discipline” and warns that repeated acceptance of unexplained anomalies without root cause can lead to recurrence. That single lesson belongs on a poster in every compliance office. In corporate terms, “unexplained anomalies” are recurring control exceptions, repeat hotline themes, repeated third-party red flags, and audit findings that are “managed” rather than fixed. If leadership normalizes that pattern, it teaches the organization that closure is more important than correction.

Compliance action steps

  • Require root cause analysis for repeat issues, not just incident closure.
  • Set escalation thresholds for “repeat with no root cause” findings.
  • Audit remediation quality, not only remediation completion.
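The "repeat with no root cause" escalation threshold in the action steps above can be operationalized with minimal tooling. This is a hedged sketch only: the finding fields, issue-type labels, and the two-occurrence trigger are illustrative assumptions, not a prescribed standard from the NASA report or any regulator.

```python
from collections import defaultdict

def escalation_candidates(findings, repeat_limit=2):
    """Flag issue types that have recurred at least `repeat_limit` times
    without a documented root cause -- the pattern the report warns
    becomes tolerance of unexplained anomalies."""
    missing_rca = defaultdict(int)
    for f in findings:
        if not f.get("root_cause"):  # empty or missing root cause
            missing_rca[f["issue_type"]] += 1
    return {issue for issue, n in missing_rca.items() if n >= repeat_limit}

# Illustrative audit findings, for the sketch only.
findings = [
    {"issue_type": "third-party screening gap", "root_cause": None},
    {"issue_type": "third-party screening gap", "root_cause": None},
    {"issue_type": "expense policy exception", "root_cause": "training gap"},
]

print(escalation_candidates(findings))
# → {'third-party screening gap'}
```

The design point is that the trigger fires on the combination of repetition and missing root cause, so a finding that was closed with a documented cause never counts against the threshold, no matter how often the closure recurs.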

Lesson 4: Partnerships fail when “shared accountability” is not operationalized

The report emphasizes that shared accountability in the commercial model was inconsistently understood and applied. It also notes that historical relationships and private conversations outside formal forums created perceptions of blurred boundaries, favoritism, and lack of objectivity, whether or not those perceptions were accurate. Compliance teams have seen this movie. Think distributors, joint ventures, outsourced compliance support, and major technology partners. If accountability is shared in theory but siloed in practice, something will fall through the cracks. Usually, it falls right into your lap when regulators arrive.

Compliance action steps

  • Define “shared accountability” in contracts, governance charters, and escalation protocols.
  • Ensure independence and objectivity are protected by design, not by personality.
  • Create joint forums where data is shared broadly, dissent is recorded, and decisions are made openly.

Lesson 5: Burnout is a risk factor, and meeting chaos is a governance failure

The report’s recommendations recognize the operational reality: high-pressure environments can degrade decision quality. It calls for “pulse checks,” rotation of high-pressure responsibilities, contingency staffing, and time protection for deep work to proactively address burnout and improve decision-making under mission conditions. Compliance professionals should take that to heart. Crisis cadence is sometimes unavoidable. Permanent crisis cadence is a leadership choice. And it carries predictable consequences: shortcuts, missed details, weakened documentation, and poor judgment.

Compliance action steps

  • Build surge staffing plans for investigations and incident response.
  • Rotate incident commander roles when events extend beyond days.
  • Protect time for analysis, not just meetings and status updates.

Lesson 6: Accountability must be visible, not performative

NASA does not bury the human dimension. The report contains leadership recommendations to speak openly with the joint team about leadership accountability, including concurrence with the report and reclassification as a mishap, and to hold a leadership-led stand-down day focused on reflection, accountability concerns, and rebuilding trust. For corporate leaders, this is where trust is won or lost after a crisis. Employees can tolerate a hard outcome. They struggle to tolerate spin. If your organization communicates externally with confidence but internally with vagueness, your culture learns the wrong lesson: optics first, truth second.

Compliance action steps

  • After a major incident, publish an internal accountability and remediation plan with owners and timelines.
  • Provide regular updates on what has been completed, what is delayed, and why.
  • Make it safe for the workforce to ask questions in interactive forums, as NASA recommends.

Lesson 7: Trust repair requires a plan, not a pep talk

One of the most useful artifacts in the report is a sample Organizational Trust Plan. It sets a goal to rebuild trust by establishing clear expectations, open accountability, and shared commitment to safety and mission success. It includes objectives around transparent communication, acknowledging past challenges, reinforcing shared values, and structured engagement. It then lays out action steps: leadership engagement, facilitated sessions, outward expressions of accountability, teamwide rollout, training and coaching, and communication through a written plan and regular updates.

That is exactly the kind of operational discipline compliance leaders should bring to culture work. Culture does not change because someone gives a speech. Culture changes when the organization changes how it makes decisions, treats dissent, and follows through.

Five key takeaways for the compliance professional

  1. Clarify decision rights before the crisis. Ambiguity becomes politics under pressure.
  2. Make transparency non-negotiable. Perceived filtering of data destroys credibility.
  3. Do not normalize unexplained anomalies. Repeat issues without a root cause are future failures.
  4. Operationalize shared accountability with partners. Otherwise, it is a slogan.
  5. Rebuild trust with a written plan and visible accountability. Trust repair is a managed process.

In the end, the Starliner lesson for compliance is simple: controls matter, but culture decides whether controls work when it counts. If leadership cannot run disagreements well, cannot share data broadly, and cannot demonstrate accountability after the fact, the best-written compliance program in the world will fail the moment the pressure rises.

Categories
Blog

Roman Philosophers and the Foundations of a Modern Compliance Program: Part 4 – Marcus Aurelius and Ethical Leadership

I recently wrote a series on the direct link between ancient Greek philosophers and modern corporate compliance programs and compliance professionals. It was so much fun and so well received that I decided to follow up with a similar series on notable Roman philosophers. This week, we will continue our exploration of the philosophical underpinnings of modern corporate compliance programs and compliance professionals by looking at five philosophers from Rome, spanning both the BCE and CE eras.

We have considered Cicero on duties, law, and the moral limits of business; Seneca on power, pressure, and ethical decision-making under stress; and Varro on corporate governance. Today, we turn to Marcus Aurelius on ethical leadership, tone at the top, and culture as a compliance control. Tomorrow, we will conclude with Lucretius, exploring rationality, fear, and risk perception.

I. Marcus Aurelius in Context: Power with Restraint

Imagine you are the single most powerful person on earth. Are you going to be an unrepentant narcissist in the manner of Donald Trump, who believes he should govern on his own twisted morality based simply on ‘gut instinct’? Or are you going to take a different approach, set out your reasoned approach to governing in a book, and then govern with the moral authority of thousands of years of philosophy?

Marcus Aurelius is often remembered as the philosopher-king, but that description understates the difficulty of his position. He ruled the Roman Empire during a period of war, plague, economic strain, and political instability. Unlike many philosophers, Marcus Aurelius did not write for an audience. His Meditations were private reflections, written to discipline his own thinking while exercising absolute power.

This matters for compliance professionals. Marcus Aurelius did not theorize about ethical leadership from a distance. He lived inside it. He understood that power magnifies temptation, insulates leaders from feedback, and creates opportunities for self-deception. His philosophy is therefore preoccupied with restraint, humility, consistency, and responsibility.

Marcus repeatedly reminded himself that leadership is not a privilege but a burden. Authority did not entitle him to indulgence; it imposed higher expectations. He believed that leaders set moral boundaries through conduct long before they issue instructions. In modern terms, Marcus Aurelius understood that culture flows downward from leadership behavior rather than upward from policy documents.

II. The Compliance Problem Marcus Aurelius Illuminates: Culture Eats Controls

One of the central lessons of modern compliance enforcement is that formal controls cannot compensate for poor culture. Organizations with detailed policies and sophisticated monitoring still fail when leadership behavior signals that results matter more than integrity. The DOJ Evaluation of Corporate Compliance Programs (ECCP) explicitly asks whether senior leaders demonstrate commitment to compliance through actions, not words. Regulators assess whether ethical behavior is encouraged, whether misconduct is addressed consistently, and whether leaders tolerate or reward problematic conduct.

Marcus Aurelius would recognize this dynamic immediately. He believed that people learn how to behave by observing those in power. When leaders act inconsistently with stated values, cynicism follows. When leaders rationalize misconduct, that rationalization spreads. Compliance programs often falter when leadership treats ethics as a communication exercise rather than a lived expectation. Codes of conduct and training sessions cannot overcome the daily signals sent by executive decisions, incentive structures, and responses to failure.

Marcus teaches that culture is not accidental. It is created continuously by leadership choices, especially under pressure.

III. Modern Corporate Application: Marcus Aurelius, DOJ Expectations, and Leadership Accountability

Applying Marcus Aurelius to modern compliance reveals several concrete expectations that closely align with DOJ guidance.

First, leadership behavior must be consistent. Marcus believed hypocrisy was corrosive to authority. The DOJ similarly evaluates whether leaders follow the same rules they impose on others. Exceptions for senior executives undermine program credibility and weaken deterrence.

Second, leadership must respond to misconduct with moral clarity. Marcus wrote that anger and denial cloud judgment. In compliance terms, this means addressing issues promptly, transparently, and proportionately. Delayed or defensive responses signal tolerance, even when discipline eventually occurs.

Third, middle management matters. Marcus understood that culture is transmitted through layers of authority. DOJ guidance emphasizes the role of middle managers as culture carriers. Compliance programs should equip managers with the tools and incentives to reinforce ethical behavior, not merely deliver targets.

Fourth, incentives must reflect values. Marcus warned against leaders who chase reputation or reward at the expense of principle. Modern compliance programs must ensure compensation structures do not reward outcomes achieved through questionable means. The DOJ has repeatedly cited incentive misalignment as a root cause of misconduct.

Finally, leadership must create psychological safety. Marcus believed leaders should listen more than they speak. In compliance terms, this translates into openness to bad news, encouragement of dissent, and protection for those who raise concerns. A culture that punishes truth-telling cannot sustain compliance.

IV. Key Takeaways for Compliance Professionals

1. The Blueprint. Compliance professionals should view Marcus Aurelius and his writings as the blueprint for culture-based compliance. You can draw a direct line from the Meditations to both your compliance program and the leadership skills a CCO needs. Compliance should evaluate leadership behavior as a primary control, not a soft factor. This means not only reviewing employees who are promoted to management but also conducting a deep dive into their backgrounds, along with thorough due diligence for any senior management hires from outside your organization.

2. Higher Standards. Compliance should hold senior leaders to higher standards of consistency and accountability.

3. Institutional Justice. Compliance should focus on how leaders respond to misconduct, not just how they prevent it. This is the CCO’s charge, and it must include an institutional fairness component in your compliance program.

4. Aligned Incentives. Compliance should ensure incentives reinforce ethical behavior at every level. The DOJ has consistently discussed the role of incentives in compliance programs, going back to the first edition of the FCPA Resource Guide in 2012.

5. Culture as an Operational Risk. Compliance should treat culture as an operational risk area subject to oversight and testing. Culture should be assessed, monitored, and improved; the fact that it is often seen as a ‘soft’ part of an organization does not mean it deserves less rigor.

6. Walk the Walk. Finally, Marcus Aurelius reminds us that ethical leadership is not performative. It is visible, daily, and decisive. In organizations, culture follows leadership long before it follows policy.

V. Conclusion

Marcus Aurelius brings the compliance lifecycle to its cultural apex. He shows that leadership behavior is not merely influential but determinative, shaping whether ethical expectations are taken seriously or quietly dismissed. Yet even the strongest ethical culture is not self-sustaining. Leaders are human, memory fades, and good intentions erode without reinforcement. This is where culture must be supported by systems that observe, test, and correct.

Marcus Aurelius teaches us how leaders should behave; Lucretius challenges us to examine how organizations think. If Marcus focuses on moral example, Lucretius turns our attention to rational observation, warning against fear, superstition, and self-deception. The transition from Marcus Aurelius to Lucretius mirrors the shift from cultural leadership to continuous improvement, from ethical intent to empirical verification. In compliance terms, it is the move from assuming the program works to proving that it does, using data, monitoring, and clear-eyed analysis rather than hope or habit.

Join us tomorrow for our concluding article on Lucretius and Rationality in Monitoring and Continuous Improvement. We will consider where culture gives way to systems, data, and the discipline of seeing risk clearly rather than through fear or superstition.

Categories
Blog

Greek Philosophers Week: Part 1 – Socrates and the Art of Asking Questions

I have long wanted to trace the origins of the modern corporate compliance organization back to the ancient Greek philosophers, drawing lessons for compliance and ethics in 2026 and beyond. Today, I begin a five-part series where I do just that. In this series, we will consider Socrates, Plato, Aristotle, Pythagoras, and Euclid. We start with Socrates.

Socrates left no writings of his own. What he left was a method. He believed wisdom began with recognizing what one did not know and then relentlessly testing assumptions through disciplined questioning. That approach maps directly onto the daily work of the compliance professional. Risk assessments, investigations, root cause analysis, culture reviews, and even board reporting all rise or fall based on the quality of the questions asked.

Every effective compliance program begins with a question. Not a policy. Not a control. Not a dashboard. A question. That insight alone makes Socrates the right place to start any serious discussion about the influence of ancient Greek philosophy on modern corporate compliance and ethics programs.

The Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) does not use the word “Socratic,” but its expectations are unmistakably aligned with Socratic inquiry. Prosecutors repeatedly ask whether a company understands its risks, tests its assumptions, challenges its controls, and adapts when reality changes. A compliance program that does not ask hard questions is not mature. It is merely quiet. Indeed, Hui Chen, the author of the original ECCP, has said that a key purpose of the ECCP was to get compliance professionals to ‘ask questions’.

Ethical Inquiry as a Compliance Obligation

Socrates believed that unexamined beliefs were dangerous. He challenged Athenian leaders not because he enjoyed disruption, but because false confidence creates harm. In a corporate setting, the same risk exists when executives assume that a policy equals compliance or that training completion equals ethical behavior. The ECCP frames this same concern as three fundamental questions:

  1. Is the corporation’s compliance program well designed?
  2. Is the program being applied earnestly and in good faith? In other words, is the program adequately resourced and empowered to function effectively?
  3. Does the corporation’s compliance program work in practice?

These questions are fundamentally Socratic. They demand inquiry into how the business actually operates, where pressure points exist, and how misconduct could realistically occur. A compliance function that accepts management narratives at face value fails this test.

Daily compliance operations depend on this discipline. When reviewing third-party relationships, a Socratic compliance officer does not ask whether due diligence was performed. They ask whether it was sufficient, whether red flags were rationalized, and whether business incentives distorted judgment. That is inquiry, not administration.

Challenging Assumptions Without Becoming the Enemy

Socrates was executed because his questioning made powerful people uncomfortable. Compliance professionals face a less dramatic, but no less real, version of that tension. The role requires challenging assumptions, even when doing so slows deals, complicates reporting lines, or disrupts revenue projections.

The ECCP specifically evaluates whether a corporate compliance function has sufficient staff to audit, document, analyze, and utilize the results of the corporation’s compliance efforts. Prosecutors should also determine “whether the corporation’s employees are adequately informed about the compliance program and are convinced of the corporation’s commitment to it,” including whether the company’s culture of compliance makes employees aware that any criminal conduct, including the conduct underlying the investigation, will not be tolerated.

Those structural questions exist because DOJ understands that inquiry without protection is performative. If compliance professionals cannot safely ask uncomfortable questions, the program is cosmetic.

In daily operations, this plays out in subtle ways. Does compliance have the authority to pause a transaction? Can investigators follow evidence wherever it leads? Are audit findings welcomed or explained away? A Socratic approach demands that compliance leaders test these realities rather than assume the answer.

The Socratic Method in Investigations and Root Cause Analysis

Socrates did not accept the first answer offered. He pushed deeper, often exposing contradictions or incomplete reasoning. That approach is directly applicable to investigations and root cause analysis. The ECCP places significant emphasis on whether companies understand why misconduct occurred and whether remediation addresses underlying causes. Too many investigations stop at identifying who violated a policy. Echoing Jonathan Marks, Socratic investigation asks why the violation made sense to the individual at the time. What pressures existed? What incentives misaligned behavior? What controls failed or were bypassed?

This type of inquiry requires patience and courage. It also depends on trust from leadership. Findings may implicate management decisions, cultural signals, or compensation structures. Socrates reminds us that truth-seeking is rarely comfortable, but it is essential to ethical improvement.

Culture Is Revealed by the Questions You Allow

Socrates believed that a society’s health could be measured by its openness to questioning. The same is true for corporate culture. The questions employees feel safe asking reveal more than any values statement. The ECCP now explicitly asks companies to explain how they measure and address culture. The ECCP states, “Prosecutors should also assess how the company has leveraged its data to gain insights into the effectiveness of its compliance program and otherwise sought to promote an organizational culture that encourages ethical conduct and a commitment to compliance with the law.” Surveys, hotline data, and exit interviews are tools, but they are meaningless without inquiry. Key questions include: Are employees encouraged to speak up? Are concerns investigated thoroughly? Are outcomes communicated? Is retaliation punished?

In daily compliance practice, this means listening as much as enforcing. A Socratic compliance program does not treat employee concerns as noise to be managed. It treats them as data points to be explored. The quality of questions asked in response to a report often determines whether trust is strengthened or destroyed.

5 Key Takeaways for the Compliance Professional

1. Effective compliance begins with inquiry, not documentation.

A compliance program does not become effective simply because policies exist or training is completed. Effectiveness begins when compliance professionals consistently ask how misconduct could realistically occur within their organization. This requires challenging business assumptions, pressure points, and incentive structures. The ECCP repeatedly emphasizes the importance of understanding risk in context, which is impossible without disciplined questioning. A Socratic approach positions inquiry as an operational obligation, not an intellectual exercise, ensuring the program remains dynamic, responsive, and grounded in reality rather than formalism.

2. Risk assessments are living Socratic exercises, not static reports.

Too many organizations treat risk assessments as periodic documentation rather than ongoing inquiry. A Socratic risk assessment tests assumptions continuously as business models, geographies, and incentives evolve. Compliance professionals should revisit risk hypotheses, ask whether controls still function as intended, and challenge comfort-driven conclusions. Under the ECCP, regulators expect risk assessments to inform program design and resource allocation. Socratic inquiry ensures risk assessments remain relevant, credible, and capable of identifying emerging threats before they mature into enforcement issues.

3. Investigations must pursue understanding, not merely attribution.

Identifying who violated a policy is rarely sufficient to prevent recurrence. A Socratic investigation asks why the misconduct occurred, what pressures or incentives influenced behavior, and how organizational systems failed. This aligns directly with the ECCP’s focus on root cause analysis and remediation. When compliance professionals ask deeper questions, investigations become tools for program improvement rather than disciplinary endpoints. This approach strengthens controls, enhances credibility with regulators, and reduces the likelihood of repeat misconduct driven by unresolved systemic weaknesses.

4. Speak-up culture is defined by response quality, not hotline volume.

Organizations often measure speak-up culture by the number of reports received, but Socrates teaches that the real measure lies in how questions are received and addressed. Employees quickly learn whether raising concerns leads to thoughtful inquiry or defensive dismissal. The ECCP evaluates whether companies encourage reporting, protect against retaliation, and communicate outcomes appropriately. A Socratic compliance function listens carefully, asks clarifying questions, and treats concerns as signals worth examining. That discipline builds trust and reinforces ethical accountability across the organization.
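To make that shift from volume to response quality concrete, here is a minimal sketch of how a compliance analytics team might compute quality-oriented speak-up metrics from hotline case data. The record structure and field names are hypothetical, purely for illustration; real case-management systems will differ.

```python
from statistics import median

# Hypothetical hotline case records; field names are illustrative only.
cases = [
    {"id": 1, "substantiated": True,  "days_to_close": 21, "outcome_communicated": True},
    {"id": 2, "substantiated": False, "days_to_close": 45, "outcome_communicated": False},
    {"id": 3, "substantiated": True,  "days_to_close": 14, "outcome_communicated": True},
    {"id": 4, "substantiated": False, "days_to_close": 60, "outcome_communicated": True},
]

def speak_up_metrics(cases):
    """Return response-quality metrics rather than raw report volume."""
    total = len(cases)
    return {
        # Share of reports that investigation substantiated.
        "substantiation_rate": sum(c["substantiated"] for c in cases) / total,
        # Median, not mean, so a few stale cases do not mask typical speed.
        "median_days_to_close": median(c["days_to_close"] for c in cases),
        # Share of reporters who were told the outcome of their report.
        "outcome_communication_rate": sum(c["outcome_communicated"] for c in cases) / total,
    }

print(speak_up_metrics(cases))
```

Trending these figures quarter over quarter, rather than counting reports, gives a closer read on whether concerns lead to thoughtful inquiry or defensive dismissal.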

5. Socratic questioning requires independence, authority, and protection.

Inquiry without authority is performative. Socrates paid the ultimate price for challenging power, but modern compliance professionals should not. The ECCP explicitly assesses whether compliance functions have sufficient independence, resources, and access to leadership. Without these safeguards, difficult questions go unasked or unanswered. A Socratic compliance program empowers professionals to challenge decisions, pause transactions, and escalate concerns without fear of retaliation. That structural support transforms ethical inquiry from individual courage into institutional practice.

From Socrates to Plato: From Inquiry to Structure

Socrates gives us the starting point. He teaches the compliance professional how to think, question, and resist complacency. But inquiry alone is not enough. Questions must eventually lead to structure, governance, and systems that translate insight into action.

That transition sets the stage for Plato. Where Socrates focuses on method, Plato focuses on design. The movement from Socrates to Plato mirrors the evolution of a compliance program itself, from asking whether risks exist to building governance structures capable of addressing them. In that sense, Socrates is the conscience of the compliance function. He reminds us that effectiveness begins with intellectual honesty and ethical curiosity. Without those traits, even the most sophisticated compliance architecture will rest on shaky ground.

Join us tomorrow for Part 2 and learn about Plato’s role in today’s compliance and ethics programs.