Categories
Blog

John Locke and the Legitimacy of Compliance Governance

We continue our exploration of Enlightenment thinkers to see their influence on modern compliance programs. This week’s category is broader than philosophers, as many of these men excelled in numerous fields, including science, mathematics, and medicine. However, each contributed a key component that relates directly to our modern compliance regimes. In this post, we consider John Locke and what he teaches as the next step beyond Bacon and Descartes: a compliance system must be legitimate to earn trust.

If Francis Bacon teaches us that compliance must be grounded in evidence, and René Descartes teaches us that evidence must be examined with rigor, John Locke brings us to the next great question: why should anyone trust the system itself? That question sits at the center of every modern compliance program. Employees are asked to report concerns, managers are expected to model ethical behavior, boards are charged with oversight, and companies routinely tell regulators that their compliance program is real, effective, and embedded in the business. But none of that works if the people inside the organization do not believe the system is fair, credible, and worthy of trust. That is why John Locke matters so much to the modern compliance professional.

Locke is often remembered as a philosopher of liberty, consent, rights, and accountable government. He argued that authority is legitimate only when it is exercised responsibly and for the benefit of those subject to it. Power, in Locke’s world, is not self-justifying. It must be bounded, accountable, and tied to obligations. That idea is highly relevant to corporate compliance. A compliance program is not legitimate simply because senior management approved it, or because the board receives quarterly updates, or because policies have been published on an intranet site. It is legitimate when employees experience it as fair, when reports are taken seriously, when retaliation is not tolerated, when discipline is consistent, and when leadership is seen to be accountable to the same standards as everyone else. That is not abstract philosophy. That is compliance governance.

Why Locke Matters to Compliance

Locke’s central insight is that authority derives its legitimacy from responsible exercise and reciprocal obligation. In a political context, that meant government existed to protect rights and serve the governed, not simply to command obedience. In the corporate context, the analogy is not exact, but the lesson is powerful. Employees will not trust a compliance program merely because it exists. They will trust it only if they believe it operates fairly, protects those who raise concerns, applies standards consistently, and treats power as accountable.

This is where Locke helps compliance professionals understand something many organizations still miss. Trust in a compliance system is not automatic. It has to be earned. An employee deciding whether to call a hotline is making a deeply practical judgment. Will anyone listen? Will the matter be reviewed fairly? Will the reporter be protected from retaliation? Will the senior executive who generated the concern be treated differently from everyone else? If the employee believes the answer to those questions is no, the reporting system has already failed, no matter how polished the company’s policy language may be.

The DOJ’s Compliance Expectations Are About Legitimacy

The Department of Justice does not use the language of social contract theory, but its Evaluation of Corporate Compliance Programs (ECCP) is filled with Locke’s concerns. The ECCP asks whether the program is well-designed, applied in good faith, and works in practice. It asks about tone at the top and tone in the middle. It asks whether reporting mechanisms are trusted, whether investigations are handled properly, whether discipline is applied consistently, and whether there is protection against retaliation. Those are all questions of legitimacy. A compliance program that employees do not trust cannot work in practice.

This point is critical because too many organizations still frame culture as something soft and secondary, a matter of messaging rather than system design. Locke would reject that categorically. In his framework, legitimacy is not a decoration added to authority. It is what makes authority durable and acceptable. In a company, that means culture and governance cannot be separated. Speak-up systems, fair treatment, board attention, transparent escalation, and consistent discipline are not peripheral to compliance. They are core structural elements of it.

Speak-Up Culture Is a Test of Governance

Few areas of compliance reveal Locke’s relevance more clearly than a speak-up culture. Every company says it wants employees to raise concerns. Every company says it prohibits retaliation. But the real issue is whether employees believe those statements are true in lived experience. That belief is shaped more by organizational behavior than by slogans.

If employees see complaints buried, if they watch high performers protected despite repeated concerns, if they hear that reporting a problem is career-limiting, or if they conclude that management is more interested in identifying the reporter than addressing the underlying issue, the company has lost legitimacy. In Lockean terms, authority has ceased to be trustworthy because it is no longer being exercised for the benefit of those subject to it.

This is why non-retaliation is so important. It is not simply an employment-law consideration or a human-resources aspiration. It is a governance imperative. Retaliation tells employees that the system serves power rather than principle. Once that lesson is absorbed, reporting declines, silent resignation grows, and risk moves underground. A company may still claim to have a hotline, but it no longer has a functioning speak-up culture.

Fairness Is Not Soft. It Is a Control.

Locke also helps us understand the role of fairness in a compliance program. In many organizations, fairness is discussed as a value. It should be discussed as a control. Why? Because fairness shapes behavior. When employees believe standards will be applied consistently, they are more likely to follow them, more likely to report deviations, and more likely to trust the company’s response when issues arise. When employees believe discipline is arbitrary, selective, or influenced by rank and revenue generation, the opposite occurs. Cynicism spreads quickly. Policies become performative. Reporting drops. Informal norms replace formal standards.

That is why the ECCP pays so much attention to disciplinary consistency. Regulators understand that a compliance program loses credibility when senior leaders are treated differently from line employees. Locke would have recognized the point immediately. In any system of authority, legitimacy is undermined when rules are used to bind the weak but not the powerful.

Board Oversight and Accountable Authority

Locke’s philosophy is equally useful when thinking about board oversight. He believed that those entrusted with authority must remain accountable for how they exercise it. That is a principle every board member should understand in the context of compliance.

Board oversight is not merely about receiving information. It is about ensuring that authority inside the company is properly bounded, monitored, and answerable. The board does not run day-to-day compliance, but it is responsible for ensuring that management has created a system worthy of trust. That means asking whether reporting channels work, whether investigations are independent, whether non-retaliation protections are real, whether major risks are escalated, and whether compliance has stature and access.

This is particularly important because boards sometimes fall into the trap of treating compliance as a downstream operational matter. Locke would have viewed that as a category mistake. Governance is not something separate from legitimacy. Governance is how legitimacy is maintained.

For the modern board, that means compliance oversight must be substantive. Directors should ask not only for dashboards, but for explanations. How does management know employees trust reporting channels? What evidence supports claims of a strong culture? How is middle management assessed? What happens when senior leaders are implicated? What trends in reporting, substantiation, retaliation, and discipline should concern the board? Those questions move oversight from ceremonial to real.

In that sense, Locke also speaks directly to Caremark-era expectations. Directors have obligations not simply to exist, but to oversee. A board that does not ensure the company has credible systems of information and response is not exercising accountable authority. It is abdicating it.

Culture and the Middle Management Problem

No discussion of compliance legitimacy would be complete without examining middle management. The DOJ, in both the ECCP and the FCPA Resource Guide, 2nd edition, has long emphasized that “tone at the top” is not enough. Tone in the middle matters enormously, because employees experience the company most directly through their immediate supervisors.

This is another place where Locke offers real insight. In any system of authority, legitimacy rises or falls through those who exercise power closest to the governed. If middle managers pressure employees to ignore controls, discourage escalation, roll their eyes at compliance training, or quietly punish bad news, the company’s formal commitments will collapse in practice.

This is why companies must treat middle management behavior as a governance issue. Are managers trained not just on rules, but on their duty to support reporting and ethical decision-making? Are they evaluated on how they build culture? Do promotion and bonus structures reinforce ethical leadership, or only financial performance? Are there consequences when managers create pressure that undermines compliance expectations?

These are not marginal considerations. They are central to whether the compliance program is experienced as legitimate in daily operations. Locke reminds us that people judge institutions less by official declarations than by how authority is exercised.

The Compliance Officer as Steward of Institutional Legitimacy

Locke casts the compliance officer as a steward of institutional legitimacy. That is an important and underappreciated role. The compliance officer helps the company earn trust, not through public relations, but through structure, fairness, and accountability. The compliance officer helps ensure that when people speak up, they are heard; when misconduct occurs, it is handled consistently; when leaders exercise authority, they do so under standards that bind them as well. In this sense, compliance is not just about preventing legal violations. It is about making the institution worthy of confidence.

That is why legitimacy matters so much. A company with high trust in its compliance system detects issues earlier, responds more effectively, learns more quickly, and sustains a stronger ethical culture over time. A company without that trust becomes opaque to itself. Risk goes silent. Problems surface late. Governance becomes reactive. The institution loses one of its most important defenses: its own people’s willingness to tell it the truth.

Five Lessons Learned for the Modern Compliance Professional

First, a compliance program must be legitimate to be effective. Employees must believe the system is fair, credible, and trustworthy.

Second, speak-up culture is a governance test. Reporting mechanisms only work when employees believe concerns will be taken seriously and retaliation will not follow.

Third, fairness is a control. Consistent discipline, equal treatment across levels of seniority, and transparent standards strengthen compliance credibility.

Fourth, boards must exercise accountable oversight. They should test management’s claims about culture, reporting, and non-retaliation with real evidence.

Fifth, middle management is where legitimacy lives or dies. A company must align manager incentives, expectations, and accountability with its compliance values.

Coming Next: Thomas Hobbes and Why Every Compliance Program Needs Order

If John Locke teaches us that compliance governance must be legitimate, Thomas Hobbes will remind us that legitimacy alone is not enough. A company also needs structure, clear rules, assigned authority, escalation pathways, and credible enforcement. In Part 4, I will explore how Hobbes helps explain the roles of policies, procedures, internal controls, and operational discipline in a best-practices compliance program. Trust matters, but so does order.

Categories
Great Women in Compliance

Great Women in Compliance: Culture Check: Are Your Speak Up Channels Effective?

Ever wish you could benchmark your Speak Up channels against more than just volume, issue types, and time to close? 

The Speak Up Self-Assessment (SUSA) was designed to help you go deeper by assessing organizational infrastructure—including reporting channels, confidentiality safeguards, follow-up processes, and governance of whistleblowing systems.

In this roundtable episode, we speak with guests: 

  • Professor Jessica McManus Warnell
  • Dr. Mary Gentile 
  • Allison Narmi 

about the work they are doing to bring a free, anonymous diagnostic tool to self-assess speak-up channels. Building on the work done in the EU, our guests today are members of the project team that has developed an American version of the tool, with support from the Notre Dame Deloitte Center for Ethical Leadership. Link to the EU version here – https://edhec.az1.qualtrics.com/jfe/form/SV_eleMjkHraHzw6Hk

U.S. version coming soon.  

Categories
Daily Compliance News

Daily Compliance News: April 16, 2026, The Bribery is Legal in Illinois Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, covering compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot-What Sherlock Holmes Teaches About Risk, Ethics and Investigations on Amazon.com.

Categories
Blog

Culture, Speak-Up, and Human Judgment: The Human Side of AI Governance

Artificial intelligence may be built on data, models, and code, but governance ultimately rests on people. For boards and Chief Compliance Officers, one of the most important questions is not only whether the organization has responsibly approved AI tools, but also whether employees are prepared to challenge them, report concerns, and apply human judgment when something does not look right. In many organizations, the earliest warning system for AI failure is not a dashboard. It is the workforce.

Over the course of this series, I have explored four critical governance challenges in AI: board oversight and accountability, strategy outrunning governance, data governance and privacy, and ongoing monitoring. This final blog post turns to the fifth and most underappreciated challenge of all: culture, speak-up, and human judgment.

Underappreciated because organizations often begin AI governance with structure in mind. They build committees, draft policies, classify risks, and establish approval gates. All of that is necessary. But structure alone is not sufficient. If the human beings closest to the work do not understand their role in AI governance, do not feel empowered to raise concerns, or begin to defer too readily to machine-generated outputs, the governance framework will be weaker than it appears on paper.

This is the point many companies miss. AI governance is not only about the technology. It is about whether the organization’s culture supports the responsible use of technology.

Employees Will See AI Failures First

In many companies, the first person to notice an AI problem will not be a board member, a Chief Executive Officer, or even a member of the governance committee. It will be an employee interacting with the tool in daily operations. It may be the customer service representative who sees the system generating inaccurate responses. It may be the HR professional who notices troubling patterns from an AI-supported screening tool. It may be the sales employee who sees a generative tool overstating product claims. It may be the finance professional who questions an automated summary that does not match underlying records. It may be the compliance analyst who sees a tool being used for an unapproved purpose.

That matters because early visibility is one of the most valuable protections a company can have. But visibility only becomes a control if employees know what to do with what they see. That is why culture is a governance issue. A workforce may spot the problem, but if employees do not understand that AI-related concerns are reportable, are unsure where to raise them, or believe management will ignore them, the warning system fails.

For boards and CCOs, that means AI governance cannot stop at policy creation. It must extend into behavior, reporting norms, and organizational trust.

Speak-Up Culture Is an AI Governance Control

Compliance professionals have long known that a speak-up culture is a control. It is often the first way a company learns of misconduct, process breakdowns, weak supervision, retaliation, harassment, fraud, or control evasion. The same principle now applies with equal force to AI.

Employees may observe biased outputs, inaccurate recommendations, privacy concerns, unexplained model behavior, misuse of tools, inappropriate reliance on machine-generated content, or efforts to bypass required human review. If they do not report those concerns, management may have no timely way to know what is happening.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) remains highly instructive. The ECCP places substantial emphasis on whether employees are comfortable raising concerns, whether the company investigates them appropriately, and whether retaliation is prohibited in practice. Those same questions should now be asked in the context of AI. Does the company’s reporting framework explicitly include AI-related concerns? Are managers trained to recognize and escalate those concerns? Are reports investigated with the same seriousness as other compliance issues? Are employees protected if they raise uncomfortable questions about a tool the business wants to use?

If the answer is no, the company may have AI procedures, but it does not yet have AI governance embedded in its culture.

Human Judgment Cannot Be Optional

One of the most significant risks in AI governance is not simply that a model will be wrong. It is that people will stop questioning it. AI systems can produce outputs quickly, fluently, and with apparent confidence. That creates a powerful temptation for users to over-trust the tool. When a system sounds polished, appears efficient, and reduces workload, people may assume that its conclusions deserve deference. This is precisely where governance needs the corrective force of human judgment.

Human judgment cannot be treated as a ceremonial step or a paper requirement. It must be meaningful. That means the people reviewing AI outputs must have the authority, time, training, and confidence to challenge those outputs when needed. A human review requirement that exists only on paper is not much of a safeguard. If reviewers are overloaded, insufficiently trained, or culturally discouraged from slowing the process, the control may be largely illusory.

Boards should care about this because one of the easiest mistakes management can make is to describe human oversight in governance documents without testing whether it is functioning in practice. CCOs should care because this is a classic compliance problem. A control may be designed elegantly but fail in daily operations because the supporting culture is too weak to sustain it.

Training Must Change with AI

A company cannot expect good judgment around AI if it has not trained people on what good judgment looks like. That means AI training should go beyond technical usage instructions. Employees need to understand what risks may arise, what concerns are reportable, what approved use looks like, what prohibited use looks like, and why human challenge matters. Managers need additional training because they are often the first informal escalation point when an employee raises a concern. If managers dismiss AI concerns as overreactions, inconveniences, or resistance to innovation, the speak-up system will quickly lose credibility.

Training should also be role-based. The risks faced by a customer-facing team may differ from those faced by teams in HR, legal, procurement, marketing, finance, or internal audit. A generic AI training module may create awareness, but it will not create the operational judgment needed in high-risk areas.

This is where the NIST AI Risk Management Framework provides practical value. NIST’s emphasis on governance is not limited to formal structures. It contemplates culture, accountability, and the need for organizations to support informed decision-making across the enterprise. ISO/IEC 42001 similarly reinforces the importance of organizational competence, awareness, and defined responsibilities. Both frameworks point to a critical truth: responsible AI use depends not only on controls over the technology, but also on the capabilities of the people who use and oversee it.

Managers Matter More Than Companies Often Realize

If culture is the operating environment of governance, managers are often its most important local translators. An employee may not begin by filing a formal report. More often, an employee may raise a concern informally with a supervisor or colleague. “This output does not seem right.” “I do not think we should be using it this way.” “This seems to be pulling in sensitive information.” “This recommendation may be biased.” “The human review is not really happening anymore.”

The manager’s response in that moment matters enormously. Does the manager take the concern seriously? Does the manager know it should be escalated? Does the manager see it as a governance issue or as resistance to efficiency? Does the manager understand the difference between a minor usability complaint and a potentially significant compliance concern?

This is why boards and CCOs should not think about speak-up solely in hotline terms. AI governance depends on the broader management culture. If supervisors are not equipped to receive and escalate AI concerns appropriately, many issues will die in the middle of the organization before they ever reach a formal channel.

Anti-Retaliation Must Be Real in the AI Context

There is another dimension that cannot be overlooked: the risk of retaliation. In some organizations, employees may hesitate to raise AI concerns because they fear being labeled anti-innovation, obstructionist, or not commercially minded. That creates a subtle but serious governance risk. If the corporate atmosphere celebrates rapid AI adoption without equally celebrating responsible challenge, then employees may conclude that silence is safer than candor.

This is why anti-retaliation messaging must be explicit in the AI context. The company should make clear that raising concerns about inaccurate outputs, misuse, privacy risks, unfairness, or control breakdowns is part of responsible business conduct. It is not a failure to embrace innovation. It is a contribution to the effective governance of innovation.

The CCO should ensure that AI-related concerns are incorporated into existing anti-retaliation frameworks, investigations protocols, and communications. Boards should ask whether employee sentiment data, hotline trends, and internal investigations provide any signal that people are reluctant to question AI initiatives. If the organization is moving aggressively on AI, it should be equally serious about protecting those who raise governance concerns about it.

Documentation and Escalation Still Matter

As with every other aspect of AI governance, culture and judgment must be integrated into the process. A company should document how AI-related concerns can be reported, how they are triaged, who reviews them, what escalation triggers apply, and how resolutions are tracked. Concerns about AI should not be dismissed as vague general complaints. They should be reviewable and analyzable over time.

This is essential not only for accountability but for learning. Patterns in employee concerns may reveal weaknesses in training, design, vendor management, access controls, or oversight. A single report may be an isolated event. Repeated concerns within a single function may point to a systemic governance problem. That is why speak-up is not just about receiving reports. It is about turning those reports into organizational intelligence.

The ECCP again offers a useful framework. It asks whether investigations are timely, whether root causes are examined, and whether lessons learned are fed back into the compliance program. AI governance should work the same way. A reported concern should not end with a narrow answer to the immediate complaint. It should prompt management to ask what the issue reveals about the broader governance environment.

Boards Must Model the Right Tone

This final point may be the most important. Culture is shaped by what leadership rewards, tolerates, and asks about. If the board only asks about AI efficiency, adoption, and speed, management will take the signal. If the board asks whether employees are raising concerns, whether human oversight is meaningful, whether managers are trained, and whether retaliation protections are working, management will take that signal as well.

For CCOs, this is a vital opportunity. The compliance function can help boards understand that governance is not only about structure and controls, but also about whether the organization has preserved the human capacity to question, escalate, and correct. In the AI context, that may be the most important governance capability of all.

Because in the end, even the most advanced system will not govern itself. An enterprise must govern it. That requires culture. It requires trust. It requires the courage to speak up. And it requires strong human judgment to look at an impressive output and still ask, “Is this right?”

The Human Side of Governance Is the Decisive Side

This final article brings the series back to a simple truth. AI governance is not only about what the company builds. It is about how the company behaves.

Boards may establish oversight. Management may create structures. Compliance may build controls. But if employees are not prepared to report concerns or exercise judgment, the organization will remain vulnerable. A strong AI governance program does not merely control the system. It empowers the people around the system to challenge it responsibly.

That is the human side of governance, and in many ways it is the decisive side. 

Categories
Daily Compliance News

Daily Compliance News: March 31, 2026, The Why Did She Leave Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, covering compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • HR leaders say we are misusing the term ‘Agentic AI.’ (WSJ)
  • Senator wants to know why the SEC Director of Enforcement left. (Reuters)
  • UK fines Apple sub for breaching Russia sanctions. (FT)
  • Better get your culture right. (NYT)

Categories
Great Women in Compliance

Great Women in Compliance: Reflections on Investigations, Culture and the Future

In this episode of Great Women in Compliance, Lisa Fine speaks with Becky Rohr, Chief Compliance Officer and Head of Investigations at Ericsson. Becky talks about how her career journey led her to join Ericsson during a monitorship to strengthen their investigations function.

To do that, she focused on conducting fair, thorough, and efficient investigations, enhancing investigator training, and improving processes for collecting and reviewing digital evidence within a global organization. That work led to her being named Chief Compliance Officer at Ericsson and demonstrated the benefits of integrating investigations and compliance. Not only did this drive the continued evolution of the compliance function, but it also connected hotline reports, investigations, and remediation through creative approaches to reinforcing ethics at Ericsson.

Lisa and Becky also discuss how the Ericsson team has addressed workplace misconduct globally, how organizations can sustain compliance improvements after a monitorship ends, and the importance of leadership communication in maintaining a strong ethical culture.

Finally, Becky reflects on her decision to leave Ericsson and take a “power of the pause” moment before deciding on her next chapter—an approach that highlights the value of reflection and intentional career choices.

Categories
Innovation in Compliance

Innovation in Compliance – Healthcare Compliance: Fraud, Waste & Abuse, Culture, and Data-Driven Risk Management with Evan Sampson

Innovation occurs across many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox welcomes Evan Sampson, a noted healthcare compliance attorney.

Sampson traces his path from commercial litigation to representing healthcare practices on HIPAA/privacy and reimbursement matters, then moving in-house at a network of plastic surgery centers, where he managed compliance focused on fraud, waste, and abuse, and on evolving out-of-network billing rules leading into the No Surprises Act. Sampson explains how compliance programs can create business value beyond risk mitigation by uncovering inefficiencies and opportunities, such as identifying downcoding in medical billing and using complaint investigations to spot growth areas. He describes how his litigation background helps him anticipate how issues will unfold over time in investigations and litigation, thereby improving his credibility with business leaders. They discuss building a culture of compliance in fast-growing healthcare organizations, tracking regulatory changes across primary and secondary sources, and leveraging AI and data analytics to detect claim outliers and strengthen compliance.

Key highlights:

  • Healthcare Compliance Shift
  • Fraud, Waste, and Abuse
  • Compliance Creates Value
  • Building Compliance Culture
  • Tracking Regulatory Changes
  • AI in Compliance Analytics

Resources:

Evan Sampson on LinkedIn

Post & Schell

Innovation in Compliance was recently honored as the Number 4 podcast in Risk Management by 1,000,000 Podcasts.

Categories
All Things Investigations

ATI Podcast: Inhouse Insights – Building and Benefiting from a Culture of Compliance

Welcome to the inaugural episode of the newly rebranded ATI Podcast: Inhouse Insights—formerly known as All Things Investigations.

Presented by the Hughes Hubbard & Reed LLP Anti-Corruption & Internal Investigations Practice Group, this premiere episode sets the tone for a bold new chapter—bringing practical, in-house perspectives to today’s most pressing compliance challenges.

Host Michael DeBernardis welcomes Darryl Cyphers Jr., Senior Director of Legal Compliance at Klaviyo, for a candid and forward-looking conversation on how organizations can build—and sustain—a culture of compliance that actually works.

Together, they explore how compliance leaders can move beyond policies on paper to create real organizational impact—through measurable culture metrics, smarter use of AI to drive policy engagement, authentic tone at the top, and meaningful collaboration with HR and business partners. Darryl also shares practical guidance for navigating compliance gray areas and strengthening trust through continuous employee engagement and feedback.

Highlights include:

  • Defining a modern culture of compliance
  • Metrics and tools for measuring cultural effectiveness
  • Employee engagement and feedback that drive results
  • Building partnerships across HR and business teams
  • Innovative and engaging compliance training approaches
  • Navigating gray areas with confidence and credibility

Resources:

Hughes Hubbard & Reed Website

Klaviyo

Darryl Cyphers Jr. on LinkedIn

Categories
Blog

The Starliner, Culture and Compliance: Leadership Lessons from a NASA Investigation Report

Corporate compliance professionals spend a lot of time talking about controls, training, third parties, and investigations. Yet the hard truth is that the most important control environment sits above all of that: leadership behavior and the culture it creates. That is why this NASA investigation report on the Boeing CST-100 Starliner Crewed Flight Test (CFT) is such a useful case study. It is a technical report, to be sure. But it is also a cultural, leadership, and governance report. NASA’s bottom line is unambiguous: technical excellence and safety require transparent communication and clear roles and responsibilities, not as slogans, but as operating requirements that must be institutionalized so safety is never compromised in pursuit of schedule or cost.

If you are a Chief Compliance Officer, General Counsel, or business leader, you should read this report the way you read an enforcement action. Not to gawk. Not to assign blame. But to harvest lessons for your own organization before you have your own high-visibility close call.

The incident(s) that led to the report

The CFT mission launched June 5, 2024, as a pivotal step toward certifying Starliner to transport astronauts to the International Space Station. It was planned as an 8-to-14-day mission but was extended to 93 days after significant propulsion system anomalies emerged. Ultimately, the Starliner capsule returned uncrewed, while astronauts Barry “Butch” Wilmore and Sunita “Suni” Williams returned aboard SpaceX’s Crew-9 Dragon in March 2025. In February 2025, NASA chartered a Program Investigation Team (PIT) to examine the technical, organizational, and cultural factors contributing to the anomalies.

The report describes four major hardware anomaly areas. These include Service Module RCS thruster fail-offs that temporarily caused a loss of six-degrees-of-freedom (6DOF) control during ISS rendezvous and required in-situ troubleshooting to recover enough capability to dock; a Crew Module thruster failure during descent that reduced fault tolerance; and helium manifold leaks, with seven of eight Service Module helium manifolds leaking during the mission. The PIT further determined that the 6DOF loss during rendezvous met the criteria for a Type A mishap (or at least a high-visibility close call), underscoring how close the program came to a very different ending.

That is the “what.” For compliance professionals, the “so what” is that NASA did not treat this as a purely engineering problem. It treated it as an integrated system failure, in which culture and leadership either reduce risk or magnify it.

Lesson 1: Decision authority is culture, not paperwork

One of the report’s clearest threads is that fragmented roles and responsibilities delayed decision-making and eroded confidence. In the compliance world, unclear decision rights become the breeding ground for “informal governance”: private conversations, end-runs around committees, and decisions that are never fully documented. Over time, that becomes a shadow-control environment that your policies cannot touch.

Compliance action steps

  • Define decision rights for the riskiest calls (high-risk third parties, market entry, major remediation, critical incidents).
  • Require a short, written record of: facts reviewed, options considered, dissent captured, decision made, and owner accountable.
  • Separate “recommendation authority” from “approval authority” so everyone knows where they sit.
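The short, written decision record called for above can be made concrete as a simple data structure. This is a hypothetical sketch (field names and types are my own, not drawn from any standard): the value is that every high-risk call captures the same elements, including dissent and a single accountable owner.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    """Minimal written record of a high-risk compliance decision."""
    subject: str                  # e.g. approval of a high-risk third party
    facts_reviewed: str           # what evidence was actually examined
    options_considered: list      # the realistic alternatives on the table
    dissent: str                  # disagreement captured, even if overruled
    decision: str                 # what was decided
    owner: str                    # the single accountable owner
    decided_on: date = field(default_factory=date.today)

# Hypothetical usage: documenting a third-party approval.
record = DecisionRecord(
    subject="Onboarding of Distributor X in a high-risk market",
    facts_reviewed="Due diligence report; adverse media screen",
    options_considered=["Approve with enhanced monitoring", "Decline"],
    dissent="Regional counsel preferred declining pending audit rights",
    decision="Approve with enhanced monitoring and annual audit",
    owner="VP Compliance, EMEA",
)
```

The design choice worth noting is the mandatory dissent field: making it a required element normalizes disagreement as part of the record rather than something that happens off the books.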

Lesson 2: Transparency is a control, and selective data sharing destroys trust

The report explicitly flags that the lack of data access fueled concerns about selective information sharing. Interviewees described frustration that information could be filtered, selectively chosen, or sanitized, which eroded confidence in the process and people. It also notes reports of questions being labeled “too detailed” or “out of scope” without mechanisms to ensure concerns were addressed. That is the compliance danger zone. When teams believe the narrative matters more than the data, they stop escalating early. They start documenting defensively. They seek safety in silence.

Compliance action steps

  • Build “open data” expectations into your incident response and investigative protocols.
  • Create a defined pathway for technical or subject-matter dissent to be logged, reviewed, and dispositioned.
  • Treat meeting notes and decisions as governed records, not optional artifacts.

Lesson 3: Risk acceptance without rigor becomes “unexplained anomaly tolerance”

NASA calls out “anomaly resolution discipline” and warns that repeated acceptance of unexplained anomalies without root cause can lead to recurrence. That single lesson belongs on a poster in every compliance office. In corporate terms, “unexplained anomalies” are recurring control exceptions, repeat hotline themes, repeated third-party red flags, and audit findings that are “managed” rather than fixed. If leadership normalizes that pattern, it teaches the organization that closure is more important than correction.

Compliance action steps

  • Require root cause analysis for repeat issues, not just incident closure.
  • Set escalation thresholds for “repeat with no root cause” findings.
  • Audit remediation quality, not only remediation completion.
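The escalation threshold for "repeat with no root cause" findings can be operationalized as a simple query over a findings log. The sketch below is a minimal illustration under assumed field names (`issue_key`, `root_cause_done` are hypothetical); the logic is the point: recurrence without a completed root cause analysis triggers escalation automatically, rather than relying on someone to notice.

```python
from collections import Counter

def escalation_candidates(findings, repeat_threshold=2):
    """Return issue keys that recur at or above `repeat_threshold`
    times without a completed root cause analysis."""
    # Count only the recurrences that still lack a root cause.
    open_counts = Counter(
        f["issue_key"] for f in findings if not f.get("root_cause_done")
    )
    return sorted(k for k, n in open_counts.items() if n >= repeat_threshold)

# Hypothetical findings log: one recurring issue with no root cause,
# one recurring issue already resolved, one single occurrence.
findings = [
    {"issue_key": "gifts-approval-bypass", "root_cause_done": False},
    {"issue_key": "gifts-approval-bypass", "root_cause_done": False},
    {"issue_key": "third-party-screening", "root_cause_done": True},
    {"issue_key": "third-party-screening", "root_cause_done": True},
    {"issue_key": "expense-splitting", "root_cause_done": False},
]
to_escalate = escalation_candidates(findings)
```

Run against the sample log, only `gifts-approval-bypass` is escalated: it has recurred and nobody has answered why. That is the "unexplained anomaly tolerance" the NASA report warns against, made visible in data.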

Lesson 4: Partnerships fail when “shared accountability” is not operationalized

The report emphasizes that shared accountability in the commercial model was inconsistently understood and applied. It also notes that historical relationships and private conversations outside formal forums created perceptions of blurred boundaries, favoritism, and lack of objectivity, whether or not those perceptions were accurate. Compliance teams have seen this movie. Think distributors, joint ventures, outsourced compliance support, and major technology partners. If accountability is shared in theory but siloed in practice, something will fall through the cracks. Usually, it falls right into your lap when regulators arrive.

Compliance action steps

  • Define “shared accountability” in contracts, governance charters, and escalation protocols.
  • Ensure independence and objectivity are protected by design, not by personality.
  • Create joint forums where data is shared broadly, dissent is recorded, and decisions are made openly.

Lesson 5: Burnout is a risk factor, and meeting chaos is a governance failure

The report’s recommendations recognize the operational reality: high-pressure environments can degrade decision quality. It calls for “pulse checks,” rotation of high-pressure responsibilities, contingency staffing, and time protection for deep work to proactively address burnout and improve decision-making under mission conditions. Compliance professionals should take that to heart. Crisis cadence is sometimes unavoidable. Permanent crisis cadence is a leadership choice. And it carries predictable consequences: shortcuts, missed details, weakened documentation, and poor judgment.

Compliance action steps

  • Build surge staffing plans for investigations and incident response.
  • Rotate incident commander roles when events extend beyond days.
  • Protect time for analysis, not just meetings and status updates.

Lesson 6: Accountability must be visible, not performative

NASA does not bury the human dimension. The report contains leadership recommendations to speak openly with the joint team about leadership accountability, including concurrence with the report and reclassification as a mishap, and to hold a leadership-led stand-down day focused on reflection, accountability concerns, and rebuilding trust. For corporate leaders, this is where trust is won or lost after a crisis. Employees can tolerate a hard outcome. They struggle to tolerate spin. If your organization communicates externally with confidence but internally with vagueness, your culture learns the wrong lesson: optics first, truth second.

Compliance action steps

  • After a major incident, publish an internal accountability and remediation plan with owners and timelines.
  • Provide regular updates on what has been completed, what is delayed, and why.
  • Make it safe for the workforce to ask questions in interactive forums, as NASA recommends.

Lesson 7: Trust repair requires a plan, not a pep talk

One of the most useful artifacts in the report is a sample Organizational Trust Plan. It sets a goal to rebuild trust by establishing clear expectations, open accountability, and shared commitment to safety and mission success. It includes objectives around transparent communication, acknowledging past challenges, reinforcing shared values, and structured engagement. It then lays out action steps: leadership engagement, facilitated sessions, outward expressions of accountability, teamwide rollout, training and coaching, and communication through a written plan and regular updates.

That is exactly the kind of operational discipline compliance leaders should bring to culture work. Culture does not change because someone gives a speech. Culture changes when the organization changes how it makes decisions, treats dissent, and follows through.

Five key takeaways for the compliance professional

  1. Clarify decision rights before the crisis. Ambiguity becomes politics under pressure.
  2. Make transparency non-negotiable. Perceived filtering of data destroys credibility.
  3. Do not normalize unexplained anomalies. Repeat issues without a root cause are future failures.
  4. Operationalize shared accountability with partners. Otherwise, it is a slogan.
  5. Rebuild trust with a written plan and visible accountability. Trust repair is a managed process.

In the end, the Starliner lesson for compliance is simple: controls matter, but culture decides whether controls work when it counts. If leadership cannot run disagreements well, cannot share data broadly, and cannot demonstrate accountability after the fact, the best-written compliance program in the world will fail the moment the pressure rises.

Categories
Blog

Roman Philosophers and the Foundations of a Modern Compliance Program: Part 4 – Marcus Aurelius and Ethical Leadership

I recently wrote a series on the direct link between ancient Greek Philosophers and modern corporate compliance programs and compliance professionals. It was so much fun and so well-received that I decided to follow up with a similar series on notable Roman Philosophers. This week, we will continue our exploration of the philosophical underpinnings of modern corporate compliance programs and compliance professionals by looking at five philosophers from Rome, from both the BCE and CE eras.

We have considered Cicero on duties, law, and the moral limits of business; Seneca on power, pressure, and ethical decision-making under stress; and Varro on corporate governance. Today, we consider Marcus Aurelius on ethical leadership, tone at the top, and culture as a compliance control. Tomorrow, we will conclude with Lucretius to explore rationality, fear, and risk perception.

I. Marcus Aurelius in Context: Power with Restraint

Imagine you are the single most powerful person on earth. Are you going to be an unrepentant narcissist in the manner of Donald Trump, who believes he should govern on his own twisted morality based simply on ‘gut instinct’? Or are you going to take a different approach, set out your reasoned approach to governing in a book, and then govern with the moral authority of thousands of years of philosophy?

Marcus Aurelius is often remembered as the philosopher-king, but that description understates the difficulty of his position. He ruled the Roman Empire during a period of war, plague, economic strain, and political instability. Unlike many philosophers, Marcus Aurelius did not write for an audience. His Meditations were private reflections, written to discipline his own thinking while exercising absolute power.

This matters for compliance professionals. Marcus Aurelius did not theorize about ethical leadership from a distance. He lived inside it. He understood that power magnifies temptation, insulates leaders from feedback, and creates opportunities for self-deception. His philosophy is therefore preoccupied with restraint, humility, consistency, and responsibility.

Marcus repeatedly reminded himself that leadership is not a privilege but a burden. Authority did not entitle him to indulgence; it imposed higher expectations. He believed that leaders set moral boundaries through conduct long before they issue instructions. In modern terms, Marcus Aurelius understood that culture flows downward from leadership behavior rather than upward from policy documents.

II. The Compliance Problem Marcus Aurelius Illuminates: Culture Eats Controls

One of the central lessons of modern compliance enforcement is that formal controls cannot compensate for poor culture. Organizations with detailed policies and sophisticated monitoring still fail when leadership behavior signals that results matter more than integrity. The DOJ Evaluation of Corporate Compliance Programs (ECCP) explicitly asks whether senior leaders demonstrate commitment to compliance through actions, not words. Regulators assess whether ethical behavior is encouraged, whether misconduct is addressed consistently, and whether leaders tolerate or reward problematic conduct.

Marcus Aurelius would recognize this dynamic immediately. He believed that people learn how to behave by observing those in power. When leaders act inconsistently with stated values, cynicism follows. When leaders rationalize misconduct, that rationalization spreads. Compliance programs often falter when leadership treats ethics as a communication exercise rather than a lived expectation. Codes of conduct and training sessions cannot overcome the daily signals sent by executive decisions, incentive structures, and responses to failure.

Marcus teaches that culture is not accidental. It is created continuously by leadership choices, especially under pressure.

III. Modern Corporate Application: Marcus Aurelius, DOJ Expectations, and Leadership Accountability

Applying Marcus Aurelius to modern compliance reveals several concrete expectations that closely align with DOJ guidance.

First, leadership behavior must be consistent. Marcus believed hypocrisy was corrosive to authority. The DOJ similarly evaluates whether leaders follow the same rules they impose on others. Exceptions for senior executives undermine program credibility and weaken deterrence.

Second, leadership must respond to misconduct with moral clarity. Marcus wrote that anger and denial cloud judgment. In compliance terms, this means addressing issues promptly, transparently, and proportionately. Delayed or defensive responses signal tolerance, even when discipline eventually occurs.

Third, middle management matters. Marcus understood that culture is transmitted through layers of authority. DOJ guidance emphasizes the role of middle managers as culture carriers. Compliance programs should equip managers with the tools and incentives to reinforce ethical behavior, not merely deliver targets.

Fourth, incentives must reflect values. Marcus warned against leaders who chase reputation or reward at the expense of principle. Modern compliance programs must ensure compensation structures do not reward outcomes achieved through questionable means. The DOJ has repeatedly cited incentive misalignment as a root cause of misconduct.

Finally, leadership must create psychological safety. Marcus believed leaders should listen more than they speak. In compliance terms, this translates into openness to bad news, encouragement of dissent, and protection for those who raise concerns. A culture that punishes truth-telling cannot sustain compliance.

IV. Key Takeaways for Compliance Professionals

1. The Blueprint. Compliance professionals should view Marcus Aurelius and his writings as the blueprint for culture-based compliance. You can draw a direct line from the Meditations to both your compliance program and the leadership skills a CCO needs. Compliance should evaluate leadership behavior as a primary control, not a soft factor. This means not only reviewing employees who are promoted to management, but also conducting a deep dive into their backgrounds, along with thorough due diligence for any senior management hires from outside your organization.

2. Higher Standards. Compliance should hold senior leaders to higher standards of consistency and accountability.

3. Institutional Justice. Compliance should focus on how leaders respond to misconduct, not just how they prevent it. This is the CCO’s charge, and it must include an institutional fairness component in your compliance program.

4. Aligned Incentives. Compliance should ensure incentives reinforce ethical behavior at every level. The DOJ has consistently discussed the role of incentives in any compliance program, as far back as the 1st edition of the FCPA Guidance in 2012.

5. Culture as an Operational Risk. Compliance should treat culture as an operational risk area subject to oversight and testing. Culture should be assessed, monitored, and improved. Simply because it is seen as a ‘soft’ part of an organization does not mean it should be treated differently.

6. Walk the Walk. Finally, Marcus Aurelius reminds us that ethical leadership is not performative. It is visible, daily, and decisive. In organizations, culture follows leadership long before it follows policy.

V. Conclusion

Marcus Aurelius brings the compliance lifecycle to its cultural apex. He shows that leadership behavior is not merely influential but determinative, shaping whether ethical expectations are taken seriously or quietly dismissed. Yet even the strongest ethical culture is not self-sustaining. Leaders are human, memory fades, and good intentions erode without reinforcement. This is where culture must be supported by systems that observe, test, and correct.

Marcus Aurelius teaches us how leaders should behave; Lucretius challenges us to examine how organizations think. If Marcus focuses on moral example, Lucretius turns our attention to rational observation, warning against fear, superstition, and self-deception. The transition from Marcus Aurelius to Lucretius mirrors the shift from cultural leadership to continuous improvement, from ethical intent to empirical verification. In compliance terms, it is the move from assuming the program works to proving that it does, using data, monitoring, and clear-eyed analysis rather than hope or habit.

Join us tomorrow for our concluding article on Lucretius and Rationality in Monitoring and Continuous Improvement. We will consider where culture gives way to systems, data, and the discipline of seeing risk clearly rather than through fear or superstition.