Corporate compliance professionals spend a lot of time talking about controls, training, third parties, and investigations. Yet the hard truth is that the most important control environment sits above all of that: leadership behavior and the culture it creates. That is why this NASA investigation report on the Boeing CST-100 Starliner Crewed Flight Test (CFT) is such a useful case study. It is a technical report, to be sure. But it is also a cultural, leadership, and governance report. NASA’s bottom line is unambiguous: technical excellence and safety require transparent communication and clear roles and responsibilities, not as slogans, but as operating requirements that must be institutionalized so safety is never compromised in pursuit of schedule or cost.
If you are a Chief Compliance Officer, General Counsel, or business leader, you should read this report the way you read an enforcement action. Not to gawk. Not to assign blame. But to harvest lessons for your own organization before you have your own high-visibility close call.
The incidents that led to the report
The CFT mission launched June 5, 2024, as a pivotal step toward certifying Starliner to transport astronauts to the International Space Station. It was planned as an 8-to-14-day mission but was extended to 93 days after significant propulsion system anomalies emerged. Ultimately, the Starliner capsule returned uncrewed, while astronauts Barry “Butch” Wilmore and Sunita “Suni” Williams returned aboard SpaceX’s Crew-9 Dragon in March 2025. In February 2025, NASA chartered a Program Investigation Team (PIT) to examine the technical, organizational, and cultural factors contributing to the anomalies.
The report describes four major hardware anomaly areas, including:
- Service Module Reaction Control System (RCS) thruster fail-offs that temporarily caused a loss of 6 Degrees of Freedom (6DOF) control during ISS rendezvous and required in-situ troubleshooting to recover enough capability to dock;
- a Crew Module thruster failure during descent that reduced fault tolerance; and
- helium manifold leaks, with seven of eight Service Module helium manifolds leaking during the mission.

The PIT further determined that the 6DOF loss during rendezvous met the criteria for a Type A mishap (or at least a high-visibility close call), underscoring how close the program came to a very different ending.
That is the “what.” For compliance professionals, the “so what” is that NASA did not treat this as a purely engineering problem. It treated it as an integrated system failure, in which culture and leadership either reduce risk or magnify it.
Lesson 1: Decision authority is culture, not paperwork
One of the report’s clearest threads is that fragmented roles and responsibilities delayed decision-making and eroded confidence. In the compliance world, unclear decision rights become the breeding ground for “informal governance”: private conversations, end-runs around committees, and decisions that are never fully documented. Over time, that becomes a shadow-control environment that your policies cannot touch.
Compliance action steps
- Define decision rights for the riskiest calls (high-risk third parties, market entry, major remediation, critical incidents).
- Require a short, written record of: facts reviewed, options considered, dissent captured, decision made, and owner accountable.
- Separate “recommendation authority” from “approval authority” so everyone knows where they sit.
Lesson 2: Transparency is a control, and selective data sharing destroys trust
The report explicitly flags that the lack of data access fueled concerns about selective information sharing. Interviewees described frustration that information could be filtered, selectively chosen, or sanitized, which eroded confidence in the process and people. It also notes reports of questions being labeled “too detailed” or “out of scope” without mechanisms to ensure concerns were addressed. That is the compliance danger zone. When teams believe the narrative matters more than the data, they stop escalating early. They start documenting defensively. They seek safety in silence.
Compliance action steps
- Build “open data” expectations into your incident response and investigative protocols.
- Create a defined pathway for technical or subject-matter dissent to be logged, reviewed, and dispositioned.
- Treat meeting notes and decisions as governed records, not optional artifacts.
Lesson 3: Risk acceptance without rigor becomes “unexplained anomaly tolerance”
NASA calls out “anomaly resolution discipline” and warns that repeated acceptance of unexplained anomalies without root cause can lead to recurrence. That single lesson belongs on a poster in every compliance office. In corporate terms, “unexplained anomalies” are recurring control exceptions, repeat hotline themes, repeated third-party red flags, and audit findings that are “managed” rather than fixed. If leadership normalizes that pattern, it teaches the organization that closure is more important than correction.
Compliance action steps
- Require root cause analysis for repeat issues, not just incident closure.
- Set escalation thresholds for “repeat with no root cause” findings.
- Audit remediation quality, not only remediation completion.
Lesson 4: Partnerships fail when “shared accountability” is not operationalized
The report emphasizes that shared accountability in the commercial model was inconsistently understood and applied. It also notes that historical relationships and private conversations outside formal forums created perceptions of blurred boundaries, favoritism, and lack of objectivity, whether or not those perceptions were accurate. Compliance teams have seen this movie. Think distributors, joint ventures, outsourced compliance support, and major technology partners. If accountability is shared in theory but siloed in practice, something will fall through the cracks. Usually, it falls right into your lap when regulators arrive.
Compliance action steps
- Define “shared accountability” in contracts, governance charters, and escalation protocols.
- Ensure independence and objectivity are protected by design, not by personality.
- Create joint forums where data is shared broadly, dissent is recorded, and decisions are made openly.
Lesson 5: Burnout is a risk factor, and meeting chaos is a governance failure
The report’s recommendations recognize the operational reality: high-pressure environments can degrade decision quality. It calls for “pulse checks,” rotation of high-pressure responsibilities, contingency staffing, and time protection for deep work to proactively address burnout and improve decision-making under mission conditions. Compliance professionals should take that to heart. Crisis cadence is sometimes unavoidable. Permanent crisis cadence is a leadership choice. And it carries predictable consequences: shortcuts, missed details, weakened documentation, and poor judgment.
Compliance action steps
- Build surge staffing plans for investigations and incident response.
- Rotate incident commander roles when events extend beyond a few days.
- Protect time for analysis, not just meetings and status updates.
Lesson 6: Accountability must be visible, not performative
NASA does not bury the human dimension. The report contains leadership recommendations to speak openly with the joint team about leadership accountability, including concurrence with the report and reclassification as a mishap, and to hold a leadership-led stand-down day focused on reflection, accountability concerns, and rebuilding trust. For corporate leaders, this is where trust is won or lost after a crisis. Employees can tolerate a hard outcome. They struggle to tolerate spin. If your organization communicates externally with confidence but internally with vagueness, your culture learns the wrong lesson: optics first, truth second.
Compliance action steps
- After a major incident, publish an internal accountability and remediation plan with owners and timelines.
- Provide regular updates on what has been completed, what is delayed, and why.
- Make it safe for the workforce to ask questions in interactive forums, as NASA recommends.
Lesson 7: Trust repair requires a plan, not a pep talk
One of the most useful artifacts in the report is a sample Organizational Trust Plan. It sets a goal to rebuild trust by establishing clear expectations, open accountability, and shared commitment to safety and mission success. It includes objectives around transparent communication, acknowledging past challenges, reinforcing shared values, and structured engagement. It then lays out action steps: leadership engagement, facilitated sessions, outward expressions of accountability, teamwide rollout, training and coaching, and communication through a written plan and regular updates.
That is exactly the kind of operational discipline compliance leaders should bring to culture work. Culture does not change because someone gives a speech. Culture changes when the organization changes how it makes decisions, treats dissent, and follows through.
Five key takeaways for the compliance professional
- Clarify decision rights before the crisis. Ambiguity becomes politics under pressure.
- Make transparency non-negotiable. Perceived filtering of data destroys credibility.
- Do not normalize unexplained anomalies. Repeat issues without a root cause are future failures.
- Operationalize shared accountability with partners. Otherwise, it is a slogan.
- Rebuild trust with a written plan and visible accountability. Trust repair is a managed process.
In the end, the Starliner lesson for compliance is simple: controls matter, but culture decides whether controls work when it counts. If leadership cannot run disagreements well, cannot share data broadly, and cannot demonstrate accountability after the fact, the best-written compliance program in the world will fail the moment the pressure rises.