Categories
All Things Investigations

ATI In-House Insights: Challenges and Tips for Navigating a Changing Risk Landscape with Sarah Iles

In this episode of the ATI: In-House Insights Podcast, Mike DeBernardis speaks with seasoned in-house compliance leader Sarah Iles about navigating an ever-changing risk landscape shaped by political, geopolitical, regulatory, and technological shifts.

Sarah shares her background across manufacturing sectors and discusses how multinational compliance risks evolve as jurisdictional priorities shift, including sanctions, export controls, tariffs, sustainability, labor rights, data protection, and AI. They identify internal challenges, including a lack of infrastructure to address new risks, siloed ownership, and weak change management, and emphasize clear governance and accountability. Sarah advises “back to basics,” using DOJ’s Evaluation of Corporate Compliance Programs, focusing on real risk mitigation over form-heavy questionnaires, keeping communication channels open through formal committees and informal connections, scaling risk assessments appropriately, targeting communications to relevant audiences, escalating thoughtfully, and building resilient programs by expecting and embracing constant change.

Key highlights:

  • Geopolitics Drives Risk
  • Internal Adaptation Hurdles
  • Silos and Ownership
  • Culture and Change
  • Proactive Compliance Basics
  • Partnering With Business
  • Right-Sized Risk Assessments
  • Communicating Emerging Risks

Resources:

Sarah Iles LinkedIn

Mike DeBernardis LinkedIn

ATI: In-House Insights Podcast

Hughes Hubbard & Reed Website

Categories
Blog

AI Is Only as Good as the Data: What Compliance Leaders Need to Know About Data Readiness

There is an old lesson in compliance that remains evergreen: bad facts produce bad decisions. The same is true for data science: Garbage In, Garbage Out (GIGO). In the GenAI era, that lesson has a new twist. Bad data produces bad outputs at machine speed.

That is why the report, Taming the Complexity of AI Data Readiness, deserves the attention of every Chief Compliance Officer, compliance technologist, and board member who asks management, “What is our AI strategy?” The better follow-up question is, “What is our data readiness strategy?” Because the report makes one point with unmistakable clarity: the model is not the mission; the data foundation is.

For compliance professionals, this is not a technical side issue. It is central to the enterprise risk conversation. If your organization is training, testing, or deploying AI on messy, siloed, biased, stale, or poorly governed data, you are not building a competitive advantage. You are industrializing risk.

The Dirty Little Secret of Enterprise AI

The report lays out a reality that will not surprise anyone who has lived through a data initiative. Most organizations are not ready. Only 7% of survey respondents said their company’s data was completely ready for AI adoption. By contrast, 51% said it was only somewhat ready, while 27% said it was not very or not at all ready. Only 42% said their organization had high trust in its AI data, and 73% agreed their company should prioritize AI data quality more than it currently does. That should give every compliance officer pause.

We are living through a corporate rush toward GenAI, yet most companies are still stuck at the same old starting line: fragmented, inconsistent, poorly governed data. Many AI conversations inside companies still begin with use cases, copilots, and vendor demos. Far fewer begin with data lineage, data permissions, data quality, or governance maturity. That is a mistake.

If the underlying data is unreliable, the downstream output will be unreliable as well. Worse, it may arrive dressed up in polished prose, persuasive charts, or tidy summaries that create a false sense of confidence. In compliance, that is especially dangerous. Whether the use case is sanctions screening, due diligence, internal investigations, policy management, financial controls, or regulatory reporting, a bad answer delivered quickly is still a bad answer.

Bad Data Is Not Just a Tech Problem

One of the most useful parts of the report is how it frames the core barriers. The top challenge cited by respondents, at 56%, was siloed data and difficulty integrating sources. A lack of a clear data strategy followed at 44%, and data quality or bias issues at 41%. Other concerns included regulatory constraints on data use, unclear data lineage, inadequate security, and outdated data. Every one of those should sound familiar to compliance professionals.

Siloed data means incomplete visibility. Weak lineage means you may not be able to defend how an answer was generated. Bias in the data means distorted outputs. Outdated data means inaccurate decisions. Weak security exposes sensitive information. Regulatory constraints mean the company may not even have the right to use certain data the way its AI aspirations assume.

The report underscores this point. 52% of respondents identified inaccurate or biased AI results as a top concern, while 40% cited the loss of security or intellectual property. That is not abstract. That is the modern compliance risk register.

Can We Trust the Data?

A quote from Teresa Tung of Accenture in the report is worth lingering over. She said data readiness means “you can access data to see an accurate view of what is happening in your business and what you can do about it.” That is also a very good working definition of compliance intelligence.

A mature compliance program helps a company understand what is happening inside the business and what should be done in response. That means your hotline data, your gifts and entertainment data, your training metrics, your third-party files, your investigation records, and your control data all need to mean what you think they mean.

The report makes this point with a simple example. Price data is not useful unless you know whether it is in U.S. or Australian dollars, whether it is a unit or bulk price, and when it applies. The compliance equivalent is easy to imagine. A third-party risk flag is not useful unless you know what triggered it, what jurisdiction it covers, how recently it was refreshed, what source produced it, and whether anyone validated it. Context is a control. Without it, data can mislead just as easily as it can inform.

Why This Is Becoming a Board-Level Issue

Another important finding is that only 23% of organizations have created a data strategy for AI adoption, although 53% are currently developing one. In other words, companies know they have a problem, but most are still working through it. This is where compliance can truly function as a business enabler.

The best compliance leaders know that governance is not the enemy of innovation. Governance is what makes innovation scalable and sustainable. If the business wants to use AI at scale, compliance should request a documented AI data strategy that addresses security, privacy, data quality, governance, accessibility, bias management, and alignment with business objectives.

The report found that security and protection of sensitive data were the most critical elements of such plans, at 59%, followed by clean, usable data quality at 46% and data governance at 41%. That is not just an IT checklist. That is a board conversation.

Bring AI to the Data

The report also discusses a concept compliance professionals need to understand: data gravity. Large and sensitive data sets tend to stay where they are because moving them is costly, slow, and risky. Increasingly, organizations are turning to architectures that bring AI processing to the data rather than moving data to the model. The report highlights approaches, such as zero-copy access and containerized applications, that can reduce latency, control costs, and address security and sovereignty concerns. This matters greatly for compliance.

Many regulated environments cannot simply move sensitive data across systems or borders because a vendor wants a cleaner AI workflow. Privacy laws, localization rules, contracts, and plain good judgment all cut against that approach. If AI can be brought to the data rather than copying data into multiple new environments, the organization may reduce both operational and compliance risk.

Compliance officers do not need to become cloud architects. But they do need to ask the right questions. Are we duplicating sensitive data unnecessarily? Are we crossing jurisdictional lines? Can we explain lineage, access, and security? Are we creating an AI environment that looks controlled or improvised?

Agentic AI: Real Promise, Real Risk

The report is optimistic about the potential of agentic AI for data management. 47% of respondents said their organizations believe agentic AI can solve data quality issues, and 65% expect many business processes to be augmented or replaced by agentic AI over the next 2 years. Experts cited benefits such as mapping data, documenting it, performing quality checks, monitoring drift, and automating routine tasks that previously required significant manual effort.

There is real promise here. Compliance teams spend far too much time on manual work that adds little strategic value. Tools that can responsibly automate mapping, documentation, testing, triage, or drift monitoring deserve serious attention.

But this is no place for magical thinking. The report is equally clear that success requires the right team: data engineers, domain experts, prompt expertise, and a product owner aligned to a business objective. That is the lesson. Agentic AI does not eliminate the need for governance. It raises the stakes for governance. If you automate poor judgment on top of poor data, you do not get efficiency. You get scalable failure.

Five Questions for Every CCO

So what should compliance leaders do now? Start with five questions.

  1. Which AI use cases in our company depend on sensitive, regulated, or high-risk data?
  2. Can we explain the lineage, quality, freshness, permissions, and context of that data?
  3. Do we have a documented AI data strategy, or are we confusing pilots with governance?
  4. Are we moving data in ways that create avoidable privacy, security, or sovereignty risks?
  5. Who owns the meaning of the data?

That final question may be the most important. The report stresses that the business must own the data so it is described properly and used correctly. Data is not just a technical asset. It is a business asset with legal, ethical, and operational meaning. Compliance should insist that meaning be defined before AI starts drawing inferences from it.

The Bottom Line

The great temptation in the AI era is to focus on the model’s brilliance. The wiser course is to focus on the data’s readiness. That is where trust begins. That is where defensibility begins. And that is where sustainable value begins. For compliance professionals, the message is plain. AI governance that ignores data readiness is not governance at all. It is wishful thinking with a dashboard.

The organizations that win with AI will not simply have more tools. They will have better data, better lineage, better controls, better discipline, and better judgment about when and how to use AI. In compliance, that is not glamorous. But it is where real success usually lives.


Aly McDevitt Week: Part 5 – Ransomware, Crisis Response, and the Compliance Imperative to Move Fast

This week, I want to pay tribute to my former Compliance Week colleague, Aly McDevitt, who announced on LinkedIn that she was retiring from CW to become a full-time mother. I wrote a tribute to Aly, which appeared in CW last week. To prepare to write that piece, I re-read her long-form case studies, which she wrote over the years for CW. They are as compelling today as when she wrote them. This week, I will be paying tribute to Aly by reviewing five of her pieces. The schedule for this week is:

Monday: A Tale of Two Storms

Tuesday: Coming Clean

Wednesday: Inside a Dark Pact

Thursday: Reaching Into the Value Chain

Friday: Ransomware Attack: An immersive case study of a cyber event based on real-life scenarios

McDevitt took a different but highly effective approach in this case study. Rather than centering the story on a single historical corporate scandal, she crafted an immersive fictional scenario grounded in real-life attacks, expert interviews, and public guidance. Compliance Week made clear that, while the company and its characters are imagined, the legal, operational, and compliance issues are very real. That makes this piece especially valuable for compliance professionals because it is less a postmortem of one company and more a practical field manual for the next crisis.

McDevitt’s story begins where many cyber incidents begin: with a person, not a machine.

A longtime employee, Betsy, receives an “urgent” email that appears to be from her boss. She clicks a malicious link, lands on a phony, internal-looking site, realizes too late that something is wrong, and then makes the mistake that turns a bad moment into a corporate crisis: she does not report it. Her silence gives the attacker time. Within days, the company, Vulnerable Electric (VE), a private utility serving 1.4 million customers with about 600 employees and $250 million in annual revenue, is facing a full-blown ransomware attack.

That is the first lesson, and McDevitt drives it home with precision. Ransomware is often described as a technology problem, but the first failure is frequently human, organizational, and cultural. Betsy clicked. But more importantly, she hesitated, feared blame, and kept quiet. As McDevitt explains through the expert commentary, her biggest mistake was not simply opening the link. It was actively deciding not to report the incident to the proper internal authority.

For compliance officers, that point should sound very familiar. Whether the issue is corruption, harassment, sanctions, safety, or cyber, organizations do not fail only because something bad happens. They fail because people do not feel safe reporting it quickly.

McDevitt also lays out why this issue matters so much now. She notes that ransomware payments in 2020 reached roughly $350 million, a more than 300 percent increase from the prior year, and that proactive prevention is no longer optional. She further situates the case study in the context of critical infrastructure, noting that entities such as utilities are subject to heightened scrutiny and are encouraged to align with the NIST cybersecurity framework. In other words, ransomware is not just an IT nuisance. It is an enterprise risk, a regulatory risk, and in some sectors a national security risk.

Once the attack is recognized, McDevitt shows the company doing something right: it moves into a structured response. The CEO activates the full cyber incident response team, or CIRT, and the war room includes not only technical leaders and legal counsel, but also the chief compliance officer, the head of communications, external incident response professionals, and other essential decision-makers. This is exactly what a mature response should look like. Cyber incidents do not fall under a single function. They are enterprise events.

I particularly appreciated how McDevitt uses the case study to underline the role of compliance. The CCO is not there as decoration. The article makes clear that if employee data has been exfiltrated, the incident constitutes a personal data disclosure with potentially local, state, and international notification consequences, and that compliance and legal personnel should be in the room from the start. That is a crucial point for corporate compliance professionals. Cyber risk management is not separate from compliance. It is now one of compliance’s core operating terrains.

McDevitt also captures the psychology of the first 36 hours. Anthony Ferrante says those hours are extremely stressful for a CEO, who is simultaneously thinking about operations, data, reputation, and people. That observation matters because it explains why preparation before an attack is so important. You do not want your executives inventing a process under duress. McDevitt reports that VE had already created an incident playbook with roles, escalation steps, and a five-part response framework: facts, business impact, root cause, corrective actions, and lessons learned. That is the kind of disciplined structure compliance leaders should insist upon.

Another strength of McDevitt’s reporting is her treatment of communications. Too many organizations still believe communications should be brought in late, after the lawyers and technologists finish their work. McDevitt, through multiple expert voices, makes the opposite case. Communications should have a seat at the table, not at the back wall. The reason is straightforward: stakeholders will forgive many things, but they will not forgive caginess. VE’s communications lead rightly argues that employees and customers should hear from the company first, not from the media or the attacker.

This point becomes even sharper when McDevitt contrasts VE’s approach with the real-life story of “Melvin,” an employee at another firm that remained offline for 10 days with no formal communication and did not disclose the sensitive data breach to employees in a timely or transparent way. That section may be the most important communications lesson in the entire piece. Employees are not bystanders. They are among the primary victims of a data breach, and they know when something is wrong. Silence destroys trust.

Then comes the hard question at the center of nearly every ransomware story: Do you pay?

McDevitt wisely resists easy moralizing. She notes the FBI’s official position is not to pay, because payment fuels the criminal business model and does not guarantee restoration. Yet she also reports the practical view of experienced practitioners: payment is not illegal per se, and companies often face a grim choice among bad options. The anonymous chief compliance officer quoted in the case study says it best: there are no good options, only the least bad option.

McDevitt’s two parallel paths, pay and do not pay, are particularly useful because they show that neither choice is clean. In Path A, VE pays $5 million, gets imperfect decryption support, recovers faster, but then faces scrutiny over whether it should have consulted OFAC before payment and whether it may have paid a sanctioned party. In Path B, VE does not pay, endures a longer recovery, suffers a data breach, and still faces reputational and legal fallout. McDevitt’s point is not that one route is right and one is wrong. Her point is that ransomware decision-making is governance under pressure.

That is why the postmortem matters so much. McDevitt closes the case study by emphasizing that the long-term impacts fall into three risk buckets: reputational, legal, and regulatory. She then turns to practical lessons: train the workforce, strengthen spam filters, run tabletop exercises, isolate infected devices immediately, secure backups offline, contact law enforcement quickly, do not rush engagement with the attacker, and communicate with each stakeholder group in a timely and tailored way. She also adds smart recommendations on canary files, forensic retainers, access reviews, logging, threat intelligence monitoring, and industry information sharing.

Finally, McDevitt ends on a note that compliance professionals should not miss. Betsy is not scapegoated. She is thanked for telling the truth and invited to participate in a phishing-resilience campaign for other employees. That is not sentimentality. That is culture. If your response to human error is humiliation, people will hide problems. If your response is accountability plus learning, people will surface them.

That may be the most important compliance lesson of all. Ransomware is a cyber crisis, but surviving it depends on culture, governance, and trust just as much as on technology.

I hope you have enjoyed reading about Aly’s case studies for CW. I am a columnist for Compliance Week.


The Fall of the Alamo and Empowerment of the Compliance Professional

Today is the anniversary of the most storied of the many historic days in the history of the great state of Texas: the fall of the Alamo. While March 2, Texas Independence Day, is when Texas declared its independence from Mexico, and April 21, San Jacinto Day, is when Texas won that independence, and both probably have more lasting historical significance, if there is one word that Texas is known for worldwide, it is the Alamo. The Alamo was a crumbling Catholic mission in San Antonio where 189 men held out for 13 days against the Mexican Army of General Santa Anna, which numbered approximately 5,000. On March 6, 1836, Santa Anna unleashed his forces, which overran the mission and killed all the fighting men. Those who did not die in the attack were executed, and the bodies of the dead were unceremoniously burned. Proving he was not without chivalry, Santa Anna spared the lives of the Alamo’s women, children, and slaves. For Texans across the globe, this is our day.

While Thermopylae will always go down as the greatest ‘Last Stand’ battle in history, the Alamo is in contention for Number 2. Like all such battles, sometimes the myth becomes the legend, and the legend becomes the reality. At Thermopylae, the myth is that 300 Spartans stood alone against the entire Persian Army. However, there was also a force of 700 Thespians (not actors, but citizens of the city-state of Thespiae) and a contingent of 400 Thebans fighting alongside the 300 Spartans. Somehow, their sacrifices have been lost to history.

Likewise, the legend that elevates the Alamo battle to myth is the line in the sand. The story goes that William Barrett Travis, on March 5, the day before the final attack, when it was clear that no reinforcements would arrive in time and everyone who stayed would perish, called all his men into the plaza of the compound. He then pulled out his saber and drew a line in the ground. He said that they were surrounded and would all likely die if they stayed. Any man who wanted to stay and die for Texas should cross the line and stand with him. Only one man, Moses Rose, declined to cross the line. The immediate survivors of the battle did not relate this story after they were rescued, and the line-in-the-sand tale did not appear until the 1880s.

But the thing about ‘last stand’ battles is that they generally turn out badly for the losers. Very badly. I thought about this when Chuck Duross, back when he was head of the Department of Justice’s (DOJ) Foreign Corrupt Practices Act (FCPA) unit, said at a conference that he viewed anti-corruption compliance practitioners as “The Alamo,” the last line of defense against compliance violations. I gingerly raised my hand and acknowledged his tribute to the great state of Texas, but pointed out that all the defenders were slaughtered, so perhaps another analogy was appropriate. Everyone had a good laugh at the conference back then. But in reflecting on the history of my state and what the Alamo means to us all, I have wondered if my initial response was too facile.

What happens to a Chief Compliance Officer (CCO) or compliance practitioner when they have to make a stand? Do they make the ultimate corporate sacrifice? Will they receive the equivalent of the corporate execution that the defenders of the Alamo received? This worry has arisen even where the person had resigned to pursue other opportunities. Michael Scher has been a leading voice in protecting compliance officers. In a post entitled Michael Scher Talks to the Feds, he related how a compliance officer (CO) working in Asia asked for recognition and protection: “A CO will not stand up against the huge pressure to maintain compliance standards if he does not get sufficient protection under the law. Most COs working in the overseas operations of U.S. companies are not U.S. citizens, but they are usually the first to identify violations. Since the FCPA deals with foreign corruption, how could the DOJ and SEC not protect these COs?”

The DOJ is now looking at the quality of your CCO and compliance function and how they are perceived, treated, and received in the corporate setting. In the 2024 Evaluation of Corporate Compliance Programs (2024 ECCP), the DOJ expanded its inquiry to evaluate the “sufficiency of the personnel and resources within the compliance function, in particular, whether those responsible for compliance have: (1) sufficient seniority within the organization; (2) sufficient resources, namely, staff to effectively undertake the requisite auditing, documentation, and analysis; and (3) sufficient autonomy from management, such as direct access to the board of directors or the board’s audit committee.”

Further, there were four specific areas of inquiry and evaluation: (1) Structure, (2) Experience and Qualifications, (3) Funding and Resources, and (4) Autonomy.

In the section entitled “Structure,” the evaluation made the following inquiries:

  • How does the compliance function compare with other strategic functions in the company in terms of stature, compensation levels, rank/title, reporting line, resources, and access to key decision-makers?
  • What has been the turnover rate for compliance and relevant control function personnel?
  • What role has compliance played in the company’s strategic and operational decisions? How has the company responded to specific instances where compliance raised concerns?
  • Have any transactions or deals been stopped, modified, or further scrutinized due to compliance concerns?

In the section entitled “Experience and Qualifications,” the 2024 ECCP made the following inquiries:

  • Do compliance and control personnel have the appropriate experience and qualifications for their roles and responsibilities?
  • Has the level of experience and qualifications in these roles changed over time?
  • Who reviews the compliance function’s performance, and what is the review process?

In the area of “Funding and Resources,” the 2024 ECCP asked:

  • Has there been sufficient staffing for compliance personnel to effectively audit, document, analyze, and act on the results of the compliance efforts?
  • Has the company allocated sufficient funds for this?
  • Have there been times when requests for resources by compliance and control functions have been denied, and if so, on what grounds?

Finally, in the area of “Autonomy,” the 2024 ECCP asked:

  • Do the compliance and relevant control functions have direct reporting lines to any member of the board of directors and/or the audit committee?
  • How often do they meet with directors?
  • Are members of the senior management present for these meetings?
  • How does the company ensure the independence of the compliance and control personnel?

These inquiries are deeper and more robust than prior guidance, focusing DOJ attention squarely on the CCO and the compliance team. If your compliance function is run on a shoestring, you will likely be downgraded on your overall commitment to FCPA compliance. The DOJ will also look at promotions and other advancement opportunities for compliance personnel within an organization. Not many organizations have a compliance function so mature that a CCO is appointed to another senior-level position.

Upon further reflection, Duross was correct, and the Alamo reference was appropriate for compliance officers. Sometimes we must draw a line in the sand with management. And when we do, we have to cross that line to get on the right side of the issue, and the consequences be damned. The DOJ has clarified that it expects CCOs and compliance professionals to draw that line when necessary, and that when they do, companies must heed their warnings.


The Starliner, Culture and Compliance: Leadership Lessons from a NASA Investigation Report

Corporate compliance professionals spend a lot of time talking about controls, training, third parties, and investigations. Yet the hard truth is that the most important control environment sits above all of that: leadership behavior and the culture it creates. That is why this NASA investigation report on the Boeing CST-100 Starliner Crewed Flight Test (CFT) is such a useful case study. It is a technical report, to be sure. But it is also a cultural, leadership, and governance report. NASA’s bottom line is unambiguous: technical excellence and safety require transparent communication and clear roles and responsibilities, not as slogans, but as operating requirements that must be institutionalized so safety is never compromised in pursuit of schedule or cost.

If you are a Chief Compliance Officer, General Counsel, or business leader, you should read this report the way you read an enforcement action. Not to gawk. Not to assign blame. But to harvest lessons for your own organization before you have your own high-visibility close call.

The incident(s) that led to the report

The CFT mission launched June 5, 2024, as a pivotal step toward certifying Starliner to transport astronauts to the International Space Station. It was planned as an 8-to-14-day mission but was extended to 93 days after significant propulsion system anomalies emerged. Ultimately, the Starliner capsule returned uncrewed, while astronauts Barry “Butch” Wilmore and Sunita “Suni” Williams returned aboard SpaceX’s Crew-9 Dragon in March 2025. In February 2025, NASA chartered a Program Investigation Team (PIT) to examine the technical, organizational, and cultural factors contributing to the anomalies.

The report describes four major hardware anomaly areas. These include Service Module RCS thruster fail-offs that temporarily caused a loss of six-degrees-of-freedom (6DOF) control during ISS rendezvous and required in-situ troubleshooting to recover enough capability to dock; a Crew Module thruster failure during descent that reduced fault tolerance; and helium manifold leaks, in which seven of eight Service Module helium manifolds leaked during the mission. The PIT further determined that the 6DOF loss during rendezvous met the criteria for a Type A mishap (or at least a high-visibility close call), underscoring how close the program came to a very different ending.

That is the “what.” For compliance professionals, the “so what” is that NASA did not treat this as a purely engineering problem. It treated it as an integrated system failure, in which culture and leadership either reduce risk or magnify it.

Lesson 1: Decision authority is culture, not paperwork

One of the report’s clearest threads is that fragmented roles and responsibilities delayed decision-making and eroded confidence. In the compliance world, unclear decision rights become the breeding ground for “informal governance”: private conversations, end-runs around committees, and decisions that are never fully documented. Over time, that becomes a shadow-control environment that your policies cannot touch.

Compliance action steps

  • Define decision rights for the riskiest calls (high-risk third parties, market entry, major remediation, critical incidents).
  • Require a short, written record of: facts reviewed, options considered, dissent captured, decision made, and owner accountable.
  • Separate “recommendation authority” from “approval authority” so everyone knows where they sit.

Lesson 2: Transparency is a control, and selective data sharing destroys trust

The report explicitly flags that the lack of data access fueled concerns about selective information sharing. Interviewees described frustration that information could be filtered, selectively chosen, or sanitized, which eroded confidence in the process and people. It also notes reports of questions being labeled “too detailed” or “out of scope” without mechanisms to ensure concerns were addressed. That is the compliance danger zone. When teams believe the narrative matters more than the data, they stop escalating early. They start documenting defensively. They seek safety in silence.

Compliance action steps

  • Build “open data” expectations into your incident response and investigative protocols.
  • Create a defined pathway for technical or subject-matter dissent to be logged, reviewed, and dispositioned.
  • Treat meeting notes and decisions as governed records, not optional artifacts.

Lesson 3: Risk acceptance without rigor becomes “unexplained anomaly tolerance”

NASA calls out “anomaly resolution discipline” and warns that repeated acceptance of unexplained anomalies without root cause can lead to recurrence. That single lesson belongs on a poster in every compliance office. In corporate terms, “unexplained anomalies” are recurring control exceptions, repeat hotline themes, repeated third-party red flags, and audit findings that are “managed” rather than fixed. If leadership normalizes that pattern, it teaches the organization that closure is more important than correction.

Compliance action steps

  • Require root cause analysis for repeat issues, not just incident closure.
  • Set escalation thresholds for “repeat with no root cause” findings.
  • Audit remediation quality, not only remediation completion.

Lesson 4: Partnerships fail when “shared accountability” is not operationalized

The report emphasizes that shared accountability in the commercial model was inconsistently understood and applied. It also notes that historical relationships and private conversations outside formal forums created perceptions of blurred boundaries, favoritism, and lack of objectivity, whether or not those perceptions were accurate. Compliance teams have seen this movie. Think distributors, joint ventures, outsourced compliance support, and major technology partners. If accountability is shared in theory but siloed in practice, something will fall through the cracks. Usually, it falls right into your lap when regulators arrive.

Compliance action steps

  • Define “shared accountability” in contracts, governance charters, and escalation protocols.
  • Ensure independence and objectivity are protected by design, not by personality.
  • Create joint forums where data is shared broadly, dissent is recorded, and decisions are made openly.

Lesson 5: Burnout is a risk factor, and meeting chaos is a governance failure

The report’s recommendations recognize the operational reality: high-pressure environments can degrade decision quality. It calls for “pulse checks,” rotation of high-pressure responsibilities, contingency staffing, and time protection for deep work to proactively address burnout and improve decision-making under mission conditions. Compliance professionals should take that to heart. Crisis cadence is sometimes unavoidable. Permanent crisis cadence is a leadership choice. And it carries predictable consequences: shortcuts, missed details, weakened documentation, and poor judgment.

Compliance action steps

  • Build surge staffing plans for investigations and incident response.
  • Rotate incident commander roles when events extend beyond days.
  • Protect time for analysis, not just meetings and status updates.

Lesson 6: Accountability must be visible, not performative

NASA does not bury the human dimension. The report contains leadership recommendations to speak openly with the joint team about leadership accountability, including concurrence with the report and reclassification as a mishap, and to hold a leadership-led stand-down day focused on reflection, accountability concerns, and rebuilding trust. For corporate leaders, this is where trust is won or lost after a crisis. Employees can tolerate a hard outcome. They struggle to tolerate spin. If your organization communicates externally with confidence but internally with vagueness, your culture learns the wrong lesson: optics first, truth second.

Compliance action steps

  • After a major incident, publish an internal accountability and remediation plan with owners and timelines.
  • Provide regular updates on what has been completed, what is delayed, and why.
  • Make it safe for the workforce to ask questions in interactive forums, as NASA recommends.

Lesson 7: Trust repair requires a plan, not a pep talk

One of the most useful artifacts in the report is a sample Organizational Trust Plan. It sets a goal to rebuild trust by establishing clear expectations, open accountability, and shared commitment to safety and mission success. It includes objectives around transparent communication, acknowledging past challenges, reinforcing shared values, and structured engagement. It then lays out action steps: leadership engagement, facilitated sessions, outward expressions of accountability, teamwide rollout, training and coaching, and communication through a written plan and regular updates.

That is exactly the kind of operational discipline compliance leaders should bring to culture work. Culture does not change because someone gives a speech. Culture changes when the organization changes how it makes decisions, treats dissent, and follows through.

Five key takeaways for the compliance professional

  1. Clarify decision rights before the crisis. Ambiguity becomes politics under pressure.
  2. Make transparency non-negotiable. Perceived filtering of data destroys credibility.
  3. Do not normalize unexplained anomalies. Repeat issues without a root cause are future failures.
  4. Operationalize shared accountability with partners. Otherwise, it is a slogan.
  5. Rebuild trust with a written plan and visible accountability. Trust repair is a managed process.

In the end, the Starliner lesson for compliance is simple: controls matter, but culture decides whether controls work when it counts. If leadership cannot run disagreements well, cannot share data broadly, and cannot demonstrate accountability after the fact, the best-written compliance program in the world will fail the moment the pressure rises.

Categories
Blog

5 Strategic Board Playbooks for AI Risk (and a Bootcamp)

Artificial intelligence is no longer a future-state technology risk. It is a current-state governance issue. If AI is being deployed inside governance, risk, and compliance functions, then it is already shaping how your company detects misconduct, prioritizes investigations, manages regulatory obligations, and measures program effectiveness. That makes AI risk a board agenda item, not a management footnote.

In an innovation-forward organization, the goal is not to slow AI adoption. The goal is to professionalize it. Boards of Directors and Chief Compliance Officers (CCOs) should approach AI the way they approached cybersecurity a decade ago: move it from “interesting updates” to a structured reporting cadence with measurable controls, clear accountability, and director education that raises the collective literacy of the room.

Today, we consider five strategic playbooks designed for a Board of Directors and a CCO operating in an industry-agnostic environment, building AI in-house, without a model registry yet, and with a cross-functional AI governance committee chaired and owned by Compliance. The program must also work across multiple regulatory regimes, including the DOJ Evaluation of Corporate Compliance Programs (ECCP), the EU AI Act, and a growing patchwork of state laws. We end with a proposal for a Board of Directors boot camp on directors’ responsibilities to oversee AI in their organizations.

Playbook 1: Put AI Risk on the Calendar, Not on the Wish List

If AI risk is always “important,” it becomes perpetually postponed. The first play is procedural: create a standing quarterly agenda item with a consistent structure.

Quarterly board agenda structure (20–30 minutes):

  1. What changed since last quarter? New use cases, material model changes, new regulations, and major control exceptions.
  2. Full AI risk dashboard, with 8–10 board KPIs, trends, and thresholds.
  3. Top risks and mitigations: three headline risks with actions, owners, and dates.
  4. Assurance and testing: internal audit coverage, red-teaming results, and remediation progress.
  5. Decisions required: policy approvals, risk appetite adjustments, and resourcing.

This cadence does two things. First, it forces repeatability. Second, it creates institutional memory. Boards govern better when they can compare quarter-over-quarter progress, not when they receive one-off deep dives that cannot be benchmarked.

Playbook 2: Build the AI Governance Operating Model Around Compliance Ownership

In this design, Compliance owns AI governance and its use throughout the organization, supported by a cross-functional AI governance committee. That is a strong model, but only if responsibilities are made explicit.

Three lines of accountability:

  • Compliance (Owner): policy, risk framework, controls, training, and board reporting.
  • AI Governance Committee (Integrator): cross-functional prioritization, approvals, escalation, and issue resolution.
  • Build Teams (Operators): documentation, testing, change control, and implementation evidence.

Boards should ask one simple question each quarter: Who is accountable for AI governance, and how do we know it is working? If the answer is “everyone,” then the real answer is “no one.” Your model makes the answer clear: Compliance owns it, and the committee operationalizes it.

Playbook 3: Create the AI Registry Before You Argue About Controls

In this scenario, there is no model registry yet. That is the first operational gap to close, because you cannot govern what you cannot inventory. In a GRC context, this is not a “nice to have.” Without an inventory, you cannot prove coverage, you cannot scope an audit, you cannot define reporting, and you cannot explain to regulators how you know where AI is influencing decisions.

Minimum viable AI registry fields (start simple):

  • Use case name and business owner;
  • Purpose and decision impact (advisory vs. automated);
  • Data sources and data sensitivity classification;
  • Model type and version, with change log;
  • Key risks (bias, privacy, explainability, security, reliability);
  • Controls mapped to the risk (testing, monitoring, approvals);
  • Deployment status (pilot, production, retired); and
  • Incident history and open issues.
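The minimum viable registry above can be sketched in code. The following is an illustrative Python sketch, not a mandated schema: every field name, value, and the `inventory_coverage_rate` helper are assumptions chosen to mirror the bullet list and the coverage metric discussed below.

```python
from dataclasses import dataclass, field

@dataclass
class AIRegistryEntry:
    """One row in a minimal AI use-case registry (all field names are illustrative)."""
    use_case: str
    business_owner: str
    decision_impact: str            # "advisory" or "automated"
    data_sensitivity: str           # e.g., "public", "internal", "restricted"
    model_version: str
    key_risks: list = field(default_factory=list)     # bias, privacy, explainability, ...
    controls: list = field(default_factory=list)      # controls mapped to each risk
    status: str = "pilot"           # "pilot", "production", or "retired"
    open_issues: int = 0            # incident history and open issues

def inventory_coverage_rate(registered: int, estimated_footprint: int) -> float:
    """Share (as a percentage) of the estimated AI footprint captured in the registry."""
    if estimated_footprint == 0:
        return 0.0
    return round(registered / estimated_footprint * 100, 1)

# Hypothetical example entry and coverage calculation.
entry = AIRegistryEntry(
    use_case="Hotline triage assistant",
    business_owner="Compliance Operations",
    decision_impact="advisory",
    data_sensitivity="restricted",
    model_version="v1.2",
    key_risks=["bias", "privacy"],
    controls=["pre-deployment review", "drift monitoring"],
)
print(inventory_coverage_rate(registered=42, estimated_footprint=60))  # → 70.0
```

Starting simple like this also makes the board-level coverage metric trivial to produce each quarter: the registry itself becomes the evidence.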

Boards do not need the registry details. They need the coverage metric and the assurance that the registry is complete enough to support governance.

Playbook 4: Align to the ECCP, EU AI Act, and State Laws Without Creating a Paper Program

Many organizations make a predictable mistake: they respond to multiple frameworks by producing multiple binders. That creates activity, not effectiveness. A better approach is to use a single control architecture to map to multiple requirements. The board should see one integrated story:

  • DOJ ECCP lens: effectiveness, testing, continuous improvement, accountability, and resourcing;
  • EU AI Act lens: risk classification, transparency, human oversight, quality management, and post-market monitoring; and
  • State law lens: privacy, consumer protection concepts, discrimination prohibitions, and notice requirements where applicable.

This mapping becomes powerful when it ties back to the board dashboard. The board is not there to read statutes. The board is there to govern outcomes.

Playbook 5: Use a Board Dashboard That Measures Coverage, Control Health, and Outcomes

The dashboard should pair a short narrative with 8–10 KPIs. Here is a board-level set designed for AI in governance, risk, and compliance functions, assuming an in-house build with internal audit and red teaming for assurance.

Board AI Governance KPIs (8–10)

1. AI Inventory Coverage Rate

Percentage of AI use cases captured in the registry versus estimated footprint.

2. Risk Classification Completion Rate

Percentage of registered use cases risk-classified (EU AI Act-style tiers or internal tiers).

3. Pre-Deployment Review Pass Rate

Percentage of deployments that cleared required testing and approvals on first submission.

4. Model Change Control Compliance

Percentage of model changes executed with documented approvals, testing evidence, and rollback plans.

5. Explainability and Documentation Score

Percentage of in-scope use cases with complete documentation, rationale, and user guidance.

6. Monitoring Coverage

Percentage of production use cases with active monitoring for drift, anomalies, and performance degradation.

7. Issue Closure Velocity

Median days to close AI governance issues, by severity.

8. Internal Audit Coverage and Findings Trend

Number of audits completed, rating distribution, repeat findings, and remediation status.

9. Red Team Findings and Remediation Rate

Number of material vulnerabilities identified and percentage remediated within the target time.

10. Escalations and Incident Rate

Number of AI-related incidents or escalations (including near-misses), with severity and lessons learned.

These KPIs do not require vendor controls and align with an in-house build model. They also support both board oversight and compliance management.
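Several of these KPIs reduce to simple, repeatable calculations that can run against case management or GRC platform exports. The sketch below is illustrative only: the function names and sample figures are assumptions, shown for KPIs 7 (Issue Closure Velocity) and 10 (Escalations and Incident Rate's alert-conversion cousin).

```python
from statistics import median

def alert_to_action_rate(alerts_raised: int, alerts_actioned: int) -> float:
    """Percentage of AI-generated alerts that led to a confirmed issue,
    investigation, remediation, or control enhancement."""
    if alerts_raised == 0:
        return 0.0
    return round(alerts_actioned / alerts_raised * 100, 1)

def issue_closure_velocity(days_to_close: list) -> float:
    """Median days to close AI governance issues (report separately by severity)."""
    return median(days_to_close) if days_to_close else 0.0

# Hypothetical quarter: 400 alerts, 36 resulted in action; five issues closed.
print(alert_to_action_rate(400, 36))                 # → 9.0 (percent)
print(issue_closure_velocity([12, 30, 7, 45, 21]))   # → 21 (days)
```

Using the median rather than the mean for closure velocity keeps one long-running investigation from masking the typical experience, which is usually what the board needs to see trend over trend.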

AI Director Boot Camp

For a board with a medium level of AI literacy, a boot camp is the right investment. Directors do not need to become engineers. They need a common vocabulary and a governance frame. The recommended design is a practical, half-day session covering the following.

  1. AI in the company’s operating model: where it touches decisions, risk, and compliance outcomes.
  2. AI risk taxonomy: bias, privacy, security, explainability, reliability, and third-party risk.
  3. Regulatory landscape overview: the DOJ ECCP approach to effectiveness, the EU AI Act risk framing, and key state law themes.
  4. Governance model walkthrough: the registry, risk classification, controls, monitoring, and escalation.
  5. Tabletop exercise: an AI incident in a GRC context, such as false negatives in monitoring or biased triage.
  6. Board oversight duties: which questions to ask quarterly, which thresholds trigger escalation, and how directors can meet their obligations.

The deliverable from the boot camp should be a one-page “Director AI Oversight Guide” with the KPIs, escalation triggers, and the quarterly agenda structure.

The Bottom Line for Boards and CCOs

This is the moment to treat AI risk like a board-governed discipline. The organizations that get it right will not be the ones with the longest AI policy. They will be the ones with the clearest operating model, the most reliable reporting cadence, and the strongest evidence of control effectiveness.

If Compliance owns AI governance, then Compliance must also own the proof. That proof is delivered through a registry, a quarterly board agenda item, a balanced KPI dashboard, and assurance through internal audit and red teaming. Add a director boot camp to create shared understanding, and you have the beginnings of a program that is innovation-forward and regulator-ready.

That is the strategic playbook: not fear, not hype, but governance.

Categories
Blog

When Your AI Chat Becomes Exhibit A: What United States v. Heppner Means for Compliance Professionals

There are court rulings that quietly shape doctrine, and others that detonate assumptions. The recent decision of Judge Jed Rakoff of the Southern District of New York in United States v. Heppner falls into the latter category. In a February 10, 2026, ruling, the Court made clear that neither the attorney-client privilege nor the work-product doctrine protected materials generated through a third-party generative AI platform. In plain English, what a defendant typed into a public AI system was discoverable.

For compliance professionals, this is not a narrow litigation footnote. It is a flashing red warning light. The era of casual AI experimentation inside corporations is over. Governance now must catch up with adoption. Today, we will consider the Court’s ruling and why it matters to a Chief Compliance Officer.

The Court’s Core Holding

The defendant in Heppner had used a third-party generative AI tool to draft and refine materials that were later shared with counsel. When prosecutors sought production, the defense argued that these materials were protected by privilege and work-product protections. The court disagreed.

The reasoning was straightforward and, frankly, predictable:

  • The AI tool was not an attorney.
  • The terms of service did not guarantee confidentiality and allowed retention or potential disclosure of inputs.
  • The materials were not prepared at the direction of counsel for the purpose of obtaining legal advice.
  • Simply sending AI-generated drafts to counsel after the fact did not, by itself, retroactively cloak them in privilege.

This is a fundamental point: privilege attaches to communications made in confidence for the purpose of seeking legal advice. When an employee enters sensitive facts into a third-party AI platform that disclaims confidentiality, that “confidence” is at best questionable. When those drafts are created independently of counsel’s direction, work-product arguments grow thin. The court did not create a new doctrine. It applied existing principles to new technology. That is precisely why this ruling is so important.

The Illusion of Confidentiality

Many business users treat AI platforms like a digital notebook. They assume that because the interaction occurs on a screen and feels private, it is private. That assumption is dangerous. Public and consumer AI platforms often reserve the right to store, analyze, or use inputs for service improvement. Even when vendors promise limited retention, those commitments may not meet the strict confidentiality standards necessary to preserve privilege. From a legal perspective, once you introduce a third party without adequate confidentiality protections, you risk waiving your rights.

The compliance lesson is blunt: generative AI is not your lawyer, and it is not your secure internal memo system. This is where governance intersects with culture. If employees are entering investigative summaries, draft responses to regulators, internal audit findings, or potential misconduct narratives into public AI tools, you are manufacturing discoverable evidence. That is not a hypothetical risk. That is now a litigated reality.

Why This Is a Board-Level Issue

The Department of Justice has made clear through the Evaluation of Corporate Compliance Programs (ECCP) that companies must identify and manage emerging risks. Artificial intelligence is no longer emerging. It is embedded in operations, marketing, finance, and legal workflows. The Heppner ruling converts AI usage from a technology convenience into a legal risk category. Boards of Directors should be asking:

  • Do we have an inventory of AI tools used across the enterprise?
  • Are employees permitted to input confidential, regulated, or legally sensitive information into third-party platforms?
  • Have we reviewed the vendor’s terms of service regarding confidentiality, retention, and data ownership?
  • Are legal and compliance functions involved in approving AI deployments?

If the answer to any of these questions is uncertain, there is a governance gap. AI governance is no longer solely about bias, explainability, or regulatory compliance. It is also about preserving privilege, managing litigation risk, and managing evidence.

Privilege Cannot Be Recreated After the Fact

One of the most significant aspects of the ruling is the rejection of “retroactive privilege.” Sending AI-generated content to counsel after it is created does not transform it into protected communication. This matters for compliance investigations. Consider the following scenario:

An internal report of potential misconduct surfaces. An employee uses a public AI tool to summarize the facts and generate possible legal arguments before reaching out to in-house counsel. That summary now exists outside any protected legal channel. The vendor may retain it. It may be discoverable.

By the time counsel becomes involved, the privilege damage may already be done. The message for compliance teams is clear: legal engagement must precede, or at least direct, sensitive analysis, not follow it.

Work Product Is Not a Safety Net

Some may argue that AI-assisted drafting in anticipation of litigation should fall under the work-product doctrine. The court in Heppner was not persuaded. Work-product protection generally applies to materials prepared by or for an attorney in anticipation of litigation. When individuals independently generate content using AI tools without counsel’s direction, that protection is far from guaranteed. Compliance professionals should not assume that labeling a document “prepared in anticipation of litigation” will insulate AI-generated material. Courts will look at substance over form.

Practical Steps for Compliance Leaders

This ruling demands operational response from every CCO. Here are some steps every compliance program should consider.

1. Treat Third-Party AI as Non-Confidential by Default

Unless you have a contractual, enterprise-level arrangement with robust confidentiality provisions and clear data controls, assume that information entered into a third-party AI platform is not protected. This default posture should be reflected in policy language.

2. Update Acceptable Use Policies

Your code of conduct and IT policies should explicitly address the use of generative AI. Prohibit the entry of:

  • Privileged communications.
  • Investigation details.
  • Personally identifiable information.
  • Trade secrets.
  • Sensitive regulatory communications.

Policy must move from general warnings to specific examples.

3. Involve Legal in AI Governance

AI procurement should not be a purely IT function. Legal and compliance must review vendor terms, especially around:

  • Data retention.
  • Subprocessor use.
  • Confidentiality obligations.
  • Audit rights.
  • Breach notification.

If you cannot articulate how your AI vendor protects inputs, you cannot defend privilege claims.

4. Implement Training That Reflects Real Risk

Annual compliance training should now include explicit guidance on AI usage. Employees should understand that entering confidential information into public AI tools can waive privilege and render it discoverable. Training should include practical scenarios. The objective is behavioral change, not abstract awareness.

5. Establish Secure AI Environments for Legal Work

If your organization intends to use AI in legal or investigative contexts, consider enterprise solutions that:

  • Operate within your controlled environment.
  • Restrict data sharing.
  • Provide contractual confidentiality.
  • Maintain clear audit logs.

Even then, legal oversight is essential. Secure does not automatically mean privileged.

6. Align with Litigation Hold Procedures

AI interaction logs may constitute discoverable material. Ensure that your litigation hold processes account for AI-generated content. If your organization logs prompts and outputs, those logs may fall within the scope of preservation obligations. Ignoring this dimension creates spoliation risk.

The Cultural Dimension

Technology adoption inside companies often outruns governance. Employees experiment. Business units optimize. Productivity improves. Compliance arrives later. That sequencing is no longer sustainable. The Heppner ruling should catalyze a shift from reactive to proactive governance. AI usage must be mapped, risk-ranked, and monitored, just as third-party intermediaries, high-risk markets, and financial controls are. If your risk assessment does not explicitly include generative AI, it is incomplete.

Connecting to the DOJ’s Expectations

The DOJ has repeatedly emphasized dynamic risk assessment. Artificial intelligence now clearly falls within the scope of corporate compliance evaluation. Prosecutors will not be sympathetic to arguments that “everyone was using it” or that policies were silent. They will ask:

  • Did the company identify AI as a risk area?
  • Did it implement controls?
  • Did it train employees?
  • Did it monitor usage?
  • Did it respond to incidents?

The answers to those questions will influence charging decisions, resolutions, and penalty calculations.

A Final Word: Convenience Versus Control

Generative AI is transformative. It enhances drafting, analysis, and research. It can elevate compliance operations if deployed thoughtfully. However, convenience without control is exposure. The lesson of United States v. Heppner is not that AI should be avoided. It is that AI must be governed with the same rigor as any other high-impact enterprise tool.

Privilege is fragile. Once waived, it cannot be restored. In a world where a chat prompt can become an exhibit, compliance professionals must lead the charge in redefining responsible AI use. If you are a chief compliance officer, this is your moment. Update your policies. Engage your board. Coordinate with legal and IT. Embed AI governance into your compliance framework. Because the next time an AI conversation surfaces in discovery, you do not want to explain why your program treated it like a harmless experiment.

Categories
Blog

AI and Work Intensification – The Compliance Response

There is a comforting myth circulating in corporate hallways and boardrooms: if we deploy AI across governance, risk, and compliance, the work will shrink. Investigations will move faster. Monitoring will get smarter. Policies will draft themselves. Third-party diligence will become push-button. The compliance function will finally “do more with less.” That myth was challenged in a recent Harvard Business Review article, “AI Doesn’t Reduce Work—It Intensifies It,” by Aruna Ranganathan and Xingqi Maggie Ye.

The authors argue that what happens instead is work intensification. AI expands throughput, increases expectations, and generates more outputs that still require human judgment, verification, and accountability. Instead of fewer tasks, you get more tasks. Instead of simpler work, you get faster cycles, more iterations, and new forms of quality risk. For the Chief Compliance Officer (CCO) leading AI governance, this is not a side effect. It is a core operating model issue.

If compliance owns AI governance across the enterprise, compliance must also own the discipline of how humans and AI work together. I call that discipline an AI practice standard, management guidance that sets expectations for pace, quality, verification, escalation, and sustainable workload.

Today, we consider this issue as a compliance operating model challenge across all GRC workflows: policy management, investigations, hotline intake, monitoring and surveillance, third-party due diligence, regulatory change management, audit planning, training, and reporting. The tone is cautionary because the risk is real: a compliance function that mistakes AI output volume for compliance effectiveness.

The Compliance Operating Model Problem: More Output, More Review, More Risk

Compliance work is not manufacturing. It is judgment work. It requires discretion, context, and defensible decisions. AI can accelerate inputs and draft outputs, but it does not accept responsibility. The CCO does. The business does. The board does. When AI enters GRC workflows, it tends to create four pressure points:

1. Compression of timelines. If a draft can be produced in five minutes, someone will ask why it cannot be finalized in five more.

2. Explosion of options. AI generates multiple versions, scenarios, and recommendations, which expands decision load and review cycles.

3. Higher volume of “signals.” AI-enabled monitoring produces more alerts, more pattern matches, and more anomalies. Much will be noise. All require triage.

4. Illusion of completion. Teams begin to treat a plausible AI answer as a finished work product. That is how quality defects are born.

The result is a compliance function that looks “faster” while becoming more fragile. Burnout rises. Rework increases. Errors creep into documentation. Controls become less reliable because the humans operating them are overwhelmed by the sheer volume AI makes possible.

All this means the question for the CCO is not, “How do we roll out AI?” The question is, “How do we govern the human work that AI intensifies?”

Five KPIs for Work Intensification Risk

Next, we consider five KPIs specifically designed to measure work intensification. These are board-credible, compliance-owned, and operationally measurable.

1. After-Hours Compliance Work Index

Percentage of compliance work activity occurring outside standard business hours (for example, 6 p.m. to 7 a.m.), measured across key systems (case management, GRC platform activity logs, email metadata, collaboration tool usage). This matters because AI compresses timelines and pushes work into nights and weekends. This index serves as an early warning for burnout and quality failures.

2. AI Rework Rate

Percentage of AI-assisted work products requiring material revision after human review (policies, investigation summaries, risk narratives, diligence reports). This matters because if AI increases speed but doubles rework, you are not gaining productivity. You are shifting effort downstream.

3. Cycle Time Compression vs. Quality Defect Ratio

Track cycle time reductions alongside quality defects (corrections, escalations, documentation gaps, audit findings). You can express this KPI as Cycle Time Improvement / Defect Increase.

This matters because faster is not better if defects rise. This ratio keeps leadership honest.
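One illustrative way to operationalize this ratio is sketched below. The formulation, function name, and sample figures are all assumptions; the article defines only the high-level idea of Cycle Time Improvement / Defect Increase.

```python
def compression_vs_defect_ratio(cycle_time_before: float, cycle_time_after: float,
                                defects_before: int, defects_after: int) -> float:
    """Cycle Time Improvement / Defect Increase (one possible formulation).
    A ratio above 1.0 suggests speed gains are outpacing new quality defects."""
    improvement = (cycle_time_before - cycle_time_after) / cycle_time_before
    defect_increase = max(defects_after - defects_before, 0) / max(defects_before, 1)
    if defect_increase == 0:
        return float("inf")  # faster with no new defects: unambiguously good
    return round(improvement / defect_increase, 2)

# Hypothetical quarter: investigations close 40% faster (30 days down to 18),
# but quality defects rose from 10 to 14.
print(compression_vs_defect_ratio(30, 18, 10, 14))  # → 1.0
```

In the example, a 40% cycle time improvement is exactly offset by a 40% rise in defects, yielding a ratio of 1.0: the function is not actually getting better, just faster. That is the kind of honesty this KPI is meant to force.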

4. Alert-to-Action Conversion Rate

Percentage of AI-generated alerts that result in a confirmed issue, investigation, remediation, or control enhancement. This matters because AI intensifies monitoring. This KPI exposes whether you are drowning in noise or generating actionable intelligence.

5. Burnout Signal Composite

A quarterly composite score built from pulse-survey measures (fatigue, workload, autonomy), attrition in compliance roles, sick leave usage trends, and employee assistance program utilization patterns. This matters because compliance effectiveness depends on people. Burnout is a control failure risk.

These five metrics give the CCO and board a shared view of whether AI is improving the compliance function or simply accelerating it toward exhaustion.

How to Measure the Leading Indicators

Measuring after-hours work, cycle time, quality defects, and burnout indicators requires instrumentation. Here is a measurement approach that is realistic and defensible.

After-Hours Work

  • Use system log data from the case management, GRC, and document management platforms to track timestamped activity.
  • Supplement with email and collaboration metadata to measure volume outside standard hours.
  • Report trends by team and workflow, not individuals. This is about operating model health, not surveillance.

Cycle Time

  • Establish “start” and “stop” definitions for each workflow:
    • Investigations: intake date to closure date
    • Due diligence: request date to clearance date
    • Policy updates: drafting start date to published version
    • Regulatory change: trigger identification to implementation
  • Track AI-assisted versus non-AI-assisted cycle times to isolate the impact.

Quality Defects

  • Define defects as “items requiring material correction after initial completion,” including:
    • Incomplete documentation
    • Wrong risk rating or missing rationale
    • Incorrect regulatory mapping
    • Reopened cases due to insufficient analysis
    • Audit findings tied to workflow execution
  • Capture defects through QA sampling, supervisor review logs, audit results, and post-incident reviews.

Burnout Indicators

  • Run a quarterly pulse survey with 5–7 questions on workload, pace, clarity, and ability to disconnect.
  • Track voluntary attrition and vacancy duration for compliance roles.
  • Include aggregate HR indicators such as overtime trends or sick leave usage, where available.
  • Use a composite score and trend it. The trend line is what matters.
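One way to build that composite is to normalize each indicator against a healthy-to-at-risk band, then take a weighted average. The indicator names, weights, and bands below are illustrative assumptions; the point is a single trendable number.

```python
# Weights are illustrative, not recommended values
WEIGHTS = {
    "pulse_fatigue": 0.35,     # mean survey score, 1-5 (higher = worse)
    "attrition_rate": 0.25,    # trailing-12-month voluntary attrition
    "sick_leave_delta": 0.20,  # % change vs prior year
    "eap_utilization": 0.20,   # % of team using the assistance program
}


def normalize(value, low, high):
    """Map value onto 0-100, clamped; low -> 0 (healthy), high -> 100 (at risk)."""
    return max(0.0, min(100.0, 100.0 * (value - low) / (high - low)))


def burnout_composite(indicators, bands):
    return sum(WEIGHTS[k] * normalize(indicators[k], *bands[k]) for k in WEIGHTS)


bands = {
    "pulse_fatigue": (1.0, 5.0),
    "attrition_rate": (0.0, 0.30),
    "sick_leave_delta": (0.0, 0.50),
    "eap_utilization": (0.0, 0.25),
}
q1 = {"pulse_fatigue": 3.0, "attrition_rate": 0.15,
      "sick_leave_delta": 0.10, "eap_utilization": 0.05}
print(round(burnout_composite(q1, bands), 1))  # 38.0
```

The absolute score matters less than quarter-over-quarter movement, which is why the text emphasizes the trend line.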

The key is to build instrumentation without creating a culture of monitoring employees. Your goal is not to watch people. Your goal is to protect the control environment.

Adopt an Enterprise AI Practice Standard Now

For an innovation-forward company, the right move is not to slow down. The right move is to govern how you speed up. The call to action is simple and strong: adopt an enterprise AI practice standard as management guidance, owned by Compliance, implemented across all GRC workflows, measured by the five work-intensification KPIs, and tested by internal audit and red teaming.

If you do that, you gain three things immediately:

1. A sustainable operating model

2. Defensible governance for regulators and boards

3. A compliance function that remains credible under pressure

AI can make compliance better. But only if the humans who run compliance can still breathe.

Categories
Blog

From Principle to Proof: Operationalizing AI Governance Through the ECCP and NIST

Artificial intelligence governance has officially crossed the threshold from theory to expectation. The Department of Justice has not issued a standalone “AI rulebook,” but it has given compliance professionals a framework for considering the issue: the 2024 Evaluation of Corporate Compliance Programs (ECCP). In this version of the ECCP, the DOJ made clear that any technology capable of creating material business risk must be governed, monitored, and improved like any other compliance risk. That includes artificial intelligence.

Too many organizations still treat AI governance as an ethics exercise, a technical problem, or a future concern. That posture is not defensible. The DOJ does not ask whether your program is fashionable or aspirational. It asks three very old-fashioned questions: Is your compliance program well designed? Is it applied in good faith? Does it work in practice? Those questions apply with full force to AI.

In this post, I want to move the discussion from abstract frameworks to operational reality. I will show how compliance professionals can use the ECCP to structure AI governance, select board-grade KPIs, and demonstrate effectiveness in a way regulators understand. I will also show how the NIST AI Risk Management Framework (NIST Framework) fits neatly underneath this structure as an operating model, not a competing philosophy.

AI Governance Is Already an ECCP Issue

The DOJ has repeatedly emphasized that compliance programs must evolve as business risks evolve. Artificial intelligence is not a future risk. It is already embedded in pricing, hiring, credit decisions, customer interactions, fraud detection, and third-party screening. If an AI model can influence revenue, customer outcomes, or regulatory exposure, it is a compliance risk. Period.

The ECCP does not require companies to eliminate risk. It requires them to identify, assess, manage, and learn from it. AI governance, therefore, belongs squarely inside the compliance program, not off to the side in an innovation lab or technology committee.

The ECCP as an AI Governance Blueprint

The power of the ECCP is its simplicity. Every enforcement action ultimately traces back to the same three questions. Let us apply them directly to AI.

Is the Program Well Designed?

Design begins with risk assessment. If your organization cannot answer a basic question such as “What AI systems do we have, who owns them, and what decisions do they influence?”, you do not have a program. You have hope. A well-designed AI compliance program starts with an AI asset inventory that identifies models, tools, vendors, and use cases. Each asset must be risk-classified based on business impact, regulatory exposure, and potential harm.

Board-level KPIs here are coverage metrics. How many AI assets have been identified? What percentage has been risk-classified? How many high-impact models have completed an impact assessment before deployment? If your dashboard does not show near-full coverage, the design is incomplete.
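The coverage metrics above can be computed directly from the asset inventory. A minimal sketch, with assumed field names and an illustrative inventory:

```python
def coverage_kpis(inventory):
    """inventory: dicts with 'name', 'risk_class', 'impact_assessed'.
    Returns the three board-level coverage metrics described above."""
    total = len(inventory)
    classified = sum(1 for a in inventory if a.get("risk_class"))
    high = [a for a in inventory if a.get("risk_class") == "high"]
    assessed = sum(1 for a in high if a.get("impact_assessed"))
    return {
        "assets_identified": total,
        "pct_risk_classified": round(100 * classified / total, 1) if total else 0.0,
        "pct_high_impact_assessed": round(100 * assessed / len(high), 1) if high else 100.0,
    }


inventory = [
    {"name": "pricing_model", "risk_class": "high", "impact_assessed": True},
    {"name": "support_chatbot", "risk_class": "medium", "impact_assessed": False},
    {"name": "resume_screener", "risk_class": None},  # not yet classified
]
print(coverage_kpis(inventory))
# {'assets_identified': 3, 'pct_risk_classified': 66.7, 'pct_high_impact_assessed': 100.0}
```

A dashboard built this way makes the design gap visible: the unclassified asset drags the coverage percentage down until someone owns it.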

Policies and procedures come next. The DOJ does not care how many policies you have. It cares whether they provide clear guidance for real decisions. AI policies should cover the full lifecycle, from design and data sourcing through deployment, monitoring, and retirement. A practical KPI is policy coverage. What percentage of AI assets operate under current, approved procedures? How often are those procedures refreshed? Annual updates are a reasonable baseline in a rapidly changing risk environment.

Is the Program Applied Earnestly and in Good Faith?

Good faith is demonstrated through action, not intent. Training is a central indicator. The DOJ expects role-based training tailored to actual risk. A generic AI awareness course does not meet this standard. Developers, model owners, compliance reviewers, and business leaders all require different training. Completion rates matter, but so does comprehension. Measuring post-training proficiency improvement is one of the clearest signals that training is more than a box-checking exercise.

Third-party risk management is another critical area. Many organizations rely on external models, data providers, or AI-enabled vendors. If you do not understand how those tools are built, governed, and updated, you are importing risk without controls. Strong programs use standardized AI diligence questionnaires, assign assurance scores, and require contractual safeguards for high-risk vendors. A board-ready KPI here is the percentage of high-risk AI vendors subject to enhanced diligence and contractual controls.

Mergers and acquisitions deserve special attention. AI risk does not wait for post-close integration. The DOJ has been explicit that pre-acquisition diligence matters. A defensible KPI is simple and unforgiving. 100% of acquisition targets with material AI usage must undergo AI due diligence before closing. Anything less invites inherited risk.

Does the Program Work in Practice?

This is where many programs fail. Paper controls do not impress regulators. Outcomes do. Incident reporting is a critical signal. A low number of reported AI issues may indicate fear, confusion, or a lack of awareness rather than genuine safety. What matters is whether issues are identified, investigated, and resolved promptly. Mean time to investigate is a powerful metric. If AI-related concerns take months to resolve, the program is not working. Clear escalation paths, defined investigation playbooks, and documented root cause analysis are essential.

Continuous monitoring is equally important. High-risk AI systems must be monitored for performance drift, data changes, and unintended outcomes. The DOJ expects companies to use data analytics to test whether controls are functioning. KPIs here include validation pass rates before deployment, drift-detection coverage for critical models, and corrective action closure rates. These are not technical vanity metrics. They are evidence of effectiveness.

Where NIST Fits and Why It Matters

The NIST AI Risk Management Framework does not compete with the ECCP. It operationalizes it. The ECCP tells you what regulators expect. NIST helps you implement those expectations across governance, mapping, measurement, and management. For example, ECCP risk assessment aligns with NIST’s mapping function. ECCP’s continuous improvement aligns with NIST’s measurement and management functions. Using NIST terminology creates a shared language across compliance, legal, security, and data science teams. That shared language is governance in action.

Reporting AI Risk to the Board

Boards do not want technical detail. They want assurance. The most effective AI governance dashboards focus on a small set of indicators that answer the DOJ’s three questions: coverage, quality, responsiveness, and learning. Examples include the percentage of AI assets risk-classified, validation pass rates, investigation cycle times, and corrective action closure rates. When these metrics move in the right direction, they tell a credible story of control. More importantly, they show that compliance is not reacting to AI. It is governing it.

Five Key Takeaways for Compliance Professionals

  1. AI as Risk. Artificial intelligence is already within the scope of the ECCP. If AI can influence business outcomes, it must be governed like any other compliance risk.
  2. Risk Management Program. A well-designed AI compliance program begins with complete asset identification and risk classification. Coverage metrics are the first signal regulators will examine.
  3. Implementation. Good faith implementation is demonstrated through role-based training, disciplined third-party oversight, and pre-acquisition AI diligence. Intent without execution does not count.
  4. Outcomes, not Inputs. Effectiveness is proven through outcomes. Investigation speed, monitoring coverage, and corrective action closure rates matter more than policy volume.
  5. Complementary. The NIST Framework complements the ECCP by providing an operating model that compliance, legal, and technical teams can share. Together, they turn principles into proof.

Final Thoughts

AI governance is not about predicting the future. It is about demonstrating discipline in the present. The DOJ is not asking compliance professionals to become data scientists. It is asking us to do what we have always done well: identify risk, establish controls, test effectiveness, and improve continuously. The ECCP already gives you the framework. The only question is whether you will apply it.

Categories
From the Editor's Desk

From the Editor’s Desk – Aaron Nicodemus on the CW AI Conference Insights: Navigating the Practical Use of AI in Compliance

In this episode of ‘From the Editor’s Desk,’ Tom Fox visits with Aaron Nicodemus to discuss highlights from the recent Compliance Week AI Conference. Key takeaways include the importance of understanding the purpose and practical use of AI tools before implementation, the pressures from C-suite and boards to adopt AI, and the necessity of a human-in-the-loop approach. The conversation also touches on integrating trust and integrity into AI adoption, the evolving role of compliance as a trusted partner in AI initiatives, and the collective willingness to learn and apply AI across compliance operations.

Key highlights:

  • Importance of Understanding AI Implementation
  • Pressure from the Top: Compliance and AI
  • Human Oversight in AI Processes
  • Trust and Integrity in AI
  • Compliance as a Competitive Advantage
  • Real-World Examples: Robinhood and DocuSign
  • The Evolving Role of Compliance in AI
  • Conference Vibes and Final Thoughts

Resources:

Aaron Nicodemus on LinkedIn

Compliance Week