AI Governance and Fiduciary Duty: Board Oversight of AI As Core Governance

There was a time when boards could treat AI as a management-side innovation issue, something for the technology team, the innovation committee, or perhaps an occasional strategy offsite. That time is ending. For compliance professionals, AI is no longer a technology story; it is a governance story. And once it becomes a governance story, boards need to pay attention through the lens they know best: fiduciary duty.

The issue is not whether every director needs to become an engineer. They do not. The issue is whether the board is exercising appropriate oversight over a capability that can materially affect legal exposure, operational resilience, internal controls, reputation, and enterprise value. Under that lens, ignoring AI oversight begins to look less like prudence and more like a governance gap.

The Board Question Is No Longer “Do We Use AI?”

Too many board discussions still start in the wrong place. A director asks, “Are we using AI?” Management says yes, in a handful of pilots. Another director asks whether there is a policy. Legal says yes, one is being drafted. Everyone nods, reassured that the matter is under control. That is not oversight. That is atmospherics.

The real board questions are different. Where is AI being used? What decisions does it influence? What data does it rely on? Who owns it? How is risk assessed? What controls are in place? What gets reported upward when something changes or goes wrong?

COSO’s GenAI guidance is quite direct on this point. It states that the board of directors must have visibility into GenAI use and associated risks, including regular reporting on adoption, key risk indicators, incidents, and material changes to high-impact use cases. It also says oversight bodies should have the capacity to challenge assumptions, request independent validation, and direct corrective action.

Fiduciary Duty Means Oversight, Not Technical Mastery

The fiduciary duty standard is more practical and more familiar than technical mastery. Directors are expected to exercise informed oversight over material risk. If AI is shaping material processes, material decisions, or material exposures, then the board should be asking how management is governing it and what evidence supports that confidence.

This is where compliance can be a true translator. We understand how to connect abstract governance expectations to operational proof. We know the difference between having a policy and having a control. We know that a dashboard without escalation is theater. We know that a pilot without documentation is an anecdote. And we know that “the business owns it” is not enough unless ownership is defined, trained, monitored, and accountable.

COSO again gives a helpful framework. It emphasizes clear ownership of each GenAI tool, platform, or capability, with defined authority, escalation paths, and documented scope of use. It further stresses that assigning ownership without capability invites failure, and that accountability should be tied not only to adoption but also to accuracy, safety, compliance, and adherence to controls. Boards do not need to run AI. But they do need assurance that someone competent owns it and that the ownership model is real.

Why AI Oversight Is Different from Ordinary IT Oversight

Some directors may be tempted to ask whether this is simply another version of cybersecurity or digital transformation oversight. There is overlap, certainly, but AI presents a different governance profile. COSO notes several characteristics that make GenAI distinct. It is dynamic: models, prompts, and retrieval data can change frequently, requiring continuous risk assessment, change control, and monitoring. It is easily scalable, which means it can scale errors and bias as readily as it scales efficiency. It has a low barrier to entry, which increases the risk of shadow AI and ungoverned adoption. And critically, it can be confidently wrong.

That last point is especially important for boards. A broken machine usually signals that it is broken. AI often does the opposite. It produces polished, persuasive, and highly plausible output even when it is materially mistaken. That means traditional management confidence can be a weak proxy for actual reliability. Boards therefore need a different kind of assurance model, one that asks not only whether the system is in place, but whether the organization can validate outputs, explain limitations, monitor drift, and intervene when use cases expand beyond what was originally approved.
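
To make that concrete, here is a minimal sketch, in Python, of what "validating outputs rather than trusting them" can look like in practice: a recurring golden-set check with a defined escalation path. Every name here (GoldenCase, ask_model, the 95 percent floor) is an illustrative assumption, not COSO language or any real API.

    # A rough illustration of the "validate, don't trust" idea: management
    # re-runs a reviewed "golden set" of inputs through the model on a
    # schedule and escalates when accuracy drifts below an agreed floor.
    # All names are hypothetical stand-ins for whatever system is in use.

    from dataclasses import dataclass

    @dataclass
    class GoldenCase:
        prompt: str
        approved_answer: str  # previously validated by a human reviewer

    def ask_model(prompt: str) -> str:
        """Stand-in for the real model or vendor endpoint."""
        return "..."  # hypothetical call

    ACCURACY_FLOOR = 0.95  # agreed with the risk owner, reported upward

    def validation_run(golden_set: list[GoldenCase]) -> float:
        """Score today's outputs against the previously approved answers."""
        hits = sum(1 for case in golden_set
                   if ask_model(case.prompt).strip() == case.approved_answer)
        return hits / len(golden_set)

    def escalate(message: str) -> None:
        print("ESCALATION:", message)  # placeholder for the real path

    def review(golden_set: list[GoldenCase]) -> None:
        accuracy = validation_run(golden_set)
        if accuracy < ACCURACY_FLOOR:
            # Drift below the floor is an incident, not a footnote: it
            # enters the defined reporting structure, not an ad hoc fix.
            escalate(f"Golden-set accuracy {accuracy:.1%} is below floor")

    # With the stub above, accuracy is 100% and nothing escalates.
    review([GoldenCase("What is our standard payment term?", "...")])

The point is not the code. The point is that "we validate outputs" becomes a repeatable control with a number, a cadence, and an escalation path a board can ask about.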

The Governance Gap Boards Must Avoid

Here is where the fiduciary-duty lens becomes especially useful. The governance failure in the AI era is unlikely to be that a board never heard the term “AI.” Every board in America has heard it. The failure is more likely to be subtler and therefore more dangerous: the board heard about AI in broad strategic terms but never built a repeatable oversight mechanism around it.

That is the governance gap.

It shows up when management reports adoption but not risk classification.
It shows up when directors hear about productivity gains but not control failures.
It shows up when there is an AI policy but no inventory of use cases.
It shows up when there is enthusiasm about innovation but no discussion of third-party dependencies, data quality, escalation paths, or human review.
It shows up when incidents are handled ad hoc rather than through a defined reporting structure.

COSO warns that rapid iteration can outpace existing processes and that prompts, thresholds, and retrieval connectors are critical configuration elements requiring the same rigor as other controlled system settings. It also highlights third-party and vendor risk, noting that outsourced GenAI capabilities can limit visibility into training data, model updates, data handling, and underlying controls.

In other words, the board should not assume AI risk is contained simply because a vendor is involved or because the tool sits inside a familiar enterprise platform. If anything, that should sharpen the oversight question.
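
COSO's configuration point is easy to under-read, so a rough sketch may help. Assuming a record layout I am inventing purely for illustration, treating a GenAI deployment's prompt, thresholds, and retrieval connectors as controlled settings, with an owner, a version, and an approval record, might look something like this in Python:

    # Illustrative only: prompts, thresholds, and retrieval connectors held
    # as versioned, approved configuration rather than free-floating text
    # anyone can edit. Field names and values are assumptions, not a standard.

    from dataclasses import dataclass

    @dataclass(frozen=True)  # frozen: a change means a new, re-approved version
    class GenAIConfig:
        use_case: str
        version: str
        owner: str                          # accountable owner with authority
        system_prompt: str                  # under change control, like any setting
        confidence_threshold: float         # below this, route to human review
        retrieval_sources: tuple[str, ...]  # approved connectors only
        approved_by: str
        approved_on: str

    current = GenAIConfig(
        use_case="vendor-contract-summarization",
        version="1.3.0",
        owner="Head of Procurement Operations",
        system_prompt="Summarize the contract and flag any indemnity clause.",
        confidence_threshold=0.80,
        retrieval_sources=("contract-repository",),
        approved_by="AI Review Forum",
        approved_on="2025-01-15",
    )

    print(current.use_case, current.version, "approved by", current.approved_by)

When the prompt or a connector changes, the version changes, the approval record changes, and the board-level reporting on material changes has something concrete to draw from.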

What Good Board Oversight Looks Like

The good news is that effective AI oversight is not mystical. It looks a great deal like good oversight in other high-risk areas. It is structured, periodic, evidence-based, and tied to accountability. At a minimum, boards should expect management to provide five things.

  1. An inventory of material AI use cases, categorized by risk and business impact (a minimal sketch of such an inventory follows this list).
  2. A governance structure that identifies owners, review forums, escalation paths, and the role of compliance, legal, risk, audit, and technology.
  3. Clear policies and boundaries around acceptable use, prohibited data, high-impact decisions, and when human review is mandatory.
  4. Meaningful reporting. Not just adoption statistics, but risk indicators, incidents, model or vendor changes, validation results, and material control exceptions.
  5. A remediation and monitoring process that reflects the dynamic nature of AI.
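
As referenced in item 1, here is a minimal sketch of what one inventory record might contain, in Python and with fields I am assuming for illustration; a real inventory would follow the organization's own risk taxonomy.

    # Illustrative only: one record per material AI use case, tied to a named
    # owner and a risk tier. Fields and tiers are assumptions, not a standard.

    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        LOW = "low"
        MEDIUM = "medium"
        HIGH = "high"  # high-impact decisions: human review mandatory

    @dataclass
    class AIUseCase:
        name: str
        business_process: str  # what decision or process the model influences
        data_relied_on: str    # what data it uses
        owner: str             # a named owner, not just "the business"
        risk_tier: RiskTier
        third_party: bool      # vendor-hosted, with limited visibility
        human_review_required: bool
        last_validated: str

    inventory = [
        AIUseCase(
            name="claims-triage-assistant",
            business_process="Initial routing of incoming claims",
            data_relied_on="Claim narratives, policy records",
            owner="VP Claims Operations",
            risk_tier=RiskTier.HIGH,
            third_party=True,
            human_review_required=True,
            last_validated="2025-02-01",
        ),
    ]

    # Board reporting becomes a query over the inventory, not an anecdote.
    high_risk = [u.name for u in inventory if u.risk_tier is RiskTier.HIGH]
    print("High-risk use cases:", high_risk)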

That is consistent with COSO’s broader framework, which stresses alignment with organizational goals and risk appetite, the use of relevant information, internal communication, ongoing evaluations, and the communication of deficiencies. This is where I would encourage boards to think less in terms of “AI briefings” and more in terms of “AI oversight cadence.” A one-time presentation is not governance. A recurring structure is.

The Board Does Not Need More Hype. It Needs Evidence.

One of the risks in the current market is that AI discussions are still drenched in promotional language. Faster. Smarter. More innovative. Transformational. Useful words, perhaps, but not enough for a board discharging fiduciary obligations.

Boards need evidence. This is where the compliance function can shine. Compliance professionals know how to convert aspiration into evidence. We know how to build a record showing that oversight is not merely claimed, but exercised.

And make no mistake, documentation matters. Structured communication and clear records are essential for reconstructing decisions, demonstrating accountability, and supporting regulatory or audit review. That principle runs through effective compliance practice generally and becomes even more important in AI governance, where organizations must often explain not only what decision was made, but how the process was overseen.

Five Questions Every Board Should Ask Now

If I were advising a board chair or audit committee chair, I would start with five questions.

  1. What are our highest-risk AI use cases, and who owns each one?
  2. What information does the board receive regularly about AI adoption, incidents, and material changes?
  3. How do we know management is validating AI outputs and not simply trusting them?
  4. Where are third-party AI tools embedded in our environment, and what visibility do we have into those risks?
  5. What evidence would we produce tomorrow if a regulator, auditor, or shareholder asked how this board oversees AI?

Those questions do not require the board to become technical. They require the board to become disciplined.

The Bottom Line

AI governance is moving quickly from optional good practice to expected governance hygiene. That is the real message boards need to hear. Under a fiduciary-duty lens, the challenge is straightforward. Directors do not need to be AI developers. But they do need to ensure that management has built a credible system for identifying, governing, monitoring, and escalating AI risk. When AI touches material business processes, board silence is not neutrality. It is exposure.

The companies that get this right will not be the ones that talk most loudly about innovation. They will be the ones whose boards insist on visibility, accountability, evidence, and follow-through. That is not anti-innovation. That is governance doing its job.