Directors and AI: Do’s, Don’ts, and Compliance Lessons

Artificial intelligence (AI) has rapidly become embedded in the daily workflows of executives, employees, and, increasingly, board directors. From drafting strategy summaries to analyzing industry data, directors are turning to AI chatbots and transcription tools in the same way they once adopted email, spreadsheets, or virtual board portals. However, unlike those earlier technologies, AI presents new risks, and for directors, these risks intersect directly with fiduciary duties and corporate governance obligations.

A recent memorandum by Skadden, Arps, Slate, Meagher & Flom LLP, published through the Harvard Law School Forum on Corporate Governance, outlines practical dos and don’ts for directors using AI in their board roles. The message is clear: while AI offers great promise, directors must use it with caution. For compliance professionals, this guidance provides important lessons not only for boardrooms but also for the governance structures that surround them.

The Temptation of AI in the Boardroom

Boards are expected to absorb massive amounts of information, including financial results, strategy papers, compliance reports, and cybersecurity dashboards, often under tight timelines. It is easy to see why a director might feed these materials into an AI tool to produce summaries or ask for red flags. Similarly, transcription services appear attractive for documenting complex board meetings and discussions. But here lies the trap: not all AI tools are created equal. Publicly available chatbots often train on user inputs, meaning that confidential board information could be incorporated into the system and potentially regurgitated to other users, including competitors.

Just as you would never allow directors to send board books through unsecured email, AI tools need guardrails.

Key Risks Identified in the Director’s Guide

The Skadden memorandum outlines several risks directors must consider when using AI in their corporate capacities:

  1. Confidentiality and Data Leakage – Uploading sensitive materials into public AI systems risks exposing trade secrets or personal data. Even if the information is deleted from a user’s history, the AI vendor may still retain and train on it.
  2. Discovery and Litigation Risks – AI chats are records. Like emails, they may be discoverable in litigation or regulatory reviews. Regulators could demand access to AI interactions if they involve matters under scrutiny, such as antitrust reviews of mergers and acquisitions (M&A) activity.
  3. Loss of Privilege – Using AI to transcribe board meetings or communications with counsel risks waiving attorney-client privilege. Once third parties have access, privilege may be lost forever.
  4. Accuracy and Hallucinations – AI outputs can be wrong, biased, or outdated. Treating AI results as authoritative without verification exposes directors to poor decision-making and potential breaches of fiduciary duties.
  5. Erosion of Human Judgment – Over-reliance on AI to make HR, strategy, or other critical decisions risks abdicating the duties of care and loyalty. Directors must remain firmly "in the loop."

Compliance Lessons for Professionals

From these risks, we can distill key lessons for compliance officers advising boards and executives on AI governance.

1. Confidential Information Must Stay Inside the Perimeter

Compliance professionals should establish clear rules: no uploading of board materials, personal data, or trade secrets into public AI tools. Instead, direct the board to company-approved platforms that are vetted for security and configured to prevent training on sensitive inputs. This is not just a best practice; it may also be required to comply with contractual obligations, privacy laws, and internal data-protection policies.

2. Treat AI Chats as Discoverable Records

Boards should assume that anything shared with AI may one day be discoverable by others. Compliance professionals must include AI chats and transcripts in records-retention policies and advise directors to avoid discussing sensitive legal or competitive issues in public AI systems. This lesson mirrors earlier corporate missteps with text messages and messaging apps. AI is the new frontier for discoverability.

3. Preserve Privilege by Avoiding AI for Legal Matters

Directors must not use AI to transcribe board meetings or record privileged discussions with counsel, as doing so risks waiving attorney-client privilege. Once third parties have access, privilege may be lost forever. Compliance officers should make this an explicit policy: approved transcription tools may be used for training sessions or customer service calls, but never for board-level deliberations. Losing privilege could cripple a company's defense in litigation, a point worth hammering home during board training.

4. Verify Before You Trust

AI has a well-documented tendency to “hallucinate.” Directors must be reminded: AI is not a single source of truth. Compliance programs should emphasize verification. Encourage directors to cross-check AI outputs against trusted sources and ensure management reviews AI-generated analyses before relying on them for decision-making.

5. AI Is a Tool, Not a Decision-Maker

The most important compliance lesson: AI augments but does not replace human judgment. Directors remain bound by duties of care and loyalty. Compliance professionals must make clear that delegating decision-making to AI tools could not only harm the company but also expose directors to personal liability.

Building a Compliance Framework for Board Use of AI

The Skadden guide closes by urging boards to develop clear policies for AI use, including approved tools, acceptable uses, and required disclosures. For compliance officers, this is an opportunity to lead.

Here are key framework elements to consider:

  • Approved Tools List – Maintain a list of AI platforms validated by IT and legal for security and compliance.
  • Acceptable Use Policy – Define when and how directors may use AI (e.g., industry research, summarizing public filings) versus prohibited uses (e.g., uploading board decks, transcribing meetings).
  • Training and Awareness – Provide directors with training on AI risks, including confidentiality, discoverability, and hallucinations.
  • Monitoring and Audit – Periodically review the use of AI by directors to ensure compliance with relevant policies and regulations.
  • Disclosure Requirements – Require directors to disclose if AI tools were used to generate or summarize board-related materials.
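To make these framework elements concrete, the approved-tools list, acceptable-use rules, and audit trail could be wired together as a simple pre-use check. The sketch below is purely illustrative: the tool names, use categories, and function are hypothetical assumptions, not part of the Skadden guidance, and a real implementation would draw its policy data from a governed configuration store rather than hard-coded sets.

```python
# Illustrative guardrail tying together an approved-tools list, an
# acceptable-use policy, and an audit trail. All names are hypothetical.

APPROVED_TOOLS = {"enterprise-assistant"}  # platforms vetted by IT and legal

PERMITTED_USES = {"industry_research", "summarize_public_filing"}
PROHIBITED_USES = {"upload_board_deck", "transcribe_meeting"}

audit_log = []  # stand-in for a monitoring/audit record


def check_ai_request(tool: str, use: str) -> bool:
    """Allow a request only if the tool is approved and the use is
    permitted; record every decision for periodic audit review."""
    allowed = (
        tool in APPROVED_TOOLS
        and use in PERMITTED_USES
        and use not in PROHIBITED_USES
    )
    audit_log.append({"tool": tool, "use": use, "allowed": allowed})
    return allowed
```

Under this sketch, a director summarizing a public filing through the vetted platform passes the check, while uploading a board deck, or using an unapproved public chatbot for anything, is blocked and logged for the audit review described above.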

Final Thoughts

The “Do’s and Don’ts of Using AI” is a timely reminder: AI governance is not only about company-wide adoption. It also starts at the top, with the board itself. Directors tempted to use AI in their own roles face unique risks that could compromise confidentiality, destroy privilege, or erode fiduciary oversight.

For compliance professionals, this presents an opportunity to serve as both educator and enforcer. Just as compliance led the charge on insider trading policies, conflicts of interest, and anti-bribery training, so too must we lead on AI governance.

The bottom line is that AI can be an extraordinary tool for directors. But without compliance guardrails, it can also be a governance trap. Our role is to ensure the boardroom and the company stay on the right side of that line.
