Categories
Everything Compliance

Everything Compliance: Episode 160, The What Next Edition

Welcome to this edition of the award-winning Everything Compliance. In this episode, we have the complete quintet of Matt Kelly, Jonathan Marks, Jonathan Armstrong, Karen Woody, and Karen Moore, with Tom Fox, the Compliance Evangelist, sitting in as host.

  1. Matt Kelly looks at the doxing of corporate employees in the wake of the Charlie Kirk shooting. He shouts out to Boston mayoral candidate Josh Craft, who bowed out of the race.
  2. Jonathan Marks delves into the details of a fraud risk analysis. He shouts out to Sheinelle Jones, all those who lost loved ones to cancer, and cancer victim caregivers.
  3. Jonathan Armstrong discusses the current problem of inadvertently hiring North Koreans. He shouts out to the Grand Ole Opry.
  4. Karen Moore delves deeply into accent bias. She rants about ABC and Disney’s decision to suspend Jimmy Kimmel.
  5. Karen Woody examines the President’s call to switch to semi-annual financial reporting, as opposed to quarterly. She shouts out to the Netflix show Adolescence, which swept the Emmys.
  6. Tom Fox shouts out the Community Foundation of the Hill Country, which took in over $100 million in donations for victims of the July 4 flood in just 30 days.

The members of Everything Compliance are Matt Kelly, Jonathan Marks, Jonathan Armstrong, Karen Woody, and Karen Moore.

The host, producer, and sometimes panelist of Everything Compliance is Tom Fox, the Voice of Compliance. He can be reached at tfox@tfoxlaw.com. The award-winning Everything Compliance is a part of the Compliance Podcast Network.

Categories
It's Art

It’s Art, Let’s Talk About It – The Artistic Journey of Kevin Macpherson: From Illustrator to Renowned Painter

The Museum of Western Art is dedicated to excellence in the collection, preservation, and promotion of Western Heritage and the education and cultural enrichment of our diverse audiences. The Museum serves as a bridge between the past and the present, ensuring that the legacy of the American West is preserved for future generations. Western Art is as engaging and important as ever. In this award-winning podcast series, Museum Executive Director Darrell Beauchamp welcomes Kevin Macpherson.  

They discuss Kevin’s long-standing friendship with Walt Gonski and his journey in the art world. Kevin shares his early beginnings, transitioning from an illustrator to a fine artist, and how his passion for landscape painting developed. They delve into the details of Kevin’s well-known ‘Pond Series,’ his teaching experiences, and the impact of global travels on his work. This episode provides an insightful look into Kevin’s career and his contributions to the art community. 

Highlights include:

  • Kevin Macpherson’s Early Art Journey
  • Life in Taos and Artistic Growth
  • The Pond Series
  • Journey as an Author
  • Advice for Aspiring Artists

Resources:

Museum of Western Art

Darrell Beauchamp on LinkedIn

Kevin Macpherson Fine Art Website

Categories
Compliance Tip of the Day

Compliance Tip of the Day – The Integrity Audit

Welcome to “Compliance Tip of the Day,” the podcast that brings you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, our goal is to provide you with bite-sized, actionable tips to help you stay ahead in your compliance efforts. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

This week, we have a 5-part series on audits adjacent to compliance, and today, we explore Part 4 and consider the Integrity Audit.

For more on this topic, check out The Compliance Handbook: A Guide to Operationalizing Your Compliance Program, 6th edition, recently released by LexisNexis. It is available here.

Categories
Daily Compliance News

Daily Compliance News: September 25, 2025, The Learning a (Trump) Business Lesson Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, including compliance, ethics, risk management, leadership, or general interest, relevant to the compliance professional.

Top stories include:

  • Being a Yes Man for Trump can be bad for business. (Bloomberg)
  • Why do you need cyber breach insurance? (FT)
  • BOA resolves criminal investigation. (Reuters)
  • Indian workers often look to other countries for employment opportunities. (NYT)
Categories
AI Today in 5

AI Today in 5: September 25, 2025, The Red Lines for AI Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI. To start your day, sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest related to AI.

Top AI stories include:

For more information on the use of AI in compliance programs, check out my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.

Categories
Blog

Directors and AI: Do’s, Don’ts, and Compliance Lessons

Artificial intelligence (AI) has rapidly become embedded in the daily workflows of executives, employees, and, increasingly, board directors. From drafting strategy summaries to analyzing industry data, directors are turning to AI chatbots and transcription tools in the same way they once adopted email, spreadsheets, or virtual board portals. However, unlike those earlier technologies, AI presents new risks, and for directors, these risks intersect directly with fiduciary duties and corporate governance obligations.

A recent memorandum by Skadden, Arps, Slate, Meagher & Flom LLP, published through the Harvard Law School Forum on Corporate Governance, outlines practical dos and don’ts for directors using AI in their board roles. The message is clear: while AI offers great promise, directors must use it with caution. For compliance professionals, this guidance provides important lessons not only for boardrooms but also for the governance structures that surround them.

The Temptation of AI in the Boardroom

Boards are expected to absorb massive amounts of information, including financial results, strategy papers, compliance reports, and cybersecurity dashboards, often under tight timelines. It is easy to see why a director might feed these materials into an AI tool to produce summaries or ask for red flags. Similarly, transcription services appear attractive for documenting complex board meetings and discussions. But here lies the trap: not all AI tools are created equal. Publicly available chatbots often train on user inputs, meaning that confidential board information could be incorporated into the system and potentially regurgitated to other users, including competitors.

Just as you would never allow directors to send board books through unsecured email, AI tools need guardrails.

Key Risks Identified in the Director’s Guide

The Skadden memorandum outlines several risks directors must consider when using AI in their corporate capacities:

  1. Confidentiality and Data Leakage – Uploading sensitive materials into public AI systems risks exposing trade secrets or personal data. Even if the information is deleted from a user’s history, the AI vendor may still retain and train on it.
  2. Discovery and Litigation Risks – AI chats are records. Like emails, they may be discoverable in litigation or regulatory reviews. Regulators could demand access to AI interactions if they involve matters under scrutiny, such as antitrust reviews of mergers and acquisitions (M&A) activity.
  3. Loss of Privilege – Using AI to transcribe board meetings or communications with counsel risks waiving attorney-client privilege. Once third parties have access, privilege may be lost forever.
  4. Accuracy and Hallucinations – AI outputs can be wrong, biased, or outdated. Treating AI results as authoritative without verification exposes directors to poor decision-making and potential breaches of fiduciary duties.
  5. Erosion of Human Judgment – Over-reliance on AI to make HR, strategy, or other critical decisions risks abdicating the duty of care and loyalty. Directors must remain firmly “in the loop”.

Compliance Lessons for Professionals

From these risks, we can distill key lessons for compliance officers advising boards and executives on AI governance.

1. Confidential Information Must Stay Inside the Perimeter

Compliance professionals should establish clear rules: no uploading of board materials, personal data, or trade secrets into public AI tools. Instead, direct the board to company-approved platforms that are vetted for security and configured to prevent training on sensitive inputs. This is not just a best practice; it may also be required to comply with contractual obligations, privacy laws, and internal data-protection policies.

2. Treat AI Chats as Discoverable Records

Boards should assume that anything shared with AI may one day be discoverable by others. Compliance professionals must include AI chats and transcripts in records-retention policies and advise directors to avoid discussing sensitive legal or competitive issues in public AI systems. This lesson mirrors earlier corporate missteps with text messages and messaging apps. AI is the new frontier for discoverability.

3. Preserve Privilege by Avoiding AI for Legal Matters

Directors must not use AI to record privileged discussions with counsel or board meetings, as doing so could waive attorney-client privilege. Compliance officers should make this an explicit policy. Approved transcription tools may be used for training sessions or customer service calls, but never for board-level deliberations. Losing privilege could cripple a company’s defense in litigation. Compliance officers should hammer this home during board training.

4. Verify Before You Trust

AI has a well-documented tendency to “hallucinate.” Directors must be reminded: AI is not a single source of truth. Compliance programs should emphasize verification. Encourage directors to cross-check AI outputs against trusted sources and ensure management reviews AI-generated analyses before relying on them for decision-making.

5. AI Is a Tool, Not a Decision-Maker

The most important compliance lesson: AI augments but does not replace human judgment. Directors remain bound by duties of care and loyalty. Compliance professionals must make clear that delegating decision-making to AI tools could not only harm the company but also expose directors to personal liability.

Building a Compliance Framework for Board Use of AI

The Skadden guide closes by urging boards to develop clear policies for AI use, including approved tools, acceptable uses, and required disclosures. For compliance officers, this is an opportunity to lead.

Here are key framework elements to consider:

  • Approved Tools List – Maintain a list of AI platforms validated by IT and legal for security and compliance.
  • Acceptable Use Policy – Define when and how directors may use AI (e.g., industry research, summarizing public filings) versus prohibited uses (e.g., uploading board decks, transcribing meetings).
  • Training and Awareness – Provide directors with training on AI risks, including confidentiality, discoverability, and hallucinations.
  • Monitoring and Audit – Periodically review the use of AI by directors to ensure compliance with relevant policies and regulations.
  • Disclosure Requirements – Require directors to disclose if AI tools were used to generate or summarize board-related materials.
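For compliance teams that administer these controls through a governance or GRC platform, the framework elements above can be captured as a simple machine-readable policy. The sketch below is purely illustrative, not drawn from the Skadden memorandum: the tool names, use categories, and helper function are hypothetical stand-ins for whatever a company's own approved-tools list and acceptable-use policy would contain.

```python
# Illustrative sketch of a board AI acceptable-use policy expressed as
# data, plus a minimal screening check. All tool names and use
# categories below are hypothetical examples.

APPROVED_TOOLS = {"SecureSummarizer", "BoardPortalAI"}  # vetted by IT and legal

PROHIBITED_USES = {
    "upload_board_materials",        # confidentiality / data leakage risk
    "transcribe_board_meeting",      # privilege and discoverability risk
    "upload_privileged_communications",
}

ACCEPTABLE_USES = {
    "industry_research",
    "summarize_public_filings",
}

def check_ai_use(tool: str, use: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed director AI use."""
    if tool not in APPROVED_TOOLS:
        return False, f"{tool} is not on the approved tools list"
    if use in PROHIBITED_USES:
        return False, f"{use} is a prohibited use"
    if use in ACCEPTABLE_USES:
        return True, "allowed under the acceptable use policy"
    # Anything unclassified defaults to escalation, not approval.
    return False, "use not classified; escalate to compliance"

# Research with an approved tool passes; uploading board decks is
# blocked even on an approved platform.
print(check_ai_use("SecureSummarizer", "industry_research"))
print(check_ai_use("SecureSummarizer", "upload_board_materials"))
```

The design choice worth noting is the default: an unlisted use is escalated rather than silently permitted, mirroring the guidance that directors should stay "in the loop" and that anything ambiguous belongs with compliance, not the tool.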

Final Thoughts

The memorandum’s do’s and don’ts are a timely reminder: AI governance is not only about company-wide adoption; it starts at the top, with the board itself. Directors tempted to use AI in their own roles face unique risks. These risks could compromise confidentiality, destroy privilege, or erode fiduciary oversight.

For compliance professionals, this presents an opportunity to serve as both educator and enforcer. Just as compliance led the charge on insider trading policies, conflicts of interest, and anti-bribery training, so too must we lead on AI governance.

The bottom line is that AI can be an extraordinary tool for directors. But without compliance guardrails, it can also be a governance trap. Our role is to ensure the boardroom and the company stay on the right side of that line.