Categories
Blog

Board Oversight and Accountability in AI: Where Governance Begins

For boards and Chief Compliance Officers, AI governance does not begin with the model. It begins with oversight, accountability, and the discipline to define who owns risk, who makes decisions, and who answers when something goes wrong. If AI is changing how companies operate, then board governance and compliance leadership must change as well.

In the first article in this series, I laid out the five significant corporate governance challenges around artificial intelligence: board oversight and accountability, strategy outrunning governance, data governance and model integrity, ongoing monitoring, and culture and speak-up. In Part 2, I turn to the first and most foundational issue: board oversight and accountability.

This is where every AI governance program either starts with rigor or begins with ambiguity. And ambiguity, in governance, is rarely neutral. It is usually the breeding ground for failure.

There is a tendency in some organizations to treat AI oversight as a natural extension of technology oversight. That is too narrow. AI touches legal exposure, regulatory risk, data governance, privacy, discrimination concerns, intellectual property, operational resilience, internal controls, and corporate culture. That makes AI a board-level and CCO-level issue, not just a CIO issue.

The central governance question is straightforward: who is responsible for AI risk, and how is that responsibility exercised in practice? If the board cannot answer that question, if management cannot explain it, and if the compliance function is not part of the answer, then the company does not yet have credible AI governance.

Why Board Oversight Matters Now

Boards have always been expected to oversee enterprise risk. What has changed with AI is the speed, scale, and opacity of the risks involved. A business process can be altered quickly by a generative AI tool. A model can influence customer interactions, internal decisions, and external communications at scale. Employees can adopt AI capabilities before governance structures are fully formed. Vendors can embed AI inside products and services without management fully understanding the downstream implications. That is why AI cannot be governed informally. It requires deliberate oversight.

The board does not need to manage models line by line. That is not its role. But the board must ensure that management has established a governance structure capable of identifying AI use cases, classifying risk, escalating significant issues, testing controls, and reporting failures. Just as important, the board must know who inside management is accountable for making that system work.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) offers a very practical lens. The ECCP asks whether a compliance program is well designed, adequately resourced, empowered to function effectively, and tested in practice. Those four questions are equally powerful in evaluating AI governance. Is the governance structure well designed? Is it resourced? Is the compliance function empowered in AI decision-making? Is the program working in practice? If the answer to any of those questions is uncertain, the board should treat that uncertainty as a governance gap.

Accountability Begins with Ownership

One of the oldest problems in corporate governance is fragmented responsibility. AI only intensifies that risk. Consider the typical organizational landscape. IT may own the infrastructure. Legal may review contracts and liability. Privacy may address data use. Security may focus on cyber threats. Risk may handle enterprise frameworks. Compliance may address policy, controls, investigations, and reporting. Business leaders may champion the use case. Internal audit may come in later for assurance. The board, meanwhile, receives updates from multiple directions.

Without a clearly defined operating model, this becomes a classic accountability fog. Everyone has a slice of the issue, but no one owns the whole risk. A more disciplined approach requires naming an accountable executive owner for enterprise AI governance; in some companies, that may be the Chief Risk Officer. In others, it may be a Chief Legal Officer, Chief Compliance Officer, or a designated senior executive with cross-functional authority. The title matters less than the clarity. The organization must know who convenes the process, who resolves conflicts, who signs off on high-risk use cases, and who reports upward to the board.

For the CCO, this does not mean taking sole ownership of AI. That would be unrealistic and unwise. But it does mean insisting that compliance has a defined role in the governance architecture. AI raises issues of policy adherence, training, escalation, investigations, third-party risk, disciplinary consistency, and remediation. Those are core compliance issues. A governance model that sidelines the CCO is not merely incomplete; it is unstable.

The Right Committee Structure

Once ownership is established, the next question is structural: where does AI governance live? The answer should be enterprise-wide, but with a defined committee architecture. Companies need at least two governance layers.

The first is a management-level AI governance committee or council. This should be a cross-functional working body with representation from compliance, legal, privacy, security, technology, risk, internal audit, and relevant business units, as appropriate. Its purpose is operational governance. It reviews proposed use cases, classifies risk levels, evaluates controls, addresses issues, and determines escalation.

The second is a board-level oversight mechanism. This does not always require a new standing AI committee. In some organizations, oversight may sit with the audit committee, risk committee, technology committee, or full board, depending on the company’s structure and maturity. What matters is not the name of the committee. What matters is that there is an identified board body with responsibility for overseeing AI governance and receiving regular reporting.

This is consistent with the NIST AI Risk Management Framework, which begins with the “Govern” function. NIST recognizes that governance is not an afterthought; it is the foundation that enables the rest of the risk management lifecycle. ISO/IEC 42001 similarly reinforces that AI governance must be embedded in a management system with defined roles, controls, review mechanisms, and continuous improvement. Both frameworks point in the same direction: AI governance requires structure, not aspiration.

Reporting Lines That Actually Work

Good governance lives or dies by reporting lines. If information cannot move efficiently upward, then oversight will be stale, filtered, or incomplete. Boards should require periodic reporting on several core areas: the current AI inventory, high-risk use cases, incident trends, control exceptions, third-party AI dependencies, regulatory developments, and remediation status. The board does not need a data dump. It needs decision-useful reporting.

That means management should create a formal reporting cadence. Quarterly reporting is sufficient for many organizations, but high-risk environments require more frequent updates. The reporting should identify not only what has been approved, but what has changed. That includes scope changes, incidents, near misses, new vendors, policy exceptions, and any material concerns raised by employees, customers, or regulators.

The CCO should be part of the reporting chain, not a bystander. A balanced governance model allows compliance to elevate concerns independently if necessary, particularly when a business leader is pushing to move faster than controls will support. That is not an obstruction. That is governance doing its job.

Escalation Protocols: The Missing Middle

Many companies have approval procedures, but far fewer have robust escalation protocols. That is a mistake. Governance does not fail only when structure is absent. It also fails when there is no clear path for handling edge cases, incidents, or disagreements.

An effective AI governance program should specify escalation triggers. For example, a use case should be escalated when it affects employment decisions, consumer rights, regulated communications, financial reporting, sensitive personal data, or legally significant outcomes. Escalation should also occur when there is evidence of model drift, hallucinations in a material context, unexplained bias, control failure, a third-party vendor issue, or a credible employee concern.

These triggers should not live in someone’s head. They should be documented in policy, operating procedures, or a risk classification matrix. There should also be a defined process for who gets notified, what interim controls are applied, whether deployment pauses are available, and how issues are documented for follow-up.
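To make the point concrete, escalation triggers lend themselves to a simple, machine-readable form. The sketch below is purely illustrative; the trigger names mirror the examples above, but the function and data structure are hypothetical, not drawn from any particular framework:

```python
# Illustrative sketch: escalation triggers documented as data, not tribal knowledge.
# The trigger names below are hypothetical examples for demonstration only.

ESCALATION_TRIGGERS = {
    "employment_decisions",
    "consumer_rights",
    "regulated_communications",
    "financial_reporting",
    "sensitive_personal_data",
    "legally_significant_outcomes",
}

def requires_escalation(use_case_tags: set[str]) -> bool:
    """Return True if any tag on a proposed use case matches a documented trigger."""
    return bool(use_case_tags & ESCALATION_TRIGGERS)

# A customer-facing tool that touches consumer rights must be escalated.
print(requires_escalation({"customer_service", "consumer_rights"}))  # True
print(requires_escalation({"internal_search"}))                      # False
```

The value of expressing triggers this way is auditability: the list itself becomes a controlled document that can be versioned, reviewed, and tested.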

This is another place where the ECCP remains highly relevant. DOJ prosecutors routinely ask whether issues are escalated appropriately, whether investigations are timely, and whether lessons learned are incorporated into the program. AI governance should be built with the same operational seriousness. If an issue arises, the company should not be improvising its governance response in real time.

Documentation Is Evidence of Governance

One of the great compliance truths is that governance without documentation is hard to prove and harder to sustain. For AI governance, documentation should include at least these categories: use case inventories, risk classifications, approval memos, committee minutes, control requirements, incident logs, training records, validation summaries, escalation decisions, and remediation actions. This is not paperwork for its own sake. It is the evidentiary trail that shows the organization is governing AI thoughtfully and consistently.
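To illustrate what one entry in such an evidentiary trail might look like, here is a minimal sketch of a use case inventory record. The field names and values are hypothetical assumptions for illustration, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical shape for one entry in an AI use case inventory.
@dataclass
class UseCaseRecord:
    name: str
    owner: str                       # accountable executive or function
    risk_tier: str                   # e.g. "low", "medium", "high"
    approval_date: date
    controls: list[str] = field(default_factory=list)
    incidents: list[str] = field(default_factory=list)

record = UseCaseRecord(
    name="Contract summarization assistant",
    owner="Chief Compliance Officer",
    risk_tier="medium",
    approval_date=date(2026, 4, 1),
    controls=["human review before external use", "quarterly validation"],
)
print(record.risk_tier)  # medium
```

Even a record this simple answers the questions oversight bodies ask: who owns it, how risky is it, when was it approved, and what controls and incidents attach to it.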

Boards should care about this because documentation is what allows oversight to be more than anecdotal. It is also what allows internal audit, regulators, and investigators to assess whether the governance program is functioning.

For the CCO, documentation is particularly important because it connects AI oversight to the larger compliance architecture. It helps align AI governance with policy management, training, investigations, speak-up systems, third-party due diligence, and corrective action tracking. In other words, it turns AI governance from a loose collection of meetings into a defensible management process.

Board Practice and CCO Practice Must Meet in the Middle

The best AI governance models do not pit the board and the compliance function against innovation. They create a structure that allows innovation to move, but only within defined guardrails. Boards should ask sharper questions. Who owns AI governance? What committee reviews high-risk use cases? What issues must be escalated? What reporting do we receive? How are incidents tracked and remediated? What role does compliance play?

CCOs should be equally direct. Where does compliance sit in the approval process? How do employees report AI concerns? What documentation is required? When can compliance elevate an issue on its own? How are lessons learned being fed back into policy and training?

This is the practical heart of the matter. Oversight is not a slogan. Accountability is not a press release. Both must be built into reporting lines, committee design, escalation protocols, and documentation discipline.

AI governance begins here because every other issue in this series depends on it. If oversight is weak and accountability is blurred, strategy will outrun governance, data issues will go unnoticed, monitoring will become inconsistent, and culture will not carry the load. But if the board and CCO get this first issue right, they create the governance spine that the rest of the program can rely on.

Join us tomorrow, where we review the role of data governance in AI, because that is where every effective AI governance program either starts strong or starts to fail.

Categories
Innovation in Compliance

Innovation in Compliance: Dr. Rohan Lall: Innovation, Clinical Evidence, and Compliance in Electrifying Spine Surgery

Innovation occurs across many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox visits with Dr. Rohan Lall, a clinically trained Neurological Surgeon and Chief Medical Officer of SynerFuse, about innovation in spine surgery and the compliance infrastructure needed to support it.

Dr. Lall explains TLIF (transforaminal lumbar interbody fusion) and ETLIF, which integrates direct nerve root stimulation into reconstructive spine surgery to address persistent pain from chronically injured nerves even after decompression and fusion. Dr. Lall describes the innovation as team-driven, highlighting collaboration and detailing the regulatory path for a novel Class III device, including a feasibility proof-of-concept study, third-party data management, and an independent data and safety monitoring board. Dr. Lall outlines how compliance leaders should align with business speed while managing FDA requirements, data integrity, ethics, and risk, and he notes future impacts from neuromodulation, robotics, and image guidance.

Key highlights:

  • Back Surgery Basics and Electrified TLIF Explained
  • Innovation Origin Story
  • Regulatory and Collaboration Hurdles
  • Clinical Trials and Data Integrity
  • How Compliance Can Help Innovators

Resources:

Dr. Rohan Lall on LinkedIn

SynerFuse Company Website

Innovation in Compliance is a multi-award-winning podcast that was recently ranked Number 4 in Risk Management by 1,000,000 Podcasts.

Categories
AI Today in 5

AI Today in 5: April 7, 2026, The AI Governance Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. AI for auditing. (FT)
  2. AI is creating compliance gaps in the mortgage industry. (National Mortgage Professional)
  3. AI-enabled compliance reduces healthcare risks. (The Palm Beach Post)
  4. AI issues in the workplace. (Mintz)
  5. Compliance priorities are shifting towards AI governance. (BDO)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Daily Compliance News

Daily Compliance News: April 7, 2026, The Corporate Retreat from Hell Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • AI in auditing. (FT)
  • Trump to cut 9,400 TSA positions. (Reuters)
  • Germany uncovers €300 payments scandal. (Bloomberg)
  • When a corporate retreat goes wrong, very wrong. (WSJ)
Categories
Blog

Five Corporate Governance Challenges in AI: A Roadmap for CCOs and Boards

AI is not simply a technology deployment question. It is a corporate governance challenge that requires board attention, compliance discipline, and operational oversight. For Chief Compliance Officers and board members, the task is not merely to encourage innovation, but to ensure that innovation is governed, monitored, and aligned with business values and risk tolerance.

Artificial intelligence has moved from pilot projects and innovation labs into the bloodstream of the modern corporation. It now touches customer service, finance, procurement, HR, sales, third-party management, internal reporting, and strategic decision-making. That expansion is why AI can no longer be treated as a narrow IT issue. It is a governance issue. More particularly, it is a governance issue with compliance implications at every lifecycle stage.

For compliance professionals, that means AI is not simply about whether a model works. It is about whether the organization has built the structures, accountability, and culture to use AI responsibly. For boards, it means AI oversight can no longer be delegated away with a cursory quarterly update. The board must understand not only where AI is being used, but whether the company’s governance architecture is fit for purpose.

This is the first post in a series examining the five most important corporate governance issues around AI. They are not exotic or theoretical. They are the same types of governance challenges compliance professionals have seen before in other contexts: ownership, control design, data integrity, monitoring, and culture. AI raises the stakes and accelerates the timeline.

1. Board Oversight and Accountability

The first challenge is the most fundamental: who is actually in charge?

One of the great failures in governance is diffuse accountability. When everyone has some responsibility, no one has real responsibility. AI governance suffers from this problem in many organizations. Legal is concerned about liability. IT is focused on systems. Security is focused on cyber risk. Privacy is focused on data usage. Compliance is focused on controls and conduct. Business leaders are focused on speed and competitive advantage. The board hears fragments from all of them, but may not receive a coherent picture.

That is a dangerous place to be. AI governance begins with clear ownership. The board should know who is accountable for enterprise AI governance, how decisions are escalated, and how high-risk use cases are reviewed. A company does not need bureaucracy for its own sake, but it does need clarity.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs remains instructive, even if AI is not its exclusive focus. The ECCP repeatedly asks whether compliance is well designed, adequately resourced, empowered to function effectively, and tested in practice. Those same questions apply directly to AI governance. If accountability for AI is vague, if compliance is not in the room, or if oversight is not documented, governance will be performative rather than operational.

2. Strategy Outrunning Governance

The second challenge is one many companies know all too well: innovation is sprinting ahead while governance is still tying its shoes.

Business teams are under enormous pressure to deploy AI quickly. Senior leadership hears daily that AI can deliver efficiency, productivity, growth, and competitive advantage. Vendors promise transformation. Employees experiment informally. In that environment, governance can be cast as friction.

But good governance is not the enemy of innovation. It is what keeps innovation from becoming unmanaged exposure.

The central question here is simple: has the company defined the rules of the road before putting AI into production? In practical terms, has it determined which use cases are permissible, which require enhanced review, which are prohibited, and which must go to the board or a designated committee? Has it established approval criteria, documentation standards, and stop/go decision points?
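One way to picture those rules of the road is as a simple routing gate applied before any use case moves forward. The sketch below is illustrative only; the category labels and tags are hypothetical assumptions, not terms from any standard:

```python
# Illustrative stop/go gate for a proposed AI use case.
# The tag sets and category labels below are hypothetical examples.

PROHIBITED = {"covert_surveillance"}
ENHANCED_REVIEW = {"employment_decisions", "credit_decisions"}

def route_use_case(tags: set[str]) -> str:
    """Route a proposed use case to the right approval path based on its tags."""
    if tags & PROHIBITED:
        return "prohibited"
    if tags & ENHANCED_REVIEW:
        return "enhanced_review"  # escalates to the governance committee or board
    return "permitted"

print(route_use_case({"marketing_copy"}))        # permitted
print(route_use_case({"employment_decisions"}))  # enhanced_review
```

The point is not the code itself but the discipline it represents: the categories exist, they are written down, and every use case passes through them before deployment.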

The NIST AI Risk Management Framework is especially helpful on this point because it treats AI governance as an ongoing management discipline rather than a one-time sign-off. Its emphasis on Govern, Map, Measure, and Manage is a powerful reminder that strategy and governance must move together. ISO/IEC 42001 brings similar discipline by framing AI management systems around structure, accountability, controls, and continual improvement.

The lesson for compliance professionals is clear: if the business has a faster process for buying or launching AI than for reviewing risks and governance, it has already fallen behind.

3. Data Governance, Privacy, and Model Integrity

The third challenge is the quality and integrity of what goes into, and comes out of, AI systems.

AI does not operate in a vacuum. It depends on data, assumptions, training inputs, prompts, workflows, and human interaction. That means weaknesses in data governance are not side issues. They are central governance risks. Poor data lineage, unvalidated data sources, confidentiality breaches, inadequate access controls, and bias in training data can all create downstream failures that become legal, reputational, regulatory, and operational events.

For boards, the temptation is to hear “AI” and think about futuristic questions. But the more immediate concern is often much more familiar. Does management know where the data came from? Does the company understand whether sensitive or proprietary information is being exposed? Are outputs accurate enough for the intended use? Are the controls around data usage consistent with privacy obligations and internal policy?

This is where AI governance intersects with traditional compliance disciplines in a very real way. Privacy, information governance, records management, cybersecurity, and internal controls all converge here. A system that produces impressive outputs but relies on flawed or unauthorized data is not a governance success. It is a governance failure waiting to be discovered.

ISO 42001 is particularly useful because it forces organizations to think in systems terms. It is not merely about the model itself; it is about the management environment surrounding it. That is exactly how boards and CCOs should think about model integrity.

4. Ongoing Monitoring and the “Day Two” Problem

The fourth challenge is the one that too many organizations underestimate: governance after deployment. A great many companies put substantial effort into approving an AI use case, but far less into monitoring it once it is live. Yet this is where some of the greatest risks emerge. Models drift. Employees use tools for new purposes. Controls that looked solid on paper weaken in practice. Reviewers become overloaded. Risk profiles change. Regulators evolve their expectations. The use case expands far beyond its original design.

That is why AI governance must include what I call the “Day Two” problem. What happens after launch? This is once again a place where the ECCP offers a useful lens. The DOJ does not ask merely whether a policy exists. It asks whether it works in practice, whether it is tested, and whether lessons learned are incorporated back into the program. AI governance should be held to the same standard. If the company has no way to monitor performance, investigate anomalies, log incidents, revalidate assumptions, or update controls, then it lacks effective AI governance. It has an approval memo.

The board should be asking for reporting that goes beyond usage metrics or efficiency gains. It should want to know about incidents, exception trends, control failures, validation results, and remediation efforts. In other words, governance must be dynamic because AI risk is dynamic.

5. Culture, Speak-Up, and Human Judgment

The fifth challenge may be the most overlooked, yet it is often the earliest warning system a company has: culture. Employees will usually see AI failures before leadership does. They will spot the odd output, the customer complaint, the biased result, the misuse of a tool, the shortcut around a control, or the inaccurate summary that could trigger a bad decision. The question is whether they will say something.

This is why AI governance is not solely about structure and policy. It is also about whether the organization has a culture that encourages people to raise concerns. Do employees understand that AI-related problems are reportable? Do they know where to raise them? Are managers trained to respond properly? Are anti-retaliation protections reinforced in this context?

Human judgment also matters because AI does not eliminate accountability. If anything, it heightens the need for judgment. A machine-generated output can create a false sense of confidence, especially when it arrives quickly and sounds authoritative. Boards and CCOs must resist that temptation. Human oversight is not a ceremonial step. It is an essential governance control.

The strongest AI governance programs will be the ones that connect structure with culture. They will not merely create committees and frameworks. They will create an environment where people trust the system enough to challenge it.

The Governance Road Ahead

For CCOs and boards, the governance challenge around AI is not mysterious. It is demanding, but it is not mysterious. The questions are recognizable. Who owns it? What are the rules? Can we trust the data? Are we monitoring the system over time? Will people speak up when something goes wrong?

These five issues form the roadmap for the series ahead. In the coming posts, I will take up each one in turn and explore what it means in practice for modern compliance programs and board oversight. Because if there is one lesson here, it is this: AI governance is not about admiring the technology. It is about governing the enterprise that uses it.

Join us tomorrow, where we review board oversight and accountability, because that is where every effective AI governance program either starts strong or starts to fail. 

Categories
The PfBCon Podcast

The PFBCon Podcast: Back to the Future of Podcasting: The Audio–Video Convergence with Rob Greenlee

Rob Greenlee discusses podcasting’s evolution from early audio roots to an initial wave of video podcasts, a decade-long industry emphasis on audio influenced by shows like Serial and radio’s entry, and today’s renewed convergence of audio and video driven by audience preferences and platform dynamics. He notes video’s long-standing presence on devices like Microsoft’s Zune and parallels with YouTube’s proprietary model, which have fueled ongoing debate over what defines a podcast.

Greenlee highlights the challenges creators face in balancing high-quality audio with compelling video, the complexities of multi-format distribution, and the value of optimizing separate audio and video versions rather than simply extracting audio from video. He also points to renewed TV-like distribution trends and emphasizes community-building and engagement-focused monetization beyond raw download numbers.

Key highlights:

  • Podcasting has come full circle.
  • Early video podcasting was more important than many remember.
  • YouTube has redefined podcasting.
  • The future belongs to hybrid creators.
  • Workflow pressure is now one of the biggest challenges for creators.
  • Distribution strategy matters more than ever.
  • Every platform needs its own version of the content.
  • Engagement is becoming the new currency of advertising.

Resources:

Follow Rob Greenlee on:

Personal Website

New Media Show Live

Trust Factor with Rob Greenlee

Spoken Life Podcast

Adore Network

YouTube

Instagram

LinkedIn

Facebook

X (formerly Twitter)

Categories
The Ethics Experts

Episode 247 – Carla Wilson

In this episode of The Ethics Experts, Nick Gallo welcomes Carla Wilson.

Carla is an experienced compliance and ethics leader with a strong track record in managing complex programs, including a DOJ FCPA investigation and monitorship. She has worked with US-headquartered companies such as Stericycle and Koch Industries, as well as global organizations like BP, helping them build robust compliance frameworks and navigate complex regulatory environments.

Having lived and worked in five countries, Carla brings a truly global perspective to compliance. She is passionate about fostering a culture of integrity and designing practical, risk-based solutions that support both business objectives and compliance goals.

She will shortly be stepping into a new leadership role, where she will continue to build and evolve high-impact compliance programs.

Connect with Carla on LinkedIn

Categories
Daily Compliance News

Daily Compliance News: April 6, 2026, The CSI Uncovered Corruption Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Corruption in Croatian skiing. (SkiRacing)
  • How a CSI uncovered SC police corruption. (WSJ)
  • What’s next for FirstEnergy execs? (Cleveland)
  • Russian corruption is fueling massive troop losses. (PBS News Hour)
Categories
FCPA Compliance Report

FCPA Compliance Report: From DOJ’s 7 Compliance Pillars to Sentencing Mitigation: Joseph De Gregorio’s Compliance Rebuild Framework – Part 1

In this episode, Tom Fox welcomes former Wall Street trader Joseph De Gregorio, who was federally convicted and now applies a “compliance rebuild” methodology to demonstrate genuine remediation under legal scrutiny. This is Part 1 of a two-part podcast series.

Using the Matthew Bowyer illegal sports betting case, Joseph explains the federal pre-sentence interview and pre-sentence report (PSR) process, emphasizing that the probation officer’s credibility assessment and PSR narrative heavily influence sentencing and downstream treatment across the federal system. He describes submitting a 3,500-word personal narrative before the PSR interview, which was attached in full and cited by the judge as mitigation, resulting in a one-year-and-a-day sentence rather than the government’s four-year request. Joseph maps DOJ’s seven corporate compliance program dimensions to individuals via a personal compliance manual, independent accountability structure, credentialed education, verifiable monitoring, documented transparency, voluntary discipline actions, and a post-sentencing continuous improvement plan centered on victims-first accountability.

Key highlights:

  • Joseph’s Wall Street Past
  • The Bowyer Betting Case
  • What is a PSR, and why does it drive sentencing?
  • Preparing for the Interview
  • From Corporations to Individuals
  • Seven Pillars Framework

Resources:

Joseph De Gregorio – Founder, JN Advisor™ Maximum Sentence Reduction – Minimum Time Served

📋 Initial Consultation: https://forms.gle/2fLczk7bbwM7KSaP6

Bloomberg Law Contributor: “How to Get a Judge to Reduce Your Client’s White-Collar Sentence” – Bloomberg Law 

Bloomberg Tax Contributor: Tax Fraud Sentencing Has a Gap Defense Attorneys Are Missing

Featured Expert: American Bar Association

Featured Sentencing Mitigation Expert: Law360

Featured Expert on Us Weekly with 5x Emmy Award-winning journalist Kristin Thorne for her “Uncovered” series (click link for full video):

https://www.usmagazine.com/crime-news/news/federal-sentencing-strategist-reveals-why-some-real-housewives-stars-commit-fraud/

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Interested in the intersection of Sherlock Holmes and modern compliance? Check out my latest book, The Game is Afoot in Compliance.

Categories
AI Today in 5

AI Today in 5: April 6, 2026, The AI in Healthcare Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. AI risks for auto lenders. (AutoNews)
  2. Moving beyond AI pilots. (Boston University)
  3. AI readiness and legal compliance. (ITPro)
  4. Banks must test AI beyond legal thresholds. (QAFinancial)
  5. AI in healthcare. (FoxNews)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.