
AI in Healthcare: Five Healthcare AI Stories You Need to Know This Week – April 3, 2026

Welcome to AI in Healthcare in 5 Stories. This podcast is a Weekly Briefing of the five most important AI developments shaping healthcare, medicine, and life sciences. Each week, Tom Fox breaks down the latest stories in clinical innovation, regulation, privacy, compliance, patient safety, and operational transformation through a practical, business-focused lens. Designed for healthcare compliance professionals, executives, legal teams, clinicians, and industry leaders, the podcast moves beyond headlines to explain what each development means in the real world.

The top five stories for the week ending April 3, 2026, include:

  1. Writing prescriptions over the phone using AI. (WSBT)
  2. Patients with medical mysteries are headed to AI for research. (NYT)
  3. How well does AI tech work in healthcare? (Technology Review)
  4. Where is AI in healthcare headed? (Futurism)
  5. AI’s healthcare test. (Inc42)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.


AI in Financial Services in 5 Stories – Week Ending April 3, 2026

Welcome to AI in Financial Services in 5 Stories. A practical weekly roundup of the five most important AI developments affecting banking, insurance, payments, asset management, and fintech. Each Friday, Tom Fox will break down the top stories that matter most through the lenses of compliance, risk management, governance, and business strategy. Designed for compliance professionals, executives, legal teams, and financial services leaders, it goes beyond headlines to explain why each development matters in a highly regulated industry. The result is a concise weekly briefing that helps listeners stay current on AI innovation while asking sharper questions about oversight, accountability, and trust.

This week’s stories include:

  1. Thinking about AI from the bottom up. (FintechFutures)
  2. The AI fintech market in 2033. (Futurism)
  3. Learning to say no for AI. (FinTech Global)
  4. AI is changing how SaaS products for tech are designed. (FinTech Global)
  5. SoftBank is betting everything on AI. What could go wrong? (FinTech Weekly)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.


AI Today in 5: April 2, 2026, The Just Say No Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI from the business world, compliance, ethics, risk management, leadership, or general interest to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. Responsible AI in the regulatory framework. (Wealth Management)
  2. HHS moves on AI in healthcare oversight. (GovInfo Security)
  3. Creating an AI Incident Response Plan. (National Review)
  4. Where is AI in healthcare headed? (Futurism)
  5. Saying no in GenAI projects. (FinTech Global)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.


AI Risk Appetite: The Conversation Boards Are Not Having

There is a quiet but serious problem developing in boardrooms around AI. Directors are hearing about innovation. They are hearing about productivity gains. They are hearing about competitive pressure, transformation, and speed. What they are not hearing enough about is risk appetite. That is the missing conversation.

Most companies are already using AI in one form or another. Some are deploying enterprise tools. Some are approving vendor solutions with embedded AI. Some are allowing business units to experiment in a controlled fashion. Some, of course, are doing all of the above and pretending it is a strategy. Yet for all the discussion about adoption, there has been far less focus on a basic governance question: what level of AI-driven decision risk is acceptable for this company? That is not a technical question. It is a board question.

The Risk Appetite Gap in AI Governance

AI is not simply another software purchase. It can influence recommendations, rankings, forecasts, summaries, classifications, and decisions. It can operate upstream from business judgments or directly within them. It can affect customer communications, hiring decisions, compliance monitoring, internal investigations, financial analysis, and reporting workflows. So the central governance challenge is not whether AI exists in the enterprise. It is how much authority the company is willing to give it, in what contexts, with what controls, and with what margin for error. If you do not define that, you do not have AI governance. You have AI optimism.

What Is AI Risk Appetite?

At its core, AI risk appetite is the level and type of AI-related risk an organization is willing to accept in pursuit of business value. That includes a series of questions boards ought to be asking. How much error is acceptable in AI-generated output before a human must intervene? Which uses are low-risk productivity enhancements, and which are sensitive, consequential, or reputation-threatening? In what contexts can AI make recommendations only, and in what contexts can it influence or automate action? How much dependence on opaque third-party models is acceptable? What degree of explainability does the company require for different use cases? When does speed stop being a benefit and start becoming exposure?

Many boards are currently discussing AI deployment without ever discussing AI tolerance. That is like approving a global third-party strategy without deciding what level of distributor risk, sanctions exposure, or bribery risk the company is prepared to accept. No compliance professional would recommend that. Yet in AI, organizations do versions of it every day.

Why Boards Avoid the Conversation

There are several reasons boards have been slow to engage on AI risk appetite.

First, the technology moves fast, and the terminology can become a fog machine. Directors do not want to look uninformed, so discussions often stay broad and strategic. Second, management may not yet have the internal inventory or classification framework needed to make a risk-appetite conversation concrete. Third, many companies are still in an experimentation phase, which creates the illusion that formal governance can come later. Fourth, there is a natural tendency to believe AI risk belongs to IT, legal, or security, rather than to enterprise oversight.

AI risk appetite cannot be delegated away because it intersects with business judgment, ethics, records, privacy, data governance, resilience, and culture. It cuts across functions. It also cuts across reputational boundaries. If a company uses AI in a way that produces unfair results, faulty decisions, poor disclosures, or customer harm, nobody is going to say, “Well, that was a technical issue, so the board need not have been involved.” Boards do not get a hall pass when the governance system is missing.

The Conversations Boards Need to Be Having

Risk Map. The first conversation is about where AI sits on the company’s risk map. Is AI a productivity tool, a strategic platform, a decision-support capability, or some combination of all three? The answer matters because it affects the level of oversight. A company using AI for internal drafting support faces one type of exposure. A company using AI in customer-facing interactions, underwriting, hiring, fraud detection, or compliance monitoring faces quite another.

Decision Significance. Boards need to ask where AI is being used in decisions that affect legal rights, financial outcomes, customer treatment, employment status, compliance judgments, or public disclosures. Not all uses are equal. A board that treats AI use in marketing copy the same as AI use in employee discipline is not governing. It is lumping.

Acceptable Error and Human Review. Boards should ask: what level of inaccuracy can the company tolerate in a given use case, and who is accountable for checking the output before action is taken? Human oversight has become one of those phrases everybody likes, and few define. Directors need something more disciplined. When is review mandatory? What does a meaningful review look like? What evidence shows that the reviewer is not simply rubber-stamping machine output?

Data and Model Dependency. What data is being used? Who owns it? Who has the right to it? How current is it? Are third-party vendors changing capabilities under existing contracts? Is the company becoming dependent on systems it does not fully understand or cannot easily audit? Boards should not need to know how the engine works, but they absolutely need to know whether the company is driving a car with uncertain brakes.

Incident Tolerance and Escalation. What types of AI failures must be reported to senior leadership or the board? A hallucinated internal memo may be embarrassing. A flawed AI-assisted hiring screen or customer communication may be far more serious. The board should ensure management has defined materiality thresholds before an incident occurs, not after the headlines begin.

The CCO’s Role in Shaping the Conversation

This is where compliance officers can be enormously helpful.

The CCO is often the person in the enterprise most experienced at turning abstract risk into operating discipline. Compliance knows how to frame risk-based governance. It knows how to create escalation structures, policy frameworks, investigations protocols, and oversight dashboards. It knows that culture and control design matter just as much as rules. Here are four ways to do so.

  1. A CCO can help management develop a tiered inventory of AI use cases. This is essential. Boards cannot discuss appetite in the abstract. They need to see the map. Which uses are low risk? Which are medium? Which are high? Which are prohibited absent specific approval?
  2. Compliance can help translate legal, ethical, and operational concerns into board-level language. Directors do not need a seminar on neural networks. They need clear framing around consequences, control points, accountabilities, and thresholds.
  3. A CCO can help build governance around human review, documentation, and escalation. If the company says a human is responsible, compliance can help test whether that responsibility is real, documented, and operational.
  4. Compliance can keep the conversation grounded in how people actually behave. Employees will choose convenience. Business teams will move quickly. Vendors will market aggressively. Managers may trust the generated output more than they should. A good compliance officer knows that policy must be built for actual human behavior, not ideal behavior.
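The tiered inventory described in point one can be made concrete. As a minimal sketch, with all use cases, tiers, and owners being hypothetical illustrations rather than any prescribed framework, the inventory is just structured data a compliance team could maintain and summarize for the board:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"                # e.g., internal drafting support
    MEDIUM = "medium"          # e.g., decision support with review
    HIGH = "high"              # e.g., rights- or customer-affecting
    PROHIBITED = "prohibited"  # barred absent specific approval

@dataclass
class AIUseCase:
    name: str
    owner: str
    tier: RiskTier
    human_review_required: bool

# Illustrative inventory entries (hypothetical examples only)
inventory = [
    AIUseCase("Internal drafting support", "Communications", RiskTier.LOW, False),
    AIUseCase("Fraud-detection triage", "Risk", RiskTier.MEDIUM, True),
    AIUseCase("AI-assisted hiring screen", "HR", RiskTier.HIGH, True),
]

def board_summary(cases):
    """Count use cases per tier -- the 'map' a board can actually discuss."""
    summary = {}
    for case in cases:
        summary[case.tier] = summary.get(case.tier, 0) + 1
    return summary

def mandatory_review(cases):
    """High-tier uses always require human review, regardless of the flag."""
    return [c.name for c in cases if c.tier == RiskTier.HIGH or c.human_review_required]
```

The point of the structure is not the code itself but the discipline it forces: every use case must have a named owner, an assigned tier, and an explicit answer on human review before it reaches the board.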

Compliance as Risk Mitigation and Business Enablement

One of the enduring frustrations in compliance is that governance is often viewed as a speed bump until something goes wrong. AI gives us another chance to make the larger point. Governance does not slow innovation. Bad governance slows innovation by causing rework, distrust, remediation, and public embarrassment.

A well-defined AI risk appetite does the opposite. It gives the business clarity. It tells innovation teams where they can move quickly and where they must slow down. It helps procurement negotiate the right terms. It helps managers know when to escalate. It helps employees understand when they may rely on AI and when they must verify it. Most importantly, it gives the board a strategic rather than reactive basis for oversight.

That is compliance at its best. Not Dr. No from the Land of No, but the function that makes responsible growth possible.

Final Thoughts

Boards need not fear AI. But they do need to govern it. And governance begins with clarity about appetite. If your board has discussed an AI opportunity but not AI tolerance, it has only had half the conversation. If your company has adopted tools but has not defined acceptable levels of error, autonomy, dependency, and oversight, it is operating on hope. Hope, as every compliance professional knows, is not a strategy and certainly not a control.

Here are the questions I would leave you with. Has your board defined what level of AI-driven decision risk it is willing to accept? Can management explain how that appetite changes across low-risk and high-risk use cases? And can your compliance function show, with evidence, whether the company is operating inside those lines? If the answer is no, then the conversation boards are not having may be the most important AI conversation of all.


AI Today in 5: April 1, 2026, The AI Down Under Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI from the business world, compliance, ethics, risk management, leadership, or general interest to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. Compliance is not enough. (QA Financial)
  2. AI is a confident liar. (Healthcare IT Today)
  3. CA to ban vendors who cannot prove no AI bias. (Computer World)
  4. Jamie Dimon says AI will shorten the work week to 3.5 days. (CBS News)
  5. How Australia is operationalizing AML compliance. (FinTech Global)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.


Innovation in Compliance: From MVP to MVF: Governing AI Agents with Guardrails, Policy-as-Code, and Board Oversight with Aravind Parthasarathy

Innovation occurs across many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox visits with Aravind Parthasarathy, Vice President, Client Partner for Telco & Tech at NewRocket, a ServiceNow implementation company focused on helping large enterprises adopt agentic AI.

They discuss the shift from viewing AI as a tool to treating it as an operator with humans as mentors handling exceptions, and what this means for compliance, GRC, and risk management. Aravind contrasts minimum viable product (MVP) with minimum viable function (MVF), emphasizing end-to-end autonomous business functions, probabilistic performance, and continuous learning. They cover governance needs, including guardrails, policy-as-code, auditability of agent decisions, model drift monitoring, and automated “trust but verify.” Aravind provides a telecom outage-troubleshooting example with compliance notification obligations, addresses board-level AI governance using emerging standards like ISO 42001, suggests KPIs (accuracy, autonomy), recalibrates operational metrics, and introduces “context graphs” to capture decision data over time.
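The "policy-as-code" and automated "trust but verify" ideas discussed above can be sketched in simplified, hypothetical form: machine-checkable rules evaluated against each proposed agent action, with every decision appended to an audit log so agent behavior remains reviewable. All rule names, action fields, and thresholds here are illustrative assumptions, not NewRocket's or ServiceNow's actual implementation:

```python
import json
import time

# A policy is a named predicate over a proposed agent action (hypothetical schema).
POLICIES = [
    # An outage fix may proceed only if compliance has been notified.
    ("notify_on_outage",
     lambda a: a.get("type") != "outage_fix" or a.get("compliance_notified", False)),
    # Agents may not autonomously issue refunds above a set limit.
    ("no_autonomous_refunds_over_limit",
     lambda a: not (a.get("type") == "refund" and a.get("amount", 0) > 500)),
]

audit_log = []  # append-only record of every decision, for later review

def check_action(action):
    """Evaluate an action against all policies; log the outcome either way."""
    violations = [name for name, rule in POLICIES if not rule(action)]
    entry = {
        "ts": time.time(),
        "action": action,
        "violations": violations,
        "allowed": not violations,
    }
    audit_log.append(json.dumps(entry))  # every decision is auditable
    return not violations, violations
```

In this sketch, an outage fix attempted without the required compliance notification is blocked and its violation recorded, which is the exception a human mentor would then handle; a routine small refund passes through untouched.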

Key highlights:

  • AI From Tool to Operator
  • Compliance in the MVF Era
  • Trust but Verify at Scale
  • Scaling to Multi-Agent Systems
  • Board Level AI Governance
  • Misconceptions and Practical Next Steps

Resources:

Aravind Parthasarathy on LinkedIn

NewRocket Website

Innovation in Compliance is a multi-award-winning podcast that was recently ranked Number 4 in Risk Management by 1,000,000 Podcasts.


AI Today in 5: March 31, 2026, The AI and False Arrest Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI from the business world, compliance, ethics, risk management, leadership, or general interest to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. AI for API security. (GovInfoSecurity)
  2. Using AI for SEC filings research. (BusinessWire)
  3. AI-based facial recognition leads to false arrests. (CNN)
  4. Visa prepares for AI-initiated transactions. (AINews)
  5. Can AI help with financial literacy? (FinTechMagazine)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.


FCPA Compliance Report: Buying Blind: AI Procurement Risks and Ethics with Jessica Tillipman

In this episode, Tom Fox welcomes Jessica Tillipman, Associate Dean for Government Procurement Law Studies and Government Contracts Advisory Council Distinguished Professorial Lecturer in Government Contracts Law, Practice & Policy. We take a deep dive into federal procurement and compliance.

We begin with Tillipman’s recent article “Buying Blind: Corruption Risk and the Erosion of Oversight in Federal AI Procurement.” Tillipman explains how her initial focus on AI as a tool to reduce procurement risk shifted after finding instances of AI exploitation and U.S. regulatory changes, raising concerns that contracting practices (commercial terms, limited audit rights, reduced testing and documentation) worsen AI’s inherent opacity. She contrasts government contracting’s “superpower” rights with transparency and competition mandates tied to taxpayer funds and discusses procurement tradeoffs between speed and oversight. Tillipman distinguishes fraud from waste and abuse, warning against conflating categories. She analyzes GSA’s proposed AI clause as overdue, overly broad, and potentially unworkable, and stresses the importance of explainability, human oversight, and due process for consequential AI use. The conversation highlights procurement as a major corruption and compliance risk area and the need to invest in people and integrated teams.

Key highlights:

  • Government vs Private Contracting
  • Procurement Blind Spots
  • AI Procurement Black Box
  • Fraud, Waste, and Abuse
  • GSA AI Clause Debate
  • Training Future Leaders

Resources:

Jessica Tillipman at GW Law

Jessica Tillipman at LinkedIn

Jessica Tillipman Website

Jessica Tillipman Publication

Buying Blind: Corruption Risk and the Erosion of Oversight in Federal AI Procurement

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.


Daily Compliance News: March 27, 2026, The Meta Moment Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you four compliance-related stories from the business world, compliance, ethics, risk management, leadership, or general interest to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network.

Top stories include:

  • The jury spanked Meta and YouTube. (WSJ)
  • Former Taipei Mayor sentenced to 17 years for corruption. (Reuters)
  • A corruption prosecution to benefit Rubio? (NYT)
  • EY sets aside record £188MM for fines and penalties. (FT)

AI in Healthcare: Five Healthcare AI Stories You Need to Know This Week – March 27, 2026

Welcome to AI in Healthcare in 5 Stories. This podcast is a Weekly Briefing of the five most important AI developments shaping healthcare, medicine, and life sciences. Each week, Tom Fox breaks down the latest stories in clinical innovation, regulation, privacy, compliance, patient safety, and operational transformation through a practical, business-focused lens. Designed for healthcare compliance professionals, executives, legal teams, clinicians, and industry leaders, the podcast moves beyond headlines to explain what each development means in the real world.

The top five stories for the week ending March 27, 2026, include:

  1. GenAI for healthcare. (The Hastings Center)
  2. Responsible AI in healthcare. (Cisco)
  3. How Oracle is transforming healthcare. (CloudWars)
  4. 1 in 3 adults is using chatbots for healthcare. (Modern Healthcare)
  5. AI in healthcare administration. (The AI Journal)

For more information on the use of AI in Compliance programs, Tom Fox’s new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.