The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore it more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly look at the recent hack of McKinsey’s AI tool Lilli.
Tom and Matt discuss a Financial Times report that a white-hat hacker, Paul Price of one-person firm Code Wall, exploited flaws in McKinsey’s internal AI tool “Lilli” to access millions of internal chat messages, view sensitive client-related file names, and see the model weights used to train the system; McKinsey patched the vulnerabilities after disclosure. They argue that the incident highlights emerging AI risks beyond traditional cybersecurity, including AI agents autonomously scouting for targets, the possibility of attackers tampering with models to change outputs and create hard-to-detect “drift,” and confusion over who within organizations owns AI security and governance. The episode also explores the messy, inconsistent landscape of disclosure for AI-related incidents and urges compliance and GRC leaders to slow AI adoption, pressure-test systems, clarify accountability, ensure kill-switch and manual fallback capabilities, and consider the reputational fallout of an AI breach.
Key highlights:
- McKinsey AI Hack Overview
- Three Big Implications
- Model Drift and Tampering
- GRC Playbook for AI Risk
- Accountability and Kill Switches
Resources:
Matt in Radical Compliance
Tom
A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred a Davey, a Communicator Award, and a W3 Award, all for podcast excellence.