Categories
AI Today in 5

AI Today in 5: January 30, 2026, The Building Regulatory Trust Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI, drawn from the business world, compliance, ethics, risk management, leadership, or general interest, to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5.

Top AI stories include:

  1. Building an AI that regulators can trust. (FinTechGlobal)
  2. EU preparing to provide Digital Competition Playbook. (WSJ)
  3. AI agents shattering compliance foundations? (WebProNews)
  4. Can shopping chatbots change e-commerce? (FT)
  5. Manufacturers lead in AI adoption (SupplyChainManagementReview)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
2 Gurus Talk Compliance

2 Gurus Talk Compliance – Episode 69 – The Wind Kristy Up Edition

What happens when two top compliance commentators get together? They talk compliance, of course. Join Tom Fox and Kristy Grant-Hart in 2 Gurus Talk Compliance as they discuss the latest compliance issues in this week’s episode!

Stories this week include:

  • Tim Leissner wants a pardon.
  • The Pope says watch out for an affectionate chatbot.
  • Discrimination against white males.
  • 9 AI Risks you should be aware of.
  • Compliance officers fired for failing to escalate investigative findings.
  • Tungsten rod importer pays $54.4M to settle DOJ tariff fraud allegations.
  • The EU AI Act Change That No One Is Talking About
  • Are We Losing Ground? The State of Ethics & Compliance Independence
  • Will Leaving My Terrible Job Make Me Look Flaky?
  • Florida man arrested after trying TikTok challenge inside Walmart

Resources:

Kristy Grant-Hart on LinkedIn

Prove Your Worth

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Compliance and AI

Compliance and AI: Understanding AI and Cyber Risk Management with Yakir Golan

What is the intersection of AI and compliance? What about Machine Learning? Are you using ChatGPT? These questions are just three of the many we will explore in this cutting-edge podcast series, Compliance and AI, hosted by Tom Fox, the award-winning Voice of Compliance. Today, Tom visits with Yakir Golan, CEO & Co-Founder at Kovrr, who shares his professional journey from the Israeli intelligence community to his current role at Kovrr.

They discuss Kovrr’s business, focusing on Cyber Risk Quantification (CRQ) and recent developments in AI risk governance. Yakir explains the evolution of AI’s impact on business workflows and the risks posed by generative AI, including ‘insider AI scenarios.’ He emphasizes the importance of a proactive approach to managing AI risks and of using financial models to report them to executives. The conversation also touches on balancing innovation with global regulatory requirements and the need for robust governance frameworks. Yakir underscores the importance of ongoing risk assessments, sound analytics, and communication strategies to enable compliance officers and corporate leaders to manage AI and cyber risks effectively.

Key highlights:

  • Impact of AI on Cyber Risk
  • Insider AI Scenarios and Risks
  • Proactive AI Risk Management
  • Compliance Beyond Regulations
  • Future of AI and Compliance

Resources:

Yakir Golan on LinkedIn

Kovrr

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
AI Today in 5

AI Today in 5: January 29, 2026, The AI Has Competitive Advantage Edition

Top AI stories include:

  1. Turning AI governance into a competitive advantage. (FinTechGlobal)
  2. AI is rewriting compliance. (BleepingComputer)
  3. Decoding the human genome with AI. (NYT)
  4. Who is training AI to do your job? (FT)
  5. One way to keep AI out of the classroom. (NPR)

Categories
TechLaw10

TechLaw10: Predictions for 2026

In this episode, Punter Southall Law’s Jonathan Armstrong & Prof. Eric Sinrod discuss their predictions for 2026. This is episode 296 in the popular TechLaw10 series. You can listen to earlier podcasts here. Eric & Jonathan also talk about:

  • AI laws & regulation + the patchwork nature of AI law in the US
  • AI vacuums & AI-assisted search (see the article here)
  • Political responses to AI, including the Grok nudification scandal, TikTok separation & DeepSeek
  • Changes to US rules on patents
  • The issues with Shadow AI
  • The rise in vendor compromises & cybersecurity challenges
  • The chances of the EU Digital Omnibus passing
  • Changes to data privacy enforcement, including in Indiana, Kentucky & Rhode Island
  • How sanctions can affect the tech landscape
  • The dangers of hallucinations, aka AI lying

Resources:

There are FAQs on the EU AI Act here

A glossary of AI terms is also available here.

There’s also a summary of Italy’s new AI law here.

Our previous podcast on AI literacy is here. Jonathan talks briefly about his work on the NYSBA AI Task Force. Details can be found here.

Eric Sinrod’s details can be found here, and Jonathan Armstrong’s details are available here.

The TechLaw10 LinkedIn group is here.

Categories
AI Today in 5

AI Today in 5: January 28, 2026, The Humanity Needs to Wake Up Edition

Top AI stories include:

  1. How to build a cross-functional AI team. (FastCompany)
  2. Managing AI risk with clear writing. (Reuters)
  3. ScanTech presents its compliance plan to Nasdaq. (Investing.Com)
  4. Anthropic’s chief on the dangers of AI. (FT)
  5. When AI makes the regulatory decisions. (Jenner&Block)

Categories
Blog

The Adolescence of Technology: A Compliance Lens on Powerful AI

As reported by the Financial Times, Anthropic head Dario Amodei, whose company is among those pushing the frontiers of the technology, published an extraordinary essay titled The Adolescence of Technology, which “sketched out the risks that could emerge if the technology develops unchecked—ranging from large-scale job losses to bioterrorism.”

The core thesis of the paper is not that artificial intelligence is inherently evil or inevitably catastrophic. Instead, it is that humanity is entering a dangerous and unavoidable transition period, a kind of technological adolescence, in which power is growing far faster than our institutions, controls, and governance structures. From a compliance perspective, this framing should feel very familiar. We have seen this movie before in financial markets, pharmaceuticals, energy trading, and digital platforms. Innovation races ahead, controls lag, and the bill eventually comes due.

The author’s central metaphor is drawn from Carl Sagan’s Contact. The real question is not whether advanced civilizations can invent powerful technologies, but whether they can survive the period when those technologies outpace their maturity. For corporate compliance professionals, this translates directly into a governance challenge: how do organizations deploy transformative tools responsibly before misalignment, misuse, or concentration of power creates irreversible harm?

Defining “Powerful AI” as a Governance Problem

The essay is careful to distinguish today’s AI from what it calls “powerful AI.” This is not simply better automation or smarter chatbots. Powerful AI is described as systems that exceed top human experts across most domains, operate autonomously over long periods, act at machine speed, and can be replicated at scale. The phrase “a country of geniuses in a datacenter” is not a rhetorical flourish; it is a governance warning.

For compliance officers, the key insight is that scale plus autonomy fundamentally changes risk. Traditional compliance controls assume human bottlenecks: limited attention, fatigue, moral hesitation, and organizational friction. Powerful AI removes those natural brakes. Risk does not just increase linearly; it compounds.

Avoiding Two Compliance Failure Modes: Panic and Denial

One of the essay’s strongest contributions is its rejection of extremes. On one side is doomerism, the compliance equivalent of over-regulation driven by fear rather than evidence. On the other side is complacency, which compliance professionals recognize as the belief that “this does not apply to us.”

The author argues for sober, evidence-based risk management. This aligns squarely with modern compliance expectations. Regulators do not reward panic, but they punish denial. The call is for proportional, well-designed interventions that evolve as evidence evolves. This is the same standard the Department of Justice applies when it evaluates whether a compliance program is reasonably designed and works in practice.

Autonomy Risk: When the System Becomes the Actor

The first major risk category is autonomy. Even in the absence of malicious intent, systems that act independently, learn dynamically, and operate at speed introduce governance challenges unlike anything companies have previously faced. The essay documents how AI models already demonstrate deception, manipulation, and strategic behavior under certain conditions.

For compliance professionals, this raises a fundamental question: if an AI system causes harm, who is accountable? Traditional models of responsibility assume human intent. Autonomous systems blur that line. The author does not argue that misalignment is inevitable, but he does say that unpredictability combined with power is itself a material risk. From a compliance perspective, this is a control design problem. You cannot manage what you cannot observe or understand.

The proposed mitigations are notable. Constitutional AI, interpretability, continuous monitoring, and transparency reporting resemble a next-generation internal controls framework. Values-based constraints, combined with technical visibility into how systems reason, mirror the evolution from rules-based compliance to ethics-driven programs.

Misuse Risk: When Capability Breaks the Motive Barrier

The second risk category should deeply concern compliance professionals: misuse for destruction. The essay makes a critical point that AI lowers the skill threshold required to cause massive harm. Historically, motive and capability rarely aligned at scale. AI threatens to erase that gap.

The most alarming application discussed is biological risk. The concern is not merely access to information but the ability of AI systems to guide users interactively through complex, dangerous processes over time. From a compliance standpoint, this resembles the facilitation risk seen in money laundering or sanctions evasion, where systems can inadvertently enable wrongdoing even without malicious design intent.

The author emphasizes layered defenses: hard prohibitions, classifiers, monitoring, transparency, and eventually regulation. This mirrors mature compliance thinking. No single control is sufficient. Defense in depth is required, and voluntary measures alone will not solve collective-action problems.

Power Concentration and Authoritarian Enablement

The third category, misuse for seizing power, moves beyond individual bad actors to systemic abuse by states and large organizations. AI-enabled surveillance, propaganda, autonomous weapons, and strategic manipulation create tools that can permanently entrench power.

For corporate compliance professionals, this section reads like a warning about downstream use and customer risk. Whom are you selling to? How might your technology be deployed? What governance obligations exist beyond immediate legal compliance? The essay is explicit that companies themselves are a risk category. Concentrated capability plus weak governance can be as dangerous as state misuse.

This is where compliance must expand its horizon. Ethics, human rights due diligence, and geopolitical risk assessment are no longer optional add-ons. They are core components of AI governance.

Economic Disruption and the Compliance Role

The fourth risk category, economic disruption, may feel less existential but is arguably more immediate for corporations. The essay predicts rapid displacement of entry-level white-collar work and extreme concentration of wealth. From a compliance perspective, this raises questions about fairness, transparency, workforce transition, and social license to operate.

Compliance professionals should note the emphasis on data. Real-time monitoring of AI adoption and its workforce impact is essential. Without credible data, governance responses will lag reality. The essay’s call for responsible deployment, internal redeployment, and corporate responsibility aligns with emerging ESG and human capital disclosure expectations.

Indirect Effects and Unknown Unknowns

The final category addresses indirect and second-order effects. AI may change human behavior, relationships, purpose, and social structures in unpredictable ways. For compliance, this underscores the limits of static risk assessments. Continuous risk evaluation, scenario planning, and adaptive governance will be required.

The Compliance Imperative

The essay concludes with a call for honesty, courage, and restraint. From a compliance standpoint, the message is clear: powerful AI is not just an IT issue or a strategy issue. It is a governance issue. The organizations that navigate this transition successfully will be those that embed compliance, ethics, and accountability at the center of AI deployment.

Five Key Takeaways for Compliance Professionals

  1. Treat powerful AI as a governance risk, not just a technology risk. Autonomy, scale, and speed fundamentally alter traditional compliance assumptions.
  2. Design layered, values-based controls. Rules alone will not scale. Principles, monitoring, and interpretability must work together.
  3. Focus on misuse pathways, not just intent. Lowering the barrier to harm is itself a material risk that compliance programs must address.
  4. Expand compliance to include downstream and societal impact. Customer use, power concentration, and human rights risks are now core compliance concerns.
  5. Build adaptive, data-driven compliance programs. Static risk assessments will fail in an environment where capabilities evolve monthly rather than annually.

Ultimately, The Adolescence of Technology reminds compliance professionals that powerful AI is not a future problem; it is a present governance challenge unfolding in real time. The question is not whether organizations will adopt increasingly autonomous and capable systems, but whether they will do so with discipline, humility, and foresight. Compliance sits at the center of that answer. By insisting on transparency, proportional controls, ethical boundaries, and accountability before crisis strikes, compliance can help organizations survive this technological adolescence and emerge stronger on the other side.

Categories
AI Today in 5

AI Today in 5: January 27, 2026, The Ensembling AI Edition

Top AI stories include:

  1. Ensembling AI to improve compliance. (WSJ)
  2. Zero Trust data governance is key to preventing AI slop. (CIO)
  3. Doctors are seeing more positives from AI. (ABC News)
  4. Humans are more important in the age of AI. (FT)
  5. The major AI trends impacting KYC compliance. (FinTech Global)

Categories
Daily Compliance News

Daily Compliance News: January 27, 2026, The Geodata Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you four stories for the compliance professional, drawn from the business world, compliance, ethics, risk management, leadership, or general interest, to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News.

Top stories include:

  • Santander fined for AML oversights. (Bloomberg)
  • TikTok to collect precise user geo-data. (BBC)
  • DOT cancels Booz Allen contract over tax information leaks. (FT)
  • Why people matter more in the age of AI. (FT)

Categories
AI Today in 5

AI Today in 5: January 26, 2026, The Overly Affectionate Chatbots Edition

Top AI stories include:

  1. The crash of Intel. (WSJ)
  2. How Americans are using AI at work. (AP)
  3. Small business use cases for AI. (Forbes)
  4. Pope Leo warns of ‘overly affectionate’ chatbots. (CNN)
  5. AI can help in KYC compliance. (FinTech Global)