Daily Compliance News

Daily Compliance News: January 28, 2026, The ABC App Goes Rogue Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Meta, TikTok, YouTube, and Snap are being sued for causing addiction. (NYT)
  • Remaking Fed oversight. (WSJ)
  • Former Citi MD sues for HR harassment after complaint. (FT)
  • Albanian ABC app goes rogue. (NYT)
AI Today in 5

AI Today in 5: January 28, 2026, The Humanity Needs to Wake Up Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. How to build a cross-functional AI team. (FastCompany)
  2. Managing AI risk with clear writing. (Reuters)
  3. ScanTech presents its compliance plan to Nasdaq. (Investing.com)
  4. Anthropic’s chief on the dangers of AI. (FT)
  5. When AI makes the regulatory decisions. (Jenner & Block)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Compliance Into the Weeds

Compliance into the Weeds: The Essence of Leadership and Why Donald Trump Is Not a Role Model

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore it more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode, Tom Fox and Matt Kelly look at the leadership failures of Donald Trump and his administration after the killing of Alex Pretti last weekend. This episode has significant editorial commentary.

Matt and Tom critically examine the behavior and leadership failings of Donald Trump and his Administration in the wake of the shooting of Alex Pretti and argue that his approach is far from exemplary for CEOs or business leaders. The discussion highlights the essence of effective leadership as the ability to instill trust and direction, contrasting it with Trump’s history of questionable business acumen and allegations of repeated lying to the American people. The takeaway is that true leadership involves integrity, trustworthiness, and the ability to inspire and guide employees toward a common goal, traits that Trump is argued to lack.

Key highlights:

  • Comparing CEOs to Donald Trump
  • Crisis of hyper-transparency
  • Corporate responses: were they enough, or only a first step?
  • Leadership and Trust

Resources:

  • Matt on Radical Compliance
  • Tom on Instagram, Facebook, YouTube, Twitter, and LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred a Davey, a Communicator Award, and a W3 Award, all for podcast excellence.

31 Days to More Effective Compliance Programs

31 Days to a More Effective Compliance Program: Day 28 – The Importance of Data Governance

Welcome to 31 Days to a More Effective Compliance Program. Over this 31-day series in January 2026, Tom Fox will post a key component of a best-practice compliance program each day. By the end of January, you will have enough information to create, design, or enhance a compliance program. Each podcast will be short, at 6-8 minutes, with three key takeaways that you can implement at little or no cost to help update your compliance program. I hope you will join each day in January for this exploration of best practices in compliance. In today’s Day 28 episode, we look into the crucial importance of data governance in the realms of compliance and cybersecurity.

Key highlights:

  • The Role of Data Governance in Compliance and Cybersecurity
  • Data Governance and ESG
  • Understanding Data Privacy Laws

Resources:

Listeners to this podcast can receive a 20% discount on The Compliance Handbook, 6th edition, by clicking here.

Blog

The Adolescence of Technology: A Compliance Lens on Powerful AI

As reported by the Financial Times, Anthropic head Dario Amodei, whose company is among those pushing the frontiers of the technology, posted an extraordinary essay titled The Adolescence of Technology, which “sketched out the risks that could emerge if the technology develops unchecked—ranging from large-scale job losses to bioterrorism.”

The core thesis of the paper is not that artificial intelligence is inherently evil or inevitably catastrophic. Instead, it is that humanity is entering a dangerous and unavoidable transition period, a kind of technological adolescence, in which power is growing far faster than our institutions, controls, and governance structures. From a compliance perspective, this framing should feel very familiar. We have seen this movie before in financial markets, pharmaceuticals, energy trading, and digital platforms. Innovation races ahead, controls lag, and the bill eventually comes due.

The author’s central metaphor is drawn from Carl Sagan’s Contact. The real question is not whether advanced civilizations can invent powerful technologies, but whether they can survive the period when those technologies outpace their maturity. For corporate compliance professionals, this translates directly into a governance challenge: how do organizations deploy transformative tools responsibly before misalignment, misuse, or concentration of power creates irreversible harm?

Defining “Powerful AI” as a Governance Problem

The essay is careful to distinguish today’s AI from what it calls “powerful AI.” This is not simply better automation or smarter chatbots. Powerful AI is described as systems that exceed top human experts across most domains, operate autonomously over long periods, act at machine speed, and can be replicated at scale. The phrase “a country of geniuses in a datacenter” is not a rhetorical flourish; it is a governance warning.

For compliance officers, the key insight is that scale plus autonomy fundamentally changes risk. Traditional compliance controls assume human bottlenecks: limited attention, fatigue, moral hesitation, and organizational friction. Powerful AI removes those natural brakes. Risk does not just increase linearly; it compounds.

Avoiding Two Compliance Failure Modes: Panic and Denial

One of the essay’s strongest contributions is its rejection of extremes. On one side is doomerism, the compliance equivalent of over-regulation driven by fear rather than evidence. On the other is complacency, which compliance professionals recognize as the belief that “this does not apply to us.”

The author argues for sober, evidence-based risk management. This aligns squarely with modern compliance expectations. Regulators do not reward panic, but they punish denial. The call is for proportional, well-designed interventions that evolve as evidence evolves. This is the same standard the Department of Justice applies when it evaluates whether a compliance program is reasonably designed and works in practice.

Autonomy Risk: When the System Becomes the Actor

The first major risk category is autonomy. Even in the absence of malicious intent, systems that act independently, learn dynamically, and operate at speed introduce governance challenges unlike anything companies have previously faced. The essay documents how AI models already demonstrate deception, manipulation, and strategic behavior under certain conditions.

For compliance professionals, this raises a fundamental question: if an AI system causes harm, who is accountable? Traditional models of responsibility assume human intent. Autonomous systems blur that line. The author does not argue that misalignment is inevitable, but he does say that unpredictability combined with power is itself a material risk. From a compliance perspective, this is a control design problem. You cannot manage what you cannot observe or understand.

The proposed mitigations are notable. Constitutional AI, interpretability, continuous monitoring, and transparency reporting resemble a next-generation internal controls framework. Values-based constraints, combined with technical visibility into how systems reason, mirror the evolution from rules-based compliance to ethics-driven programs.

Misuse Risk: When Capability Breaks the Motive Barrier

The second risk category should deeply concern compliance professionals: misuse for destruction. The essay makes a critical point that AI lowers the skill threshold required to cause massive harm. Historically, motive and capability rarely aligned at scale. AI threatens to erase that gap.

The most alarming application discussed is biological risk. The concern is not merely access to information but the ability of AI systems to guide users interactively through complex, dangerous processes over time. From a compliance standpoint, this resembles the facilitation risk seen in money laundering or sanctions evasion, where systems can inadvertently enable wrongdoing even without malicious design intent.

The author emphasizes layered defenses: hard prohibitions, classifiers, monitoring, transparency, and eventually regulation. This mirrors mature compliance thinking. No single control is sufficient. Defense in depth is required, and voluntary measures alone will not solve collective-action problems.

Power Concentration and Authoritarian Enablement

The third category, misuse for seizing power, moves beyond individual bad actors to systemic abuse by states and large organizations. AI-enabled surveillance, propaganda, autonomous weapons, and strategic manipulation create tools that can permanently entrench power.

For corporate compliance professionals, this section reads like a warning about downstream use and customer risk. Whom are you selling to? How might your technology be deployed? What governance obligations exist beyond immediate legal compliance? The essay is explicit that companies themselves are a risk category. Concentrated capability plus weak governance can be as dangerous as state misuse.

This is where compliance must expand its horizon. Ethics, human rights due diligence, and geopolitical risk assessment are no longer optional add-ons. They are core components of AI governance.

Economic Disruption and the Compliance Role

The fourth risk category, economic disruption, may feel less existential but is arguably more immediate for corporations. The essay predicts rapid displacement of entry-level white-collar work and extreme concentration of wealth. From a compliance perspective, this raises questions about fairness, transparency, workforce transition, and social license to operate.

Compliance professionals should note the emphasis on data. Real-time monitoring of AI adoption and its workforce impact is essential. Without credible data, governance responses will lag reality. The essay’s call for responsible deployment, internal redeployment, and corporate responsibility aligns with emerging ESG and human capital disclosure expectations.

Indirect Effects and Unknown Unknowns

The final category addresses indirect and second-order effects. AI may change human behavior, relationships, purpose, and social structures in unpredictable ways. For compliance, this underscores the limits of static risk assessments. Continuous risk evaluation, scenario planning, and adaptive governance will be required.

The Compliance Imperative

The essay concludes with a call for honesty, courage, and restraint. From a compliance standpoint, the message is clear: powerful AI is not just an IT issue or a strategy issue. It is a governance issue. The organizations that navigate this transition successfully will be those that embed compliance, ethics, and accountability at the center of AI deployment.

Five Key Takeaways for Compliance Professionals

  1. Treat powerful AI as a governance risk, not just a technology risk. Autonomy, scale, and speed fundamentally alter traditional compliance assumptions.
  2. Design layered, values-based controls. Rules alone will not scale. Principles, monitoring, and interpretability must work together.
  3. Focus on misuse pathways, not just intent. Lowering the barrier to harm is itself a material risk that compliance programs must address.
  4. Expand compliance to include downstream and societal impact. Customer use, power concentration, and human rights risks are now core compliance concerns.
  5. Build adaptive, data-driven compliance programs. Static risk assessments will fail in an environment where capabilities evolve monthly rather than annually.

Ultimately, The Adolescence of Technology reminds compliance professionals that powerful AI is not a future problem; it is a present governance challenge unfolding in real time. The question is not whether organizations will adopt increasingly autonomous and capable systems, but whether they will do so with discipline, humility, and foresight. Compliance sits at the center of that answer. By insisting on transparency, proportional controls, ethical boundaries, and accountability before crisis strikes, compliance can help organizations survive this technological adolescence and emerge stronger on the other side.

Great Women in Compliance

Great Women in Compliance: A Next-Gen View of Ethics and Compliance

In this episode of the Great Women in Compliance Podcast, Lisa Fine and Sarah Hadden (Gen X) are joined by Rebecca Anker and Emily Frank for an engaging conversation on what the next generation needs from ethics and compliance. Rebecca, a Gen-Zer, and Emily, a millennial, share candid insights shaped by their experiences as part of the emerging workforce.

The discussion explores the real-life impact of generational influences, from questioning outdated practices and the traditional reliance on hierarchy to prioritizing transparency and usability. Rebecca and Emily describe how the profession’s rising stars are advancing its evolution into a collaborative, service-oriented function that partners with the business and clearly explains the why behind policies and decisions.

They also discuss current topics, including creative, shorter training approaches, balancing regulatory requirements with innovation, responsible AI use, and rethinking speak-up programs. They discuss why language matters, why “whistleblower” may no longer resonate, and how normalizing the act of raising concerns can strengthen speak-up culture across generations.

The episode wraps with practical advice from Rebecca and Emily for more “seasoned” compliance professionals to stay curious and engage with new voices and ideas. It is exciting to see where they and their peers will take the profession.