Categories
Compliance and AI

Compliance and AI: Navigating Risk Management in the AI Era with Gaurav Kapoor

What is the role of Artificial Intelligence in compliance? What about Machine Learning? Are you using ChatGPT? These questions are just three of the many we will explore in this cutting-edge podcast series, Compliance and AI, hosted by Tom Fox, the award-winning Voice of Compliance. In this episode, Tom Fox speaks with Gaurav Kapoor, Vice Chairman, Co-Founder, and Board Member of MetricStream.

Kapoor shares his extensive professional background and the evolving landscape of risk management and compliance, emphasizing the growing importance of cybersecurity, geopolitical risks, climate impacts, and regulatory changes, all within the context of AI advancements. He also discusses how AI can streamline GRC processes, enhance decision-making capabilities, and transform traditional compliance frameworks into more strategic risk management approaches. The conversation further explores the evolving role of Chief Risk Officers and the need for a resilient, risk-aware corporate culture.

Key highlights:

  • Gaurav Kapoor’s Professional Journey
  • The Importance of July in Risk Management
  • AI’s Role in GRC
  • Emerging Risks and AI Applications
  • Counseling Boards on Risk Management
  • Top Concerns for the Rest of 2025
  • Shifting from Compliance to Risk Resilience

Resources:

MetricStream Website and on LinkedIn

Gaurav Kapoor on LinkedIn

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Regulatory Ramblings

Regulatory Ramblings: Episode 74 – Global Women in AI/Corporate Director Liability: Discretionary, Not Fiduciary with Tram Anh Nguyen and Marc I. Steinberg

In this episode, we feature two conversations exploring different frontiers of finance and technology.

In our opening Spotlight, we welcome back Marc Steinberg, professor at Southern Methodist University’s Dedman School of Law and a leading voice in securities and corporate law. His latest book, Corporate Director and Officer Liability: Discretionary, Not Fiduciary (Oxford University Press), challenges the long-standing view that corporate directors and officers should be labeled as “fiduciaries.” Steinberg examines why current liability standards — from the duty of care to the business judgment rule — are too lenient to support that label and why adopting “discretionary” as a neutral, accurate term could restore clarity and investor trust.

In the second segment, we speak with Tram Anh Nguyen, co-founder of the global digital finance education platform CFTE and Chairwoman of Global Women in AI (GWAI). She shares GWAI’s mission to close gender gaps in AI by equipping women across industries with technical knowledge, leadership skills, and mentorship, empowering them to lead in AI innovation through skills, networks, and visibility. Tram Anh emphasizes the importance of AI literacy, the barriers that hinder women from accessing AI-driven opportunities, and how GWAI facilitates connections among students, professionals, and policymakers to foster an inclusive ecosystem that shapes the future of technology.

Prof. Marc I. Steinberg is a leading expert and prodigious scholar in the field of US securities and corporate law. He is the Rupert and Lillian Radford Chair in Law and Professor of Law at SMU’s Dedman School of Law. He has served as a professor or fellow at, or lectured at, several other prominent universities, including HKU, the University of Cambridge, Oxford University, King’s College London, Moscow State University, the University of Sydney, UCLA, and the University of Pennsylvania.

Earlier in his career, he served as an attorney for the U.S. Securities and Exchange Commission (SEC) in its Division of Enforcement and Office of General Counsel. He has also been retained as an expert witness in several high-profile cases, including Enron, Martha Stewart, Mark Cuban, and the National Prescription Opioid Litigation.

Professor Steinberg is a prolific author of scholarship on US securities law, having authored approximately 150 law review articles and 50 books.

One of his recent books, Rethinking Securities Law (Oxford University Press, 2021), was awarded the Best Law Book in the United States category for 2021 by American Book Fest.

He is also editor-in-chief of The International Lawyer and The Securities Regulation Law Journal, in addition to being a member of The American Law Institute.

Tram Anh Nguyen is the chairwoman of the Global Women in AI (GWAI) group and co-founder of the London-headquartered Centre for Finance, Technology and Entrepreneurship (CFTE). GWAI is best thought of as a global community empowering women to shape the future of artificial intelligence. Its mission is to equip women across industries with the skills, networks, and visibility they need to thrive in an AI-driven world.

From aspiring professionals to seasoned leaders, GWAI connects a diverse network of innovators, learners, and changemakers. The group offers hands-on learning experiences, leadership development, mentorship opportunities, and access to global forums—all to empower women to lead with purpose, power, and passion.

Before launching CFTE in 2017, she spent nearly two decades with Standard Chartered Bank in New York and with Dresdner Kleinwort and UBS Wealth Management in London, advising ultra-high-net-worth clients and family offices. A recognized voice on the ‘future of work,’ Tram Anh partners with governments, central banks, and tier-one institutions worldwide to deliver large-scale reskilling programs.

She has also co-authored the world’s largest Fintech Job Report. As founder of the Future Skills Forum, she has positioned it as a global convener of thought leaders, policymakers, educators, and industry innovators driving the agenda of human capital transformation in the age of artificial intelligence.

A champion of lifelong learning in digital finance, Tram Anh works closely with governments, regulators, and financial institutions to build future-ready workforces.

She leads initiatives that bring industry and public sector stakeholders together to design large-scale education strategies, develop forward-looking curricula, and ensure the financial sector is equipped to thrive in an AI-driven economy. Under her leadership, CFTE has expanded its global impact, educating over 260,000 alumni in more than 130 countries and collaborating with over 1,000 industry experts to accelerate the transformation of finance through education.

Discussion:

The conversation begins with some background information on Prof. Steinberg’s book. As he puts it, “For centuries, directors and officers have been identified as fiduciaries, bearing a legal and ethical duty to act in the best interests of those they represent. However, the liability standards that ordinarily exist are too lenient to be characterized as fiduciary. This misrepresentation is detrimental to the rule of law, contravenes reasonable investor expectations, and impairs the integrity of the financial markets.”

Therefore, his book, Corporate Director and Officer Liability: Discretionary, Not Fiduciary, argues for removing fiduciary status from corporate directors and officers, instead favoring adoption of a new, more accurate term: “Corporate directors and officers are, instead, ‘discretionaries.’” Such a term, he says, more accurately portrays the status of corporate directors and officers, who are held to varying standards of liability depending on the applicable facts and circumstances.

With such a new model in mind, “the book addresses a wide range of key issues, including the duty of care, the business judgment rule, exculpation statutes, the duty of good faith, interested director transactions, derivative litigation, mergers and acquisitions, and closely held corporations.”

A thought-provoking addition to the field, Prof. Steinberg’s book provides an alternative framework that enhances corporate governance standards while protecting corporate fiduciaries from undue liability exposure.

He shares with Regulatory Ramblings host Ajay Shamdasani what prompted him to write such a book on the topic now, as well as why it is essential to reframe the role of corporate directors and officers as “discretionaries” rather than “fiduciaries,” and what purpose it serves. As Prof. Steinberg acknowledges, it will change the legal analysis and consequently, the responsibilities and liabilities of the parties concerned. He also comments on what he believes his treatise adds to the preexisting scholarship on the matter.

Following that, we chat with Tram Anh about her background and her rationale for creating GWAI—especially when similar bodies already seem to exist.

Looking ahead, she sees GWAI going far and believes its best days are yet to come. As she put it, GWAI is where inspiration meets action—creating pathways for women to lead in AI, together.

From its inception, CFTE has been concerned with inclusive education—that those who want to master the vital technologies of tomorrow should be able to do so without fearing the barriers of cost, class, or their current educational, professional, or social standing. Tram Anh said that GWAI’s creation was part of this larger, longer-term goal: the same motivation that compelled her and her partner and co-founder, Huy Nguyen Trieu, to launch CFTE.

Indeed, Tram Anh believes the CFTE has come a long way, with offices on multiple continents and numerous groups and individuals receptive to its mission of democratizing the learning of fintech and related topics.

Ultimately, she believes that more needs to be done to encourage women to enter STEM fields, enabling them to contribute to the development of AI and Web3.

The Regulatory Ramblings podcast is brought to you by The University of Hong Kong – Reg/Tech Lab, HKU-SCF Fintech Academy, Asia Global Institute, and HKU-edX Professional Certificate in Fintech, with support from the HKU Faculty of Law.


Connect with RR Podcast at:

LinkedIn: https://hk.linkedin.com/company/hkufintech 
Facebook: https://www.facebook.com/hkufintech.fb/
Instagram: https://www.instagram.com/hkufintech/ 
Twitter: https://twitter.com/HKUFinTech 
Threads: https://www.threads.net/@hkufintech
Website: https://www.hkufintech.com/regulatoryramblings 

Connect with the Compliance Podcast Network at:

LinkedIn: https://www.linkedin.com/company/compliance-podcast-network/
Facebook: https://www.facebook.com/compliancepodcastnetwork/
YouTube: https://www.youtube.com/@CompliancePodcastNetwork
Twitter: https://twitter.com/tfoxlaw
Instagram: https://www.instagram.com/voiceofcompliance/
Website: https://compliancepodcastnetwork.net

Categories
Trekking Through Compliance

Trekking Through Compliance: Episode 53 – Starship Oversight: AI Governance Lessons from The Ultimate Computer

One of Star Trek’s enduring gifts to corporate compliance professionals is its willingness to ask: What happens when innovation runs ahead of governance? Nowhere is this question more provocatively posed than in the classic episode “The Ultimate Computer.” As we enter an era where artificial intelligence is no longer science fiction but a business reality, “The Ultimate Computer” is required viewing for every compliance officer and governance professional. The episode’s hard lessons about control, accountability, and the limits of machine logic remain as relevant in today’s boardrooms as they were on Gene Roddenberry’s bridge.

Today, we explore five AI governance lessons, each grounded in unforgettable moments from “The Ultimate Computer” that every compliance team should consider as they guide their organizations through the brave new world of AI.

Lesson 1: Human Oversight Is Irreplaceable—AI Needs Accountable Stewards

Illustrated By: Dr. Richard Daystrom, the M-5’s creator, insists that his AI can run the Enterprise more efficiently than its human crew. He disables manual controls, leaving the starship and its fate entirely in M-5’s digital hands.

Compliance Lesson: Too often, organizations are tempted to turn complex decisions over to AI, assuming that algorithms can “do it all.” But “The Ultimate Computer” makes one fact clear: even the smartest AI requires ongoing, independent human oversight.

Lesson 2: Understand Your AI—Transparency and Explainability Are Non-Negotiable

Illustrated By: As M-5 takes control, it makes a series of decisions that the crew cannot understand.

Compliance Lesson: AI systems, especially those built with deep learning or complex algorithms, can be notoriously opaque. If even your developers can’t explain how decisions are made, you’re courting disaster.

Lesson 3: Build in Ethics from the Start—Programming Without Principles is Perilous

Illustrated By: Daystrom uploads his engrams, his personality and values, into M-5, believing that this will imbue the AI with human ethics.

Compliance Lesson: AI reflects not just the data it’s trained on, but the biases and blind spots of its creators. If you fail to embed clear ethical guidelines, guardrails, and values into your systems from the beginning, you risk unleashing “rogue AI” that optimizes for the wrong outcomes or perpetuates bias at scale.

Lesson 4: Test and Validate Continuously—Don’t Assume, Verify

Illustrated By: When exposed to the complexity and unpredictability of real-space maneuvers, M-5’s system flaws become evident only after it’s too late.

Compliance Lesson: No AI system should be considered “finished” on launch day. The real world is infinitely complex and ever-changing, and AI systems can degrade, drift, or encounter unanticipated circumstances.

Lesson 5: Assign Clear Responsibility—Accountability Can’t Be Delegated to a Machine

Illustrated By: Ultimately, it falls to Kirk to reassert human command and take responsibility for the ship’s fate.

Compliance Lesson: AI is a tool, not a scapegoat. Assigning accountability to a system erodes trust and undermines compliance. In the end, someone must always be responsible for decisions made “by the computer.”

Final ComplianceLog Reflections

“The Ultimate Computer” ends with Kirk reclaiming command, but not before costly lessons are learned. For today’s compliance and governance professionals, the message is clear: you can’t outsource accountability, ethics, or oversight to a machine. As AI reshapes our organizations, we must lead with principles and prepare for the unexpected.

Resources:

Excruciatingly Detailed Plot Summary by Eric W. Weisstein

MissionLogPodcast.com

Memory Alpha

Categories
Blog

The Ultimate Computer: Five Essential AI Governance Lessons from Star Trek

One of Star Trek’s enduring gifts to corporate compliance professionals is its willingness to ask: What happens when innovation runs ahead of governance? Nowhere is this question more provocatively posed than in the classic episode “The Ultimate Computer.” As Captain Kirk and the Enterprise crew test the revolutionary M-5 computer—a prototype artificial intelligence designed to automate starship operations—they find themselves on a collision course with the ethical, operational, and human dilemmas of entrusting machines with decisions without proper oversight.

As we enter an era where artificial intelligence is no longer science fiction but a business reality, “The Ultimate Computer” is required viewing for every compliance officer and governance professional. The episode’s hard lessons about control, accountability, and the limits of machine logic remain as relevant in today’s boardrooms as they were on Gene Roddenberry’s bridge.

Today, we explore five AI governance lessons, each grounded in unforgettable moments from “The Ultimate Computer” that every compliance team should consider as they guide their organizations through the brave new world of AI.

Lesson 1: Human Oversight Is Irreplaceable—AI Needs Accountable Stewards

Illustrated By: Dr. Richard Daystrom, the M-5’s creator, insists that his AI can run the Enterprise more efficiently than its human crew. He disables manual controls, leaving the starship and its fate entirely in M-5’s digital hands. When things go wrong, Kirk and his crew struggle to regain control as M-5 begins to operate independently, with catastrophic results.

Compliance Lesson: Too often, organizations are tempted to turn complex decisions over to AI, assuming that algorithms can “do it all.” But “The Ultimate Computer” makes one fact clear: even the smartest AI requires ongoing, independent human oversight. Without it, errors go unchecked and responsibility becomes dangerously diffuse.

Corporate boards, executives, and compliance officers must ensure that all AI systems, especially those with critical business or safety functions, are subject to robust oversight. This includes clearly defined roles for monitoring, intervention, and (crucially) the ability to override the machine. Establish an AI governance framework that requires periodic human review, real-time tracking, and escalation procedures for intervention. Always preserve the “off switch.”
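To make the “off switch” and escalation requirement concrete, here is a minimal human-in-the-loop sketch. All names, thresholds, and the `OversightGate` class are illustrative assumptions, not drawn from the episode or any specific governance framework:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float  # the AI system's self-reported confidence

class OversightGate:
    """Routes every AI decision through a human checkpoint (names illustrative)."""
    def __init__(self, confidence_floor: float = 0.9):
        self.confidence_floor = confidence_floor
        self.enabled = True  # the "off switch": humans can disable automation entirely

    def review(self, decision: Decision) -> str:
        if not self.enabled:
            return "manual"        # automation disabled: humans decide everything
        if decision.confidence < self.confidence_floor:
            return "escalate"      # low confidence: escalate to a human reviewer
        return "auto-approve"      # high confidence: allow, but still logged

gate = OversightGate()
print(gate.review(Decision("approve_payment", 0.95)))  # auto-approve
gate.enabled = False                                   # pull the off switch
print(gate.review(Decision("approve_payment", 0.95)))  # manual
```

The point is architectural, not algorithmic: the override path exists before the system ever runs, so “preserving the off switch” is a design property rather than an emergency improvisation.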

Lesson 2: Understand Your AI—Transparency and Explainability Are Non-Negotiable

Illustrated By: As M-5 takes control, it makes a series of decisions that the crew can’t understand. When the computer begins attacking other ships during a training exercise, killing crew members in the process, no one knows why, because M-5’s reasoning is a black box even to its creator, Daystrom.

Compliance Lesson: AI systems, especially those built with deep learning or complex algorithms, can be notoriously opaque. If even your developers can’t explain how decisions are made, you’re courting disaster. “The Ultimate Computer” demonstrates the dangers of unexplainable AI: when the stakes are high, opacity erodes trust and prevents timely intervention.

Modern AI governance must demand explainability and transparency, particularly for systems that make or recommend decisions in compliance, risk, HR, or other regulated domains. You must be able to audit, understand, and document how your AI reaches its conclusions. Mandate that all critical AI deployments include documentation of model logic, data sources, and decision-making pathways. Require “explainable AI” solutions for high-risk use cases, and build audit trails for regulatory scrutiny.
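As a rough illustration of what such an audit trail might look like at the code level, consider an append-only decision log. The `DecisionAuditLog` class and its field names are hypothetical, not a regulatory schema:

```python
import json
import time

class DecisionAuditLog:
    """Append-only record of model decisions for later review.
    Field names are illustrative, not any regulator's required format."""
    def __init__(self):
        self.entries = []

    def record(self, model_id, inputs, output, rationale):
        self.entries.append({
            "timestamp": time.time(),
            "model_id": model_id,   # which system made the decision
            "inputs": inputs,       # data the decision was based on
            "output": output,       # what the system decided
            "rationale": rationale, # human-readable explanation of the logic
        })

    def export(self) -> str:
        # Serialize for handoff to auditors or regulators
        return json.dumps(self.entries, indent=2)

log = DecisionAuditLog()
log.record("credit-scorer-v2", {"income": 52000}, "approve",
           "score 0.91 exceeded 0.85 approval threshold")
print(len(log.entries))  # 1
```

Even this toy version captures the essentials the text calls for: which model decided, on what data, with what outcome, and why, in a form that survives for scrutiny.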

Lesson 3: Build in Ethics from the Start—Programming Without Principles is Perilous

Illustrated By: Daystrom uploads his engrams—his personality and values—into M-5, believing that this will imbue the AI with human ethics. But he fails to account for his unresolved traumas and emotional instability, which are replicated and magnified by M-5, leading to dangerous, unethical decisions.

Compliance Lesson: AI reflects not just the data it’s trained on, but the biases and blind spots of its creators. If you fail to embed clear ethical guidelines, guardrails, and values into your systems from the beginning, you risk unleashing “rogue AI” that optimizes for the wrong outcomes or perpetuates bias at scale.

AI governance is not just a technical challenge; rather, it is an ethical mandate. Involve compliance, legal, DEI, and other stakeholders in the design phase to ensure your systems align with your organization’s values and regulatory obligations. Establish cross-functional AI ethics committees to review training data, test for bias, and define the acceptable uses and limitations of AI. Document decisions and revisit them regularly as your business and regulatory landscape evolve.

Lesson 4: Test and Validate Continuously—Don’t Assume, Verify

Illustrated By: Before full deployment, M-5 is tested only in limited scenarios. When exposed to the complexity and unpredictability of real-space maneuvers, the system’s flaws become evident only after it’s too late. The lack of ongoing testing and validation costs lives and nearly destroys the Enterprise.

Compliance Lesson: No AI system should be considered “finished” on launch day. The real world is infinitely complex and ever-changing, and AI systems can degrade, drift, or encounter unanticipated circumstances. “Set it and forget it” is not an option in AI governance.

Organizations must commit to ongoing validation, testing, and recalibration of all critical AI systems to ensure their reliability and effectiveness. This includes stress-testing under simulated “edge cases” and periodic audits against evolving compliance and risk standards. Develop a continuous monitoring and testing protocol for AI, including regular scenario-based drills, compliance checks, and real-world audits to ensure adequate oversight. Implement “red team” exercises to identify vulnerabilities and unintended consequences.

Lesson 5: Assign Clear Responsibility—Accountability Can’t Be Delegated to a Machine

Illustrated By: As M-5’s rampage escalates, command responsibility is unclear. Daystrom blames the system, the system blames its programming, and the Starfleet brass threatens to destroy the Enterprise. Ultimately, it falls to Kirk to reassert human command and take responsibility for the ship’s fate.

Compliance Lesson: AI is a tool, not a scapegoat. Assigning accountability to a system erodes trust and undermines compliance. In the end, someone must always be responsible for decisions made “by the computer.” Regulators, investors, and the public will not accept “the algorithm did it” as a defense.

Every AI deployment must have designated human owners—individuals or teams empowered (and required) to monitor, question, and take responsibility for outcomes. Define roles and responsibilities for AI oversight in policies and procedures. Assign an accountable executive (“AI owner”) for each critical system and ensure they have the necessary authority and training to perform their duties effectively.

Final ComplianceLog Reflections

“The Ultimate Computer” ends with Kirk reclaiming command, but not before costly lessons are learned. For today’s compliance and governance professionals, the message is clear: you can’t outsource accountability, ethics, or oversight to a machine. As AI reshapes our organizations, we must lead with principles and prepare for the unexpected.

AI may be the “ultimate computer,” but governance remains the ultimate human challenge. As you chart your course through this new frontier, let the lessons of Star Trek remind you: the best technology serves humanity, not the other way around.

Resources:

Excruciatingly Detailed Plot Summary by Eric W. Weisstein

MissionLogPodcast.com

Memory Alpha

Categories
Blog

The Compliance Guide to Designed Intelligence: Part 2 – Rethinking Governance for the Age of AI

Yesterday, I began a two-part review of the article “What Is a Designed Intelligence Environment?” in which authors Michael Schrage and David Kiron examine how enterprises must rethink their intelligence and compliance strategies to survive and thrive in the new world of AI-rich operations. I found their insights for compliance professionals both practical and transformative. In Part 1, we considered what is meant by Designed Intelligence. Today, we take a deeper dive into what it means for compliance.

For decades, we have approached compliance through policies, procedures, and periodic reviews, trusting that careful planning and diligent oversight would guide us through the challenges of regulatory change and operational risk. However, the rise of artificial intelligence has forever altered this equation. Now, the decisions that shape our organizations are made not just by people, but by increasingly autonomous machines and systems that learn, adapt, and interact in ways that can outpace human comprehension.

This new reality demands a new approach to compliance, one that goes beyond enforcing existing rules and begins to architect the very environments in which human and machine intelligence operate. The article “What Is a Designed Intelligence Environment?” offers a timely and robust framework for this challenge. Rather than treat AI as just another tool in the compliance toolbox, it urges us to rethink how knowledge, reasoning, and governance are structured across the enterprise. For the compliance professional, this shift is as profound as it is practical: our mission is no longer merely to control risk but to orchestrate intelligence itself.

Five Key Takeaways for the Compliance Professional

1. Observability Over Prediction: Embrace Real-Time Monitoring

Traditional compliance programs often rely on the classic cycle of predict, plan, execute, and measure. However, as the article emphasizes, Stephen Wolfram’s principle of computational irreducibility suggests that in highly complex, AI-rich environments, outcomes cannot be predicted; they must be observed as they occur. This is not a theoretical point; rather, it is a practical call to action for compliance.

In a world where both human and machine agents make critical decisions, compliance leaders need to build systems that provide real-time visibility into these interactions. The case of the pharmaceutical R&D pipeline illustrates this vividly: instead of forcing premature rankings of drug candidates, the company built a computational observatory, allowing emergent patterns to drive decision-making. For compliance, this means investing in tools and processes that enable continuous monitoring, immediate detection of anomalies, and dynamic feedback loops, moving from static after-the-fact audits to active, ongoing oversight.
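A toy sketch of such continuous observation, assuming a simple rolling-statistics check; the window size, z-score threshold, and `AnomalyMonitor` name are illustrative choices, not the pharmaceutical company's actual system:

```python
from collections import deque
import statistics

class AnomalyMonitor:
    """Flags observations that drift far from the recent baseline.
    A toy rolling z-score check; all thresholds are illustrative."""
    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.window = deque(maxlen=window)  # recent observations only
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        anomalous = False
        if len(self.window) >= 10:  # need a minimal baseline first
            mean = statistics.mean(self.window)
            stdev = statistics.pstdev(self.window) or 1e-9
            anomalous = abs(value - mean) / stdev > self.z_threshold
        self.window.append(value)
        return anomalous

monitor = AnomalyMonitor()
for v in [1.0, 1.1, 0.9] * 5:   # 15 routine observations build the baseline
    monitor.observe(v)
print(monitor.observe(9.0))      # a large deviation is flagged: True
```

The contrast with after-the-fact audits is the feedback loop: every new observation updates the baseline, so the detector adapts as the environment changes instead of comparing against a stale snapshot.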

2. Semantic Formalization: Make Compliance Computable

If your compliance program still relies on lengthy policy manuals and inconsistent training, it’s time to elevate it. The article introduces the concept of semantic formalization, defining key business and compliance concepts in a manner that enables both humans and machines to execute and reason with them. This isn’t just data management; it’s about ensuring every stakeholder and system shares a common, computable language for compliance.

For example, a multinational retailer struggling with customer experience (CX) consistency turned things around by building a semantic kernel, a shared ontology for complaints, resolutions, and metrics. Compliance teams must similarly formalize definitions for key terms, including risk, conflict of interest, and reporting obligations. This creates a foundation where both human and AI agents can interpret and act on compliance requirements, ensuring consistency, auditability, and scalability.
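As a minimal sketch of what a “computable” compliance vocabulary could look like: the `ComplianceTerm` structure, the deadline, and the risk levels below are invented for illustration, not drawn from the article or the retailer's ontology:

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass(frozen=True)
class ComplianceTerm:
    """One entry in a shared ontology: a term both humans and systems
    interpret identically. Values here are illustrative only."""
    name: str
    definition: str
    reporting_deadline_days: int
    default_risk: RiskLevel

ONTOLOGY = {
    "conflict_of_interest": ComplianceTerm(
        name="conflict_of_interest",
        definition="A personal interest that could bias a business decision.",
        reporting_deadline_days=5,
        default_risk=RiskLevel.HIGH,
    ),
}

def is_overdue(term_key: str, days_since_event: int) -> bool:
    # Any system (or person) applying the ontology reaches the same answer.
    return days_since_event > ONTOLOGY[term_key].reporting_deadline_days

print(is_overdue("conflict_of_interest", 7))  # True: past the 5-day deadline
```

Once a term lives in one shared, machine-readable definition rather than scattered policy prose, consistency and auditability follow almost for free: every agent queries the same source of truth.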

3. Translate Between Multiple Realities

Every department, human expert, and AI system in your organization “computes” reality differently. Financial models assess risk through simulations, operations utilize failure analysis, and AI identifies statistical correlations. The article’s exploration of rule space, the idea that these are not just different perspectives but fundamentally different computational rule sets, changes the compliance game.

Instead of forcing alignment through top-down mandates, compliance officers must become expert translators and orchestrators of change. The aerospace design review case proves the point: rather than punishing disagreement between engineers and AI, leadership created a rule mediator, mapping and reconciling the underlying rules of each party. Compliance professionals should develop frameworks and protocols to make these internal logics explicit, resolve conflicts, and coordinate decision-making without imposing artificial consensus.

4. Do Not Simply Deploy Smarter Tools, But Architect Intelligence Environments

Throwing advanced AI or analytics at compliance problems is not enough. The article argues forcefully that intelligence, whether human or machine, must be designed into the very infrastructure of the enterprise. Most organizations still treat intelligence as an emergent property of tools, rather than an intentional product of environment design.

For compliance, this means working proactively with IT, legal, and operational leaders to design systems where intelligence (learning, reasoning, and adaptation) is orchestrated by default. Real-time observability, semantic formalization, and rule-based mediation must be built into the core of your compliance framework, not added as afterthoughts. This approach enables faster, higher-quality decisions, reduces systemic risk, and enhances organizational agility.

5. From Enforcer to Orchestrator: Redefine the Compliance Role

The most important takeaway is the redefinition of what it means to be a compliance professional in the era of AI. The future of compliance is not just about enforcing standards and conducting audits; it is about orchestrating intelligence across human and machine systems. This means guiding the translation between different rules and perspectives, architecting environments for safe collaboration, and ensuring ethical execution in a world of real-time, adaptive agents.

Compliance officers must expand their skill sets by learning the basics of AI, systems engineering, and data science, developing fluency in semantic modeling, and building cross-functional relationships with technology and business leaders. By leading the design of intelligence environments, compliance professionals can become strategic partners in innovation, not just gatekeepers of risk.

As we enter a new era defined by AI, the compliance profession finds itself at a crossroads. The systems we govern are no longer straightforward, linear, or purely human—they are dynamic, adaptive, and built from the collaboration between people and machines. The article “What Is a Designed Intelligence Environment?” makes clear that our old tools—checklists, policy manuals, and after-the-fact audits—are no longer sufficient for the task ahead. Instead, we must build environments where intelligence itself is orchestrated, monitored, and governed by design.

This transformation is not about abandoning the core values of compliance, integrity, transparency, and accountability; it is about embracing new methods to uphold them in a complex world. We must shift from prediction to observability, from description to formalization, and from enforcement to orchestration. We must learn to translate and mediate between diverse ways of thinking and design infrastructures that enable human and machine intelligence to flourish safely and ethically.

Categories
Compliance Tip of the Day

Compliance Tip of the Day – Rethinking Corporate AI Governance Through Design Intelligence

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, our aim is to provide you with bite-sized, actionable tips to help you stay on top of your compliance game. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today we consider how enterprises must rethink their compliance strategies to survive and thrive in the new world of AI-rich operations.

For more on this topic, check out The Compliance Handbook: A Guide to Operationalizing Your Compliance Program, 6th edition, recently released by LexisNexis. It is available here.

Categories
Red Flags Rising

Red Flags Rising: S01 E20 – China, AI, and Export Controls – Facing a Moment of Truth

Mike and Brent follow up on Episode 19’s discussion of “stack sweeps” with a discussion of the current “moment of truth” facing trade compliance teams dealing with high-probability, catch-all enforcement risks, as explained in their recent WorldECR (Issue No. 141, July/August 2025) and Dow Jones Risk Journal article, “Anticipating the moment of truth: how to prepare for ‘high probability’ catch-all enforcement.” Specifically, they discuss the recent decision by the U.S. to allow (licensed) sales of certain advanced integrated circuits to China (00:42), their WorldECR/DJRJ article and how the Bureau of Industry & Security (BIS) guidance of May 13, 2025, which emphasized the “high probability” standard and catch-all provisions of the U.S. Export Administration Regulations (EAR), inspired the article (or at least inspired Tom Blass of WorldECR to ask them for an article) (07:10), how the underlying catch-all provisions are not “new” as of May 13, 2025 (10:26), how compliance teams can’t “zero-risk” export controls risk and need to adopt risk-based approaches (12:17), the relevance of the “inchoate” offenses under the EAR, i.e., aiding, abetting, conspiracy, evasion, acting with knowledge, and misrepresentations (13:21), the limitations of end-use and end-user certificates under the May 13, 2025 policy and guidance documents (14:47), their thoughts on the reportedly pending “50% rule” for the Entity List (18:32), the impact of the ability of malign actors, political parties, and military-intelligence actors to exercise influence even without shareholdings (19:25), why the most risky counterparties are those not on the Entity List (20:49), and the three key takeaways in their WorldECR/DJRJ article (24:09). They conclude with another installment of Brent Carlson’s “Managing Up” segment (30:28).

Resources:

WorldECR

Brent LinkedIn

Mike LinkedIn

Mike & Brent’s “Fresh Looks” Series

Categories
Blog

The Compliance Guide to Designed Intelligence: Part 1 – Rethinking Governance for the Age of AI

If there is one constant in the world of compliance, it is the reality of change. However, in 2025, change takes on a new vector: artificial intelligence, not just as a tool, but as a force reshaping how organizations think, decide, and act. In their article “What Is a Designed Intelligence Environment?” authors Michael Schrage and David Kiron examined how enterprises must rethink their intelligence and compliance strategies to survive and thrive in the new world of AI-rich operations. I found their insights for compliance professionals both practical and transformative. Today, I begin a short two-part blog post series on Designed Intelligence. In Part 1, we consider what is meant by Designed Intelligence; tomorrow, in Part 2, we take a deeper dive into what it means for compliance.

From Managing Compliance to Orchestrating Intelligence

Traditional compliance frameworks have always focused on managing risk, enforcing controls, and responding to regulatory shifts. But what happens when decision-making itself is no longer exclusively human? In a designed intelligence environment, humans and machines learn, reason, adapt, and improve together. This is not simply the automation of existing workflows; it’s the emergence of a new kind of enterprise, where “epistemic engineering”—the design of how knowledge is generated, shared, and executed—becomes the bedrock of effective compliance.

The first insight for compliance professionals is that we can no longer assume governance is solely about drawing lines around human behavior. Our job is to architect environments in which both human and machine intelligences operate responsibly and transparently, ensuring that knowledge, decisions, and accountability flow where they are needed most.

Computational Irreducibility: The End of Predictive Planning

Stephen Wolfram’s principle of computational irreducibility may sound academic, but its implications are anything but theoretical for compliance leaders. In a nutshell, this principle holds that in highly complex systems, such as those created when humans and AI interact, the future cannot be predicted without running the system in real-time. In other words, the classic compliance cycle of “predict, plan, execute, and measure” is mathematically impossible in many AI-rich contexts.

For compliance professionals, this means shifting from static policy planning to dynamic, real-time oversight. Consider an example from pharmaceutical R&D. A global company faced paralysis in prioritizing compounds for its oncology pipeline. Instead of relying on fixed rankings or endless meetings, leadership created a computational observatory: multiple agentic models simultaneously analyzed each compound from different perspectives (biological plausibility, market readiness, synthetic feasibility). Cross-model consensus and visualization, rather than managerial heuristics, guided decisions and surfaced previously hidden breakthroughs.
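The cross-model consensus idea can be illustrated with a minimal sketch. Everything here is hypothetical: the compound names, the three "perspective" scores, and the idea of ranking by mean score are illustrative stand-ins for the far richer agentic models the article describes.

```python
from statistics import mean

# Hypothetical scores (0.0-1.0) from three model perspectives.
scores = {
    "compound_a": {"biology": 0.9, "market": 0.4, "synthesis": 0.8},
    "compound_b": {"biology": 0.6, "market": 0.9, "synthesis": 0.7},
    "compound_c": {"biology": 0.5, "market": 0.5, "synthesis": 0.4},
}

def consensus_ranking(scores):
    """Rank candidates by their mean score across all perspectives,
    rather than by any single model's (or manager's) heuristic."""
    return sorted(scores, key=lambda c: mean(scores[c].values()), reverse=True)
```

Here `consensus_ranking(scores)` surfaces `compound_b` first even though no single perspective rated it highest, which is the point of letting consensus rather than one heuristic drive the decision.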

Compliance Lesson: Build for Observability, Not Just Control

In today’s world, compliance cannot rely solely on auditing after the fact. The future lies in building observability into the core of decision environments: real-time monitoring, feedback loops, and experimental frameworks that enable compliance to identify emergent risks as they arise, not just when it’s too late. This is the heart of “runtime intelligence.”
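As a concrete, deliberately tiny sketch of "runtime intelligence," the monitor below tracks a rolling window of decision outcomes and raises a flag the moment the recent error rate drifts past a threshold. The class name, window size, and threshold are illustrative assumptions, not any vendor's design.

```python
from collections import deque

class RuntimeMonitor:
    """Minimal observability loop: keep a rolling window of outcomes and
    flag when the recent error rate exceeds a tolerance threshold."""

    def __init__(self, window=100, threshold=0.1):
        self.outcomes = deque(maxlen=window)  # True = error observed
        self.threshold = threshold

    def record(self, is_error: bool) -> bool:
        """Record one outcome; return True if an emergent risk is flagged."""
        self.outcomes.append(is_error)
        error_rate = sum(self.outcomes) / len(self.outcomes)
        return error_rate > self.threshold

monitor = RuntimeMonitor(window=10, threshold=0.2)
flags = [monitor.record(e) for e in [False] * 8 + [True, True, True]]
```

The point of the sketch is the feedback loop itself: the flag fires while errors are accumulating, not in a quarterly audit after the fact.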

Semantic Formalization: Making Compliance Computable

Most compliance programs are based on documentation, training, and knowledge management. But semantic formalization, another key concept, goes much further. It requires organizations to define core business concepts (like “customer value,” “operational risk,” or “conflict of interest”) so precisely that both humans and AI agents can “compute” with them. This is not a matter of semantics for its own sake; it is about ensuring that rules, policies, and standards are unambiguously actionable by both people and machines.

For example, a multinational retailer’s use of large language models (LLMs) for customer support faced breakdowns because definitions of customer experience (CX) varied by region and role. By creating a semantic kernel, which is an enterprise ontology that maps complaints, resolution pathways, sentiment clusters, and CX metrics, the company trained its models (and its people) to reason with consistent, computable definitions. This enabled root-cause analysis and adaptive, system-wide learning that wasn’t possible in the old script-driven model.

Compliance Lesson: Define, Don’t Just Describe

Compliance teams must become architects of semantic infrastructure. That means working cross-functionally to formally define compliance concepts, risks, and obligations so that every AI, dashboard, and human team member speaks the same language, in the same way, everywhere. This is how you build “reasoning standardization” and reduce the friction, ambiguity, and risk that come with AI-driven scale.
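What "define, don't just describe" means in practice can be sketched in a few lines. The example below gives "conflict of interest" a computable definition that a human reviewer and an AI agent evaluate identically. The 5% stake threshold and the field names are illustrative assumptions, not a legal standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Engagement:
    employee: str
    counterparty: str
    personal_stake: float  # fraction of ownership held, 0.0-1.0

# Illustrative threshold only -- the point is that it is explicit and shared.
CONFLICT_STAKE_THRESHOLD = 0.05

def is_conflict_of_interest(e: Engagement, family_firms: set) -> bool:
    """A formal rule both people and machines can 'compute' with:
    a conflict exists if the stake is material or the counterparty
    is a family-connected firm."""
    return (e.personal_stake >= CONFLICT_STAKE_THRESHOLD
            or e.counterparty in family_firms)
```

Because the rule is a function rather than a paragraph of policy prose, every dashboard, LLM agent, and reviewer applies the same definition, the same way, everywhere.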

Rulial Space: Translating Between Multiple Realities

Perhaps the most disruptive insight for compliance comes from the concept of rulial space: the recognition that different “intelligences”—whether human teams, AI systems, or even other departments—operate under distinct rule sets, generating unique realities. Finance assesses risk through Monte Carlo simulations, operations analyzes it through failure mode analysis, and AI identifies it through statistical correlations. Traditional efforts to force alignment through training or incentives may be fundamentally flawed. What is needed is translation, not assimilation.

In aerospace manufacturing, for example, friction between design engineers and LLMs led to productivity-killing standoffs. Instead of forcing one side to conform to the other, leadership installed an honest mediator: an explicit layer for mapping, negotiating, and reconciling the assumptions, rules, and heuristics of both human and AI systems. This moved the organization from “compliance by enforcement” to “compliance by comprehension,” a far more powerful and sustainable model for managing both risk and innovation.

Compliance Lesson: Become a Translator, Not Just an Enforcer

The future of compliance is not just about enforcing standards but about building systems and processes that can explicitly map and translate between different rule sets: human, machine, and hybrid. This requires cognitive compilers: protocols and infrastructure for negotiating meaning, resolving conflicts, and arbitrating outputs across diverse intelligences. The result is the intelligent orchestration of smarter, safer, and more adaptive enterprises.
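One way to picture a "cognitive compiler" is as a translation layer: each function keeps its native risk vocabulary, and a small mediator maps every assessment onto one shared ordinal scale instead of forcing everyone into a single format. The department names, metrics, and cut-offs below are all illustrative assumptions.

```python
# Shared ordinal scale every "intelligence" is translated into.
COMMON_SCALE = {"low": 1, "medium": 2, "high": 3}

# Per-department translators: each maps its own native metric
# (a value-at-risk loss, a failure rate) into the common vocabulary.
TRANSLATORS = {
    "finance": lambda var_loss: (
        "high" if var_loss > 1e6 else "medium" if var_loss > 1e5 else "low"),
    "operations": lambda failure_rate: (
        "high" if failure_rate > 0.05 else "medium" if failure_rate > 0.01 else "low"),
}

def reconcile(assessments: dict) -> int:
    """Translate each department's native assessment, then take the most
    severe level -- mediation by mapping, not by forcing assimilation."""
    levels = [COMMON_SCALE[TRANSLATORS[dept](value)]
              for dept, value in assessments.items()]
    return max(levels)
```

The mediator never rewrites how finance or operations reason; it only makes their outputs commensurable, which is the "compliance by comprehension" move described above.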

Why Smarter Tools Aren’t Enough: Compliance by Design, Not Just Technology

It’s tempting to think that smarter tools or more sophisticated AI models will solve all compliance challenges. But as the article warns, deploying intelligence as automation—without rethinking the architecture of decision environments—will leave most enterprises stuck with mediocre results. Intelligence, whether human or machine, must be designed into the very infrastructure of the organization: how decisions are made, how meaning is generated, and how value and risk are understood.

For compliance professionals, this means a dramatic expansion of your remit. You must help design the runtime environment for intelligence where learning, adaptation, and ethical execution are embedded, not bolted on. This requires technical fluency, cross-disciplinary collaboration, and a willingness to challenge the old boundaries of policy, training, and audit.

Conclusion: The Compliance Opportunity in Designed Intelligence

The transition to designed intelligence environments represents both a challenge and a once-in-a-generation opportunity for compliance leaders. Those who lean in, who help architect real-time observability, semantic formalization, and rulial mediation, will become essential strategic partners in their organizations’ transformation. Those who don’t risk being left behind by systems they can neither see, steer, nor secure.

The era of “predict and control” is coming to an end. The age of “orchestrate and observe” is here. As compliance professionals, our calling is clear: to lead the design, governance, and stewardship of intelligence environments that are fit for the complexity and promise of AI. Only then can we ensure that innovation and integrity go hand in hand in the enterprises of tomorrow.

Join us tomorrow for Part 2, where we delve deeper into the compliance considerations.

Categories
Blog

Operationalizing AI for Compliance: Turning Potential into Practice

If you have spent any time around corporate compliance in the past several months, you have undoubtedly heard a great deal about artificial intelligence (AI). It is promised as a game changer, touted as the next big thing, and often presented with buzzwords that sound more like science fiction than practical business tools. Indeed, I wrote a book about its promise, Upping Your Game. However, compliance professionals consistently face one crucial question: How can we operationalize AI effectively within our compliance functions?

I chose this title because I have long advocated Operationalizing Compliance; indeed, in 2016, I published a book with just that title. Therefore, in today’s blog, we will explore precisely that: how compliance leaders can strategically integrate AI solutions into existing compliance frameworks, drive effectiveness, and transform potential into sustainable value.

Understanding AI’s Value Proposition for Compliance

Operationalizing AI begins with recognizing why AI matters in the context of compliance. Fundamentally, compliance is about managing risk through monitoring, detection, investigation, and remediation. AI excels in these core compliance activities due to its ability to process massive volumes of data rapidly, identify patterns that humans may miss, and provide predictive insights.

AI, in short, enhances your compliance team’s ability to stay ahead of risk, transforming reactive processes into proactive strategies. Consider the traditional compliance approach to monitoring. Usually reliant on sampling and periodic audits, it can leave gaps for misconduct to slip through. AI-driven continuous monitoring solutions eliminate these gaps, spotting anomalies in real-time and flagging them immediately for action.

Yet, for all its promise, AI is not a “plug and play” solution. To operationalize AI, compliance teams must approach it methodically, intentionally, and with transparent governance in place.

Step 1: Define Your Objectives Clearly

The first step in operationalizing AI for compliance is clarity of purpose. Compliance leaders must define the specific outcomes they hope to achieve through AI. Ask yourself, “What problem are we trying to solve, and why is AI a suitable solution?”

Objectives may include:

  • Real-time detection of suspicious financial transactions.
  • Automated due diligence on third-party vendors.
  • Predictive analytics to flag high-risk regions or business units.
  • Enhanced hotline management through AI-powered triage.

Articulated objectives become the roadmap guiding your AI initiative, helping you select appropriate tools and measure success effectively.

Step 2: Data Readiness and Integration

Next, compliance professionals must tackle a critical operational requirement: data readiness. AI thrives on data; thus, operationalizing AI depends on ensuring your data is accessible, reliable, secure, and comprehensive.

Data silos present a significant challenge. Compliance functions often manage fragmented data from HR systems, financial databases, third-party diligence platforms, and internal reporting channels. Integrating these data streams into a unified compliance data lake or repository is a foundational step.

A successful integration strategy includes:

  • Conducting a data inventory and assessing data quality.
  • Standardizing data formats across various systems.
  • Implementing robust data governance practices to ensure the accuracy and integrity of data.

Addressing these integration challenges upfront ensures your AI compliance solutions have high-quality fuel to drive accurate and valuable insights.
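The normalization step at the heart of that strategy can be sketched as follows. Every field name and source system here (`emp_id`, a nested `vendor` record) is a hypothetical example of fragmented silos, not a real schema.

```python
def normalize_hr(record):
    """Map an HR-system row onto the unified compliance schema."""
    return {"entity_id": record["emp_id"], "source": "hr",
            "name": record["full_name"].strip().title()}

def normalize_vendor(record):
    """Map a third-party-diligence row onto the same schema."""
    return {"entity_id": record["vendor"]["id"], "source": "vendor",
            "name": record["vendor"]["name"].strip().title()}

def build_compliance_lake(hr_rows, vendor_rows):
    """Merge the silos into one repository, with a basic quality gate."""
    lake = ([normalize_hr(r) for r in hr_rows]
            + [normalize_vendor(r) for r in vendor_rows])
    # Minimal data-governance check: every record needs an id and a name.
    assert all(r["entity_id"] and r["name"] for r in lake), "data-quality check failed"
    return lake
```

Trivial as it looks, this is the shape of the real work: agree on one schema, write one translator per silo, and gate the merged data on quality rules before any AI consumes it.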

Step 3: Choose the Right AI Technology Partners and Tools

There’s no shortage of AI vendors promising solutions tailored for compliance needs. But choosing the right partner requires thorough due diligence, evaluating both technological capability and ethical alignment.

Compliance leaders should look for partners with:

  • Demonstrable experience in corporate compliance and regulatory environments.
  • Transparent and auditable AI algorithms to ensure explainability.
  • Robust data privacy and cybersecurity frameworks.
  • Scalable solutions that evolve with regulatory demands and business needs.

Furthermore, compliance professionals should carefully pilot and test AI solutions before implementing them on a full scale. Start small by piloting the solution within a specific compliance area, such as third-party due diligence or fraud detection, and expand gradually based on proven outcomes and clear metrics.

Step 4: Build AI Ethics into Your Compliance Framework

Operationalizing AI comes with significant ethical implications, particularly regarding bias, transparency, and accountability. Compliance officers play a pivotal role in ensuring that AI systems align with a company’s values, ethics, and regulatory expectations.

An ethical AI framework includes:

  • Regular algorithmic auditing to detect and mitigate bias.
  • Transparent processes that allow for the explainability of AI-driven decisions.
  • Mechanisms to oversee and correct AI systems continuously.

AI ethics isn’t an add-on; rather, it is integral to operationalizing AI responsibly. Compliance teams should be at the forefront of this conversation, partnering with data scientists and technology leaders to integrate ethical oversight into AI deployment from the outset.
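What a "regular algorithmic audit" might compute can be shown with one of the simplest fairness checks: the demographic-parity gap, i.e., the spread in approval rates across groups. This is a sketch of one metric among many, with illustrative group labels, not a complete bias audit.

```python
def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs.
    Returns the approval rate per group."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Demographic-parity gap: the maximum difference in approval
    rates between any two groups (0.0 means perfectly equal rates)."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())
```

A recurring job that computes this gap over each month's AI-driven decisions, and escalates when it exceeds an agreed tolerance, is one concrete form the "regular algorithmic auditing" bullet above can take.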

Step 5: Training, Culture, and Change Management

Operationalizing AI also means preparing your team and organization to adapt to new ways of working. AI is not a replacement for compliance professionals; it’s a tool to augment their expertise. However, integrating AI successfully demands a culture receptive to technology-driven change.

Compliance leaders must focus on:

  • Continuous AI literacy training to ensure that compliance teams understand how to interact effectively with AI tools.
  • Establishing clear communication channels explaining AI’s role, scope, and limitations.
  • Encouraging a culture of curiosity and innovation within compliance teams, reinforcing that AI enables them to perform their roles more effectively, not replace them.

Managing organizational change proactively reduces resistance, fosters engagement, and ensures your compliance team leverages AI’s full potential.

Step 6: Establish Metrics and Measure Impact

Operationalizing AI requires rigorous performance monitoring. Compliance professionals must establish clear benchmarks and metrics to assess the effectiveness of AI continually. Typical metrics could include:

  • Reduction in false positives during transaction monitoring.
  • Improvements in detection accuracy and timeliness.
  • Reduction in compliance breaches and associated remediation costs.
  • Increased efficiency in compliance investigation processes.

These metrics provide tangible evidence of AI’s impact, allowing compliance leaders to make data-driven decisions about expanding or adjusting their AI initiatives.
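Several of the metrics above fall straight out of a confusion matrix for the alerting system. The helper below is a minimal sketch; the counts in the usage example are invented for illustration.

```python
def monitoring_metrics(tp, fp, tn, fn):
    """Benchmark metrics for an AI alerting system from confusion-matrix
    counts: true/false positives (alerts) and true/false negatives."""
    return {
        # Of the alerts raised, how many were real issues?
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        # Of the real issues, how many did we catch?
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        # Of the clean activity, how much did we wrongly flag?
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
    }
```

Tracking these three numbers release over release is how "reduction in false positives" and "improvements in detection accuracy" become measurable claims rather than vendor slideware.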

Step 7: Continuous Improvement and Adaptation

Finally, operationalizing AI is not a one-time event but an ongoing cycle of continuous improvement. AI models and technologies evolve rapidly, as do regulatory environments and compliance risks. Regularly revisiting your AI strategy ensures continued alignment with organizational needs and compliance objectives.

Embrace a feedback loop approach:

  • Regularly solicit feedback from users about the AI tool’s effectiveness.
  • Stay informed about regulatory changes that may impact AI compliance practices.
  • Update algorithms and recalibrate models to maintain accuracy and relevance.

A compliance function committed to continuous learning, adaptation, and iteration is best positioned to reap long-term benefits from AI.
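As one small, concrete form of "recalibrating models," the sketch below re-tunes an alert threshold from labeled investigator feedback: it picks the highest risk-score cutoff that still catches a target share of confirmed issues. The target-recall figure and the feedback format are illustrative assumptions.

```python
def recalibrate_threshold(scored_feedback, target_recall=0.9):
    """Choose the highest alert threshold that still catches
    target_recall of confirmed issues.

    scored_feedback: iterable of (risk_score, was_real_issue) pairs
    gathered from closed investigations."""
    positives = sorted((score for score, real in scored_feedback if real),
                       reverse=True)
    if not positives:
        return 0.0  # no confirmed issues yet; alert on everything
    keep = max(1, int(round(target_recall * len(positives))))
    return positives[keep - 1]
```

Run periodically over the latest closed cases, a routine like this keeps the model's operating point aligned with current risk, which is the feedback loop the bullets above describe.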

Turning AI from Concept to Compliance Reality (Operationalizing AI)

Operationalizing AI for compliance is not merely about adopting cutting-edge technology; it is about strategic integration, ethical oversight, proactive training, and continuous improvement. When compliance leaders approach AI thoughtfully, methodically, and responsibly, the result is transformative, turning AI’s promise into a practical reality that enhances compliance effectiveness, risk mitigation, and organizational integrity.

As compliance professionals, we stand at an exciting crossroads. AI has moved beyond theoretical potential; it is a tangible, operational reality. By clearly defining objectives, managing data effectively, choosing the right partners, embedding ethics, preparing our teams, and committing to continuous improvement, compliance can lead the way in responsibly harnessing AI’s power.

The AI revolution in compliance is here. The question is not whether compliance teams can operationalize AI but how effectively and ethically they can do so. The answer lies in the strategic, thoughtful, and deliberate steps we take today.

Categories
Innovation in Compliance

Innovation in Compliance – Allison Lagosh on Proactive Compliance Planning for Regulatory Changes

Innovation is present in many areas, and compliance professionals must not only be prepared for it but also actively embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox visits with Allison Lagosh, Head of Compliance at Saifr.ai, to discuss the current and future landscape of regulatory compliance.

With over two decades of experience in asset management, compliance, and regulatory affairs, Lagosh anticipates a pivotal shift towards AI and cryptocurrency regulations. She predicts a lighter enforcement landscape but stresses the importance of a conservative, informed approach to compliance, encouraging firms to future-proof their programs by staying abreast of regulatory changes and engaging in cross-team collaboration. Her insights, shared on platforms like the “Innovation in Compliance” podcast, highlight the necessity of strong leadership support and continuous learning to effectively navigate the dynamic regulatory environment, particularly in the realm of emerging technologies.

Key highlights:

  • Regulatory Futurism: AI and Crypto Compliance
  • Colorado’s Groundbreaking AI Safety Legislation
  • Proactive Compliance Planning for Regulatory Changes
  • Navigating Compliance Uncertainties with AI Integration
  • Regulatory Insights on the Saifr.ai Website

Resources:

Allison Lagosh on LinkedIn

Saifr.ai

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Check out my latest book, Upping Your Game—How Compliance and Risk Management Move to 2023 and Beyond, available from Amazon.com.

Innovation in Compliance was recently honored as the number 4 podcast in Risk Management by 1,000,000 Podcasts.