Categories
AI Today in 5

AI Today in 5: August 8, 2025, The Don’t Wait Episode

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories about AI from the worlds of business, compliance, ethics, risk management, leadership, or general interest.

For more information on the use of AI in Compliance programs, Tom Fox’s new book is Upping Your Game. You can purchase a copy of the book on Amazon.com.

Categories
Everything Compliance

Everything Compliance: Episode 158, The No to Corruption in Ukraine Edition

Welcome to this edition of the award-winning Everything Compliance. In this episode, we have the quartet of Matt Kelly, Jonathan Marks, and Jonathan Armstrong, joined by Tom Fox, the Compliance Evangelist, who sits in as both host and guest this week.

1. Matt Kelly looks at a couple of recent enforcement actions and what they may portend for enforcement under the Trump Administration. He shouts out to the people of Ukraine for fighting against corruption and rants about the DOJ cover-up of the Epstein files.

2. Jonathan Marks considers the leadership lessons from the recent imbroglio involving the NFL Players Association. He shouts out to Alexsys Thompson and her book, The Power of a Graceful Leader.

3. Jonathan Armstrong considers the new UK Failure to Prevent Fraud offense and highlights the city of Berlin and the people of Germany, who have taken ownership of their role in WWII.

4. Tom Fox looks at AI governance lessons through the lens of Star Trek TOS episode The Ultimate Computer and shouts out to the Lincoln Center Starbucks in NYC for supporting the Texas Hill Country and making him a part of its 5:30 AM family.

The host, producer, and sometime panelist of Everything Compliance is Tom Fox, the Voice of Compliance. He can be reached at tfox@tfoxlaw.com.  The award-winning Everything Compliance is a part of the Compliance Podcast Network.

Categories
Trekking Through Compliance

Trekking Through Compliance: Episode 53 – Starship Oversight: AI Governance Lessons from The Ultimate Computer

One of Star Trek’s enduring gifts to corporate compliance professionals is its willingness to ask: What happens when innovation runs ahead of governance? Nowhere is this question more provocatively posed than in the classic episode “The Ultimate Computer.” As we enter an era where artificial intelligence is no longer science fiction but a business reality, “The Ultimate Computer” is required viewing for every compliance officer and governance professional. The episode’s hard lessons about control, accountability, and the limits of machine logic remain as relevant in today’s boardrooms as they were on Gene Roddenberry’s bridge.

Today, we explore five AI governance lessons, each grounded in unforgettable moments from “The Ultimate Computer” that every compliance team should consider as they guide their organizations through the brave new world of AI.

Lesson 1: Human Oversight Is Irreplaceable—AI Needs Accountable Stewards

Illustrated By: Dr. Richard Daystrom, the M-5’s creator, insists that his AI can run the Enterprise more efficiently than its human crew. He disables manual controls, leaving the starship and its fate entirely in M-5’s digital hands.

Compliance Lesson: Too often, organizations are tempted to turn complex decisions over to AI, assuming that algorithms can “do it all.” But “The Ultimate Computer” makes one fact clear: even the smartest AI requires ongoing, independent human oversight.

Lesson 2: Understand Your AI—Transparency and Explainability Are Non-Negotiable

Illustrated By: As M-5 takes control, it makes a series of decisions that the crew cannot understand.

Compliance Lesson: AI systems, especially those built with deep learning or complex algorithms, can be notoriously opaque. If even your developers can’t explain how decisions are made, you’re courting disaster.

Lesson 3: Build in Ethics from the Start—Programming Without Principles is Perilous

Illustrated By: Daystrom uploads his engrams, his personality and values, into M-5, believing that this will imbue the AI with human ethics.

Compliance Lesson: AI reflects not just the data it’s trained on, but the biases and blind spots of its creators. If you fail to embed clear ethical guidelines, guardrails, and values into your systems from the beginning, you risk unleashing “rogue AI” that optimizes for the wrong outcomes or perpetuates bias at scale.

Lesson 4: Test and Validate Continuously—Don’t Assume, Verify

Illustrated By: When exposed to the complexity and unpredictability of real-space maneuvers, M-5’s system flaws become evident only after it’s too late.

Compliance Lesson: No AI system should be considered “finished” on launch day. The real world is infinitely complex and ever-changing, and AI systems can degrade, drift, or encounter unanticipated circumstances.

Lesson 5: Assign Clear Responsibility—Accountability Can’t Be Delegated to a Machine

Illustrated By: Ultimately, it falls to Kirk to reassert human command and take responsibility for the ship’s fate.

Compliance Lesson: AI is a tool, not a scapegoat. Assigning accountability to a system erodes trust and undermines compliance. In the end, someone must always be responsible for decisions made “by the computer.”

Final ComplianceLog Reflections

“The Ultimate Computer” ends with Kirk reclaiming command, but not before costly lessons are learned. For today’s compliance and governance professionals, the message is clear: you can’t outsource accountability, ethics, or oversight to a machine. As AI reshapes our organizations, we must lead with principles and prepare for the unexpected.

Resources:

Excruciatingly Detailed Plot Summary by Eric W. Weisstein

MissionLogPodcast.com

Memory Alpha

Categories
Blog

The Ultimate Computer: Five Essential AI Governance Lessons from Star Trek

One of Star Trek’s enduring gifts to corporate compliance professionals is its willingness to ask: What happens when innovation runs ahead of governance? Nowhere is this question more provocatively posed than in the classic episode “The Ultimate Computer.” As Captain Kirk and the Enterprise crew test the revolutionary M-5 computer—a prototype artificial intelligence designed to automate starship operations—they find themselves on a collision course with the ethical, operational, and human dilemmas of entrusting machines with decisions without proper oversight.

As we enter an era where artificial intelligence is no longer science fiction but a business reality, “The Ultimate Computer” is required viewing for every compliance officer and governance professional. The episode’s hard lessons about control, accountability, and the limits of machine logic remain as relevant in today’s boardrooms as they were on Gene Roddenberry’s bridge.

Today, we explore five AI governance lessons, each grounded in unforgettable moments from “The Ultimate Computer” that every compliance team should consider as they guide their organizations through the brave new world of AI.

Lesson 1: Human Oversight Is Irreplaceable—AI Needs Accountable Stewards

Illustrated By: Dr. Richard Daystrom, the M-5’s creator, insists that his AI can run the Enterprise more efficiently than its human crew. He disables manual controls, leaving the starship and its fate entirely in M-5’s digital hands. When things go wrong, Kirk and his crew struggle to regain control as M-5 begins to operate independently, with catastrophic results.

Compliance Lesson: Too often, organizations are tempted to turn complex decisions over to AI, assuming that algorithms can “do it all.” But “The Ultimate Computer” makes one fact clear: even the smartest AI requires ongoing, independent human oversight. Without it, errors go unchecked and responsibility becomes dangerously diffuse.

Corporate boards, executives, and compliance officers must ensure that all AI systems, especially those with critical business or safety functions, are subject to robust oversight. This includes clearly defined roles for monitoring, intervention, and (crucially) the ability to override the machine. Establish an AI governance framework that requires periodic human review, real-time tracking, and escalation procedures for intervention. Always preserve the “off switch.”
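A minimal sketch may make this concrete. The Python below is purely illustrative, not drawn from any standard framework: the class name, the 0.7 escalation threshold, and the log format are invented for this example. The point is that escalation rules and the “off switch” can be explicit, testable logic rather than aspiration.

```python
from dataclasses import dataclass, field

@dataclass
class OversightGate:
    """Illustrative human-oversight wrapper around an AI decision system."""
    escalation_threshold: float = 0.7   # risk score above which a human must decide
    enabled: bool = True                # the "off switch" -- always preserved
    audit_log: list = field(default_factory=list)

    def decide(self, case_id: str, ai_recommendation: str, risk_score: float) -> str:
        if not self.enabled:
            outcome = "halted: system disabled by human operator"
        elif risk_score >= self.escalation_threshold:
            outcome = "escalated: human review required"
        else:
            outcome = f"auto-approved: {ai_recommendation}"
        # Every decision, automated or escalated, leaves an audit record.
        self.audit_log.append((case_id, risk_score, outcome))
        return outcome

gate = OversightGate()
print(gate.decide("case-001", "approve vendor", 0.2))   # routine case, AI proceeds
print(gate.decide("case-002", "approve vendor", 0.9))   # high risk, human takes over
gate.enabled = False                                    # exercising the off switch
print(gate.decide("case-003", "approve vendor", 0.1))
```

Note the design choice: even a halted or escalated decision is logged, so the oversight record itself is complete.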

Lesson 2: Understand Your AI—Transparency and Explainability Are Non-Negotiable

Illustrated By: As M-5 takes control, it makes a series of decisions that the crew can’t understand. When the computer begins attacking other ships during a training exercise, killing crew members in the process, no one knows why, because M-5’s reasoning is a black box even to its creator, Daystrom.

Compliance Lesson: AI systems, especially those built with deep learning or complex algorithms, can be notoriously opaque. If even your developers can’t explain how decisions are made, you’re courting disaster. “The Ultimate Computer” demonstrates the dangers of unexplainable AI: when the stakes are high, opacity erodes trust and prevents timely intervention.

Modern AI governance must demand explainability and transparency, particularly for systems that make or recommend decisions in compliance, risk, HR, or other regulated domains. You must be able to audit, understand, and document how your AI reaches its conclusions. Mandate that all critical AI deployments include documentation of model logic, data sources, and decision-making pathways. Require “explainable AI” solutions for high-risk use cases, and build audit trails for regulatory scrutiny.
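As one hedged illustration of what an audit-trail entry might capture, the sketch below records, for each AI decision, the model that ran, the inputs, the output, and the human-readable factors behind it. The field names and the hash-chaining scheme are my own invention, not a regulatory requirement; chaining each entry to the previous one simply makes after-the-fact tampering detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_decision(model_id, inputs, output, top_factors, trail):
    """Append one explainable-decision record to an audit trail (illustrative)."""
    prev_hash = trail[-1]["hash"] if trail else "genesis"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "inputs": inputs,
        "output": output,
        "top_factors": top_factors,   # the human-readable "why"
        "prev_hash": prev_hash,       # links this entry to the one before it
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    trail.append(entry)
    return entry

trail = []
record_decision("credit-model-v3", {"income": 52000}, "decline",
                ["debt-to-income ratio", "short credit history"], trail)
record_decision("credit-model-v3", {"income": 91000}, "approve",
                ["low utilization"], trail)
```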

Lesson 3: Build in Ethics from the Start—Programming Without Principles is Perilous

Illustrated By: Daystrom uploads his engrams—his personality and values—into M-5, believing that this will imbue the AI with human ethics. But he fails to account for his unresolved traumas and emotional instability, which are replicated and magnified by M-5, leading to dangerous, unethical decisions.


Compliance Lesson: AI reflects not just the data it’s trained on, but the biases and blind spots of its creators. If you fail to embed clear ethical guidelines, guardrails, and values into your systems from the beginning, you risk unleashing “rogue AI” that optimizes for the wrong outcomes or perpetuates bias at scale.

AI governance is not just a technical challenge; rather, it is an ethical mandate. Involve compliance, legal, DEI, and other stakeholders in the design phase to ensure your systems align with your organization’s values and regulatory obligations. Establish cross-functional AI ethics committees to review training data, test for bias, and define the acceptable uses and limitations of AI. Document decisions and revisit them regularly as your business and regulatory landscape evolve.

Lesson 4: Test and Validate Continuously—Don’t Assume, Verify

Illustrated By: Before full deployment, M-5 is tested only in limited scenarios. When exposed to the complexity and unpredictability of real-space maneuvers, the system’s flaws become evident only after it’s too late. The lack of ongoing testing and validation costs lives and nearly destroys the Enterprise.

Compliance Lesson: No AI system should be considered “finished” on launch day. The real world is infinitely complex and ever-changing, and AI systems can degrade, drift, or encounter unanticipated circumstances. “Set it and forget it” is not an option in AI governance.

Organizations must commit to ongoing validation, testing, and recalibration of all critical AI systems to ensure their reliability and effectiveness. This includes stress-testing under simulated “edge cases” and periodic audits against evolving compliance and risk standards. Develop a continuous monitoring and testing protocol for AI, including regular scenario-based drills, compliance checks, and real-world audits to ensure adequate oversight. Implement “red team” exercises to identify vulnerabilities and unintended consequences.
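As a deliberately naive sketch of what continuous validation can look like in code, the check below alerts when live model scores stray from the baseline distribution they were validated against. Production monitoring would use richer tests (population stability index, Kolmogorov–Smirnov, and so on); the scores and the three-standard-error tolerance here are invented for illustration.

```python
import statistics

def drift_alert(baseline, live, tolerance=3.0):
    """Flag drift when the live mean sits more than `tolerance`
    standard errors away from the baseline mean (illustrative only)."""
    base_mean = statistics.mean(baseline)
    base_sd = statistics.stdev(baseline)
    standard_error = base_sd / (len(live) ** 0.5)
    z = abs(statistics.mean(live) - base_mean) / standard_error
    return z > tolerance

# Fabricated scores from a validated launch window vs. two live windows.
baseline = [0.48, 0.52, 0.50, 0.47, 0.53, 0.51, 0.49, 0.50]
stable   = [0.49, 0.51, 0.50, 0.48]
drifted  = [0.71, 0.74, 0.69, 0.73]

print(drift_alert(baseline, stable))   # no alert: live scores match baseline
print(drift_alert(baseline, drifted))  # alert: distribution has shifted
```

Even a check this simple turns “set it and forget it” into a scheduled, auditable control.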

Lesson 5: Assign Clear Responsibility—Accountability Can’t Be Delegated to a Machine

Illustrated By: As M-5’s rampage escalates, command responsibility is unclear. Daystrom blames the system, the system blames its programming, and the Starfleet brass threatens to destroy the Enterprise. Ultimately, it falls to Kirk to reassert human command and take responsibility for the ship’s fate.

Compliance Lesson: AI is a tool, not a scapegoat. Assigning accountability to a system erodes trust and undermines compliance. In the end, someone must always be responsible for decisions made “by the computer.” Regulators, investors, and the public will not accept “the algorithm did it” as a defense.

Every AI deployment must have designated human owners—individuals or teams empowered (and required) to monitor, question, and take responsibility for outcomes. Define roles and responsibilities for AI oversight in policies and procedures. Assign an accountable executive (“AI owner”) for each critical system and ensure they have the necessary authority and training to perform their duties effectively.
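A registry of accountable owners can itself be a small piece of code. The sketch below is hypothetical (the system names, titles, and review cadences are invented): its one rule is that every deployed system must resolve to a named human owner, and a lookup that fails is an error, not a shrug.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AISystemRecord:
    """One registry row per critical AI system (all values illustrative)."""
    system: str
    owner: str          # the accountable executive -- a person, never "the model"
    risk_tier: str      # e.g., "high" triggers a stricter review cadence
    review_days: int    # maximum days between human reviews

registry = [
    AISystemRecord("resume-screener", "VP People Ops", "high", 30),
    AISystemRecord("invoice-matcher", "Controller", "low", 180),
]

def owner_of(system_name):
    """Every deployment must resolve to a named human owner."""
    for rec in registry:
        if rec.system == system_name:
            return rec.owner
    raise LookupError(f"No accountable owner registered for {system_name!r}")

print(owner_of("resume-screener"))
```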

Final ComplianceLog Reflections

“The Ultimate Computer” ends with Kirk reclaiming command, but not before costly lessons are learned. For today’s compliance and governance professionals, the message is clear: you can’t outsource accountability, ethics, or oversight to a machine. As AI reshapes our organizations, we must lead with principles and prepare for the unexpected.

AI may be the “ultimate computer,” but governance remains the ultimate human challenge. As you chart your course through this new frontier, let the lessons of Star Trek remind you: the best technology serves humanity, not the other way around.

Resources:

Excruciatingly Detailed Plot Summary by Eric W. Weisstein

MissionLogPodcast.com

Memory Alpha

Categories
Blog

The Compliance Guide to Designed Intelligence: Part 2 – Rethinking Governance for the Age of AI

Yesterday, I began a two-part review of the article “What Is a Designed Intelligence Environment?”, in which authors Michael Schrage and David Kiron examine how enterprises must rethink their intelligence and compliance strategies to survive and thrive in the new world of AI-rich operations. I found their insights for compliance professionals both practical and transformative. In Part 1, we considered what Designed Intelligence is. Today, in Part 2, we take a deeper dive into what it means for compliance.

For decades, we have approached compliance through policies, procedures, and periodic reviews, trusting that careful planning and diligent oversight would guide us through the challenges of regulatory change and operational risk. However, the rise of artificial intelligence has forever altered this equation. Now, the decisions that shape our organizations are made not just by people, but by increasingly autonomous machines and systems that learn, adapt, and interact in ways that can outpace human comprehension.

This new reality demands a new approach to compliance, one that goes beyond enforcing existing rules and begins to architect the very environments in which human and machine intelligence operate. The article “What Is a Designed Intelligence Environment?” offers a timely and robust framework for this challenge. Rather than treat AI as just another tool in the compliance toolbox, it urges us to rethink how knowledge, reasoning, and governance are structured across the enterprise. For the compliance professional, this shift is as profound as it is practical: our mission is no longer simply to control risk but to orchestrate intelligence itself.

Five Key Takeaways for the Compliance Professional

1. Observability Over Prediction: Embrace Real-Time Monitoring

Traditional compliance programs often rely on the classic cycle of predict, plan, execute, and measure. However, as the article emphasizes, Stephen Wolfram’s principle of computational irreducibility suggests that in highly complex, AI-rich environments, outcomes cannot be predicted; they must be observed as they occur. This is not a theoretical point; rather, it is a practical call to action for compliance.

In a world where both human and machine agents make critical decisions, compliance leaders need to build systems that provide real-time visibility into these interactions. The case of the pharmaceutical R&D pipeline illustrates this vividly: instead of forcing premature rankings of drug candidates, the company built a computational observatory, allowing emergent patterns to drive decision-making. For compliance, this means investing in tools and processes that enable continuous monitoring, immediate detection of anomalies, and dynamic feedback loops, moving from static after-the-fact audits to active, ongoing oversight.

2. Semantic Formalization: Make Compliance Computable

If your compliance program still relies on lengthy policy manuals and inconsistent training, it’s time to elevate it. The article introduces the concept of semantic formalization, defining key business and compliance concepts in a manner that enables both humans and machines to execute and reason with them. This isn’t just data management; it’s about ensuring every stakeholder and system shares a common, computable language for compliance.

For example, a multinational retailer struggling with customer experience (CX) consistency turned things around by building a semantic kernel, a shared ontology for complaints, resolutions, and metrics. Compliance teams must similarly formalize definitions for key terms, including risk, conflict of interest, and reporting obligations. This creates a foundation where both human and AI agents can interpret and act on compliance requirements, ensuring consistency, auditability, and scalability.

3. Translate Between Multiple Realities

Every department, human expert, and AI system in your organization “computes” reality differently. Financial models assess risk through simulations, operations utilize failure analysis, and AI identifies statistical correlations. The article’s exploration of rulial space, the idea that these are not just different perspectives but fundamentally different computational rule sets, changes the compliance game.

Instead of forcing alignment through top-down mandates, compliance officers must become expert translators and orchestrators of change. The aerospace design review case proves the point: rather than punishing disagreement between engineers and AI, leadership created a rulial mediator, mapping and reconciling the underlying rules of each party. Compliance professionals should develop frameworks and protocols to make these internal logics explicit, resolve conflicts, and coordinate decision-making without imposing artificial consensus.
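A toy mediator between two rule sets might look like the sketch below. The verdict labels and mapping tables are invented for illustration; the design point is that each system keeps its own vocabulary, the mediator translates both onto a shared scale, and disagreement is escalated to a human rather than silently overwritten by either side.

```python
# Two "intelligences" that compute risk under different rule sets,
# each with its own vocabulary (labels are fabricated for this example).
ENGINEER_SCALE = {"within-margin": 1, "review": 2, "out-of-spec": 3}
MODEL_SCALE    = {"low-anomaly": 1, "moderate-anomaly": 2, "high-anomaly": 3}

def mediate(engineer_verdict, model_verdict):
    """Translate both verdicts onto a shared 1-3 severity scale,
    then surface (rather than suppress) any disagreement."""
    e = ENGINEER_SCALE[engineer_verdict]
    m = MODEL_SCALE[model_verdict]
    if e == m:
        return ("agree", e)
    # Conflicting realities go to a human; neither rule set "wins" by fiat.
    return ("escalate-to-human", max(e, m))

print(mediate("within-margin", "low-anomaly"))    # both rule sets concur
print(mediate("within-margin", "high-anomaly"))   # translated, then escalated
```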

4. Do Not Simply Deploy Smarter Tools, But Architect Intelligence Environments

Throwing advanced AI or analytics at compliance problems is not enough. The article argues forcefully that intelligence, whether human or machine, must be designed into the very infrastructure of the enterprise. Most organizations still treat intelligence as an emergent property of tools, rather than an intentional product of environment design.

For compliance, this means working proactively with IT, legal, and operational leaders to design systems where intelligence (learning, reasoning, and adaptation) is orchestrated by default. Real-time observability, semantic formalization, and rule-based mediation must be built into the core of your compliance framework, not added as afterthoughts. This approach enables faster, higher-quality decisions, reduces systemic risk, and enhances organizational agility.

5. From Enforcer to Orchestrator: Redefine the Compliance Role

The most important takeaway is the redefinition of what it means to be a compliance professional in the era of AI. The future of compliance is not just about enforcing standards and conducting audits; it is about orchestrating intelligence across human and machine systems. This means guiding the translation between different rules and perspectives, architecting environments for safe collaboration, and ensuring ethical execution in a world of real-time, adaptive agents.

Compliance officers must expand their skill sets by learning the basics of AI, systems engineering, and data science, developing fluency in semantic modeling, and building cross-functional relationships with technology and business leaders. By leading the design of intelligence environments, compliance professionals can become strategic partners in innovation, not just gatekeepers of risk.

As we enter a new era defined by AI, the compliance profession finds itself at a crossroads. The systems we govern are no longer straightforward, linear, or purely human—they are dynamic, adaptive, and built from the collaboration between people and machines. The article “What Is a Designed Intelligence Environment?” makes clear that our old tools—checklists, policy manuals, and after-the-fact audits—are no longer sufficient for the task ahead. Instead, we must build environments where intelligence itself is orchestrated, monitored, and governed by design.

This transformation is not about abandoning the core values of compliance (integrity, transparency, and accountability); it is about embracing new methods to uphold them in a complex world. We must shift from prediction to observability, from description to formalization, and from enforcement to orchestration. We must learn to translate and mediate between diverse ways of thinking and design infrastructures that enable human and machine intelligence to flourish safely and ethically.

Categories
Compliance Tip of the Day

Compliance Tip of the Day – Rethinking Corporate AI Governance Through Design Intelligence

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, our aim is to provide you with bite-sized, actionable tips to help you stay on top of your compliance game. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today we consider how enterprises must rethink their compliance strategies to survive and thrive in the new world of AI-rich operations.

For more on this topic, check out The Compliance Handbook: A Guide to Operationalizing Your Compliance Program, 6th edition, recently released by LexisNexis. It is available here.

Categories
Blog

The Compliance Guide to Designed Intelligence: Part 1 – Rethinking Governance for the Age of AI

If there is one constant in the world of compliance, it is the reality of change. However, in 2025, change takes on a new vector: artificial intelligence, not just as a tool, but as a force reshaping how organizations think, decide, and act. In their article “What Is a Designed Intelligence Environment?” authors Michael Schrage and David Kiron examined how enterprises must rethink their intelligence and compliance strategies to survive and thrive in the new world of AI-rich operations. I found their insights for compliance professionals both practical and transformative. Today, I begin a short two-part blog post series on Designed Intelligence. Today, in Part 1, we consider what is meant by Designed Intelligence. Tomorrow, we take a deeper dive into what it means for compliance.

From Managing Compliance to Orchestrating Intelligence

Traditional compliance frameworks have always focused on managing risk, enforcing controls, and responding to regulatory shifts. But what happens when decision-making itself is no longer exclusively human? In a designed intelligence environment, humans and machines learn, reason, adapt, and improve together. This is not simply the automation of existing workflows; it’s the emergence of a new kind of enterprise, where “epistemic engineering”—the design of how knowledge is generated, shared, and executed—becomes the bedrock of effective compliance.

The first insight for compliance professionals is that we can no longer assume governance is solely about drawing lines around human behavior. Our job is to architect environments in which both human and machine intelligences operate responsibly and transparently, ensuring that knowledge, decisions, and accountability flow where they are needed most.

Computational Irreducibility: The End of Predictive Planning

Stephen Wolfram’s principle of computational irreducibility may sound academic, but its implications are anything but theoretical for compliance leaders. In a nutshell, this principle holds that in highly complex systems, such as those created when humans and AI interact, the future cannot be predicted without running the system in real-time. In other words, the classic compliance cycle of “predict, plan, execute, and measure” is mathematically impossible in many AI-rich contexts.

For compliance professionals, this means shifting from static policy planning to dynamic, real-time oversight. Consider an example from pharmaceutical R&D. A global company faced paralysis in prioritizing compounds for its oncology pipeline. Instead of relying on fixed rankings or endless meetings, leadership created a computational observatory: multiple agentic models simultaneously analyzed each compound from different perspectives (biological plausibility, market readiness, synthetic feasibility). Cross-model consensus and visualization, rather than managerial heuristics, guided decisions, surfacing previously hidden breakthroughs.
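The cross-model consensus idea can be sketched in a few lines. The compounds, perspective names, and scores below are fabricated; the point is the decision rule itself: a candidate advances only where independent perspectives converge, rather than where a single ranking or a manager's heuristic says it should.

```python
# Fabricated scores: each "agentic model" evaluates every compound
# from its own perspective on a 0-1 scale.
scores = {
    "compound-A": {"biological": 0.9, "market": 0.4, "synthesis": 0.8},
    "compound-B": {"biological": 0.8, "market": 0.7, "synthesis": 0.9},
    "compound-C": {"biological": 0.3, "market": 0.9, "synthesis": 0.6},
}

def consensus_picks(scores, floor=0.6):
    """A compound advances only when every perspective clears the floor."""
    return sorted(
        name for name, views in scores.items()
        if all(v >= floor for v in views.values())
    )

print(consensus_picks(scores))   # only compound-B satisfies every model
```

Note that compound-A, the strongest on any single axis, is not selected: consensus across perspectives, not a single champion metric, drives the decision.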

Compliance Lesson: Build for Observability, Not Just Control

In today’s world, compliance cannot rely solely on auditing after the fact. The future lies in building observability into the core of decision environments: real-time monitoring, feedback loops, and experimental frameworks that enable compliance to identify emergent risks as they arise, not just when it’s too late. This is the heart of “runtime intelligence.”

Semantic Formalization: Making Compliance Computable

Most compliance programs are based on documentation, training, and knowledge management. But semantic formalization, another key concept, goes much further. It requires organizations to define core business concepts (like “customer value,” “operational risk,” or “conflict of interest”) so precisely that both humans and AI agents can “compute” with them. This is not a matter of semantics for its own sake; it is about ensuring that rules, policies, and standards are unambiguously actionable by both people and machines.
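To illustrate what “computable” means here, the sketch below reduces “conflict of interest” to a precise predicate. Real ontologies are far richer, and every name, field, and rule in this example is invented; the point is that once the definition is formal, a human reviewer and an AI agent apply exactly the same test and get exactly the same answer.

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    """A proposed business engagement (all fields illustrative)."""
    employee: str
    counterparty: str
    employee_relatives: tuple    # people related to the employee
    counterparty_owners: tuple   # beneficial owners of the counterparty

def has_conflict_of_interest(e: Engagement) -> bool:
    """True when the employee or a relative holds a stake in the counterparty."""
    insiders = {e.employee, *e.employee_relatives}
    return bool(insiders & set(e.counterparty_owners))

clean = Engagement("A. Rivera", "Acme Ltd", ("J. Rivera",), ("T. Chen",))
flagged = Engagement("A. Rivera", "Acme Ltd", ("J. Rivera",), ("J. Rivera",))
print(has_conflict_of_interest(clean), has_conflict_of_interest(flagged))
```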

For example, a multinational retailer’s use of large language models (LLMs) for customer support faced breakdowns because definitions of customer experience (CX) varied by region and role. By creating a semantic kernel, which is an enterprise ontology that maps complaints, resolution pathways, sentiment clusters, and CX metrics, the company trained its models (and its people) to reason with consistent, computable definitions. This enabled root-cause analysis and adaptive, system-wide learning that wasn’t possible in the old script-driven model.

Compliance Lesson: Define, Don’t Just Describe

Compliance teams must become architects of semantic infrastructure. That means working cross-functionally to formally define compliance concepts, risks, and obligations so that every AI, dashboard, and human team member speaks the same language, in the same way, everywhere. This is how you build “reasoning standardization” and reduce the friction, ambiguity, and risk that come with AI-driven scale.

Rulial Space: Translating Between Multiple Realities

Perhaps the most disruptive insight for compliance comes from the concept of rulial space: the recognition that different “intelligences”—whether human teams, AI systems, or even other departments—operate under distinct rule sets, generating unique realities. Finance assesses risk through Monte Carlo simulations, operations analyze it through failure mode analysis, and AI identifies it through statistical correlations. Traditional efforts to force alignment through training or incentives may be fundamentally flawed. What is needed is translation, not assimilation.

In aerospace manufacturing, for example, friction between design engineers and LLMs led to productivity-killing standoffs. Instead of forcing one side to conform to the other, leadership installed an honest mediator: an explicit layer for mapping, negotiating, and reconciling the assumptions, rules, and heuristics of both human and AI systems. This moved the organization from “compliance by enforcement” to “compliance by comprehension,” a far more powerful and sustainable model for managing both risk and innovation.

Compliance Lesson: Become a Translator, Not Just an Enforcer

The future of compliance is not just about enforcing standards but about building systems and processes that can explicitly map and translate between different rule sets: human, machine, and hybrid. This requires cognitive compilers: protocols and infrastructure for negotiating meaning, resolving conflicts, and arbitrating outputs across diverse intelligences. The result is the intelligent orchestration of smarter, safer, and more adaptive enterprises.

Why Smarter Tools Aren’t Enough: Compliance by Design, Not Just Technology

It’s tempting to think that smarter tools or more sophisticated AI models will solve all compliance challenges. But as the article warns, deploying intelligence as automation—without rethinking the architecture of decision environments—will leave most enterprises stuck with mediocre results. Intelligence, whether human or machine, must be designed into the very infrastructure of the organization: how decisions are made, how meaning is generated, and how value and risk are understood.

For compliance professionals, this means a dramatic expansion of your remit. You must help design the runtime environment for intelligence where learning, adaptation, and ethical execution are embedded, not bolted on. This requires technical fluency, cross-disciplinary collaboration, and a willingness to challenge the old boundaries of policy, training, and audit.

Conclusion: The Compliance Opportunity in Designed Intelligence

The transition to designed intelligence environments represents both a challenge and a once-in-a-generation opportunity for compliance leaders. Those who lean in, who help architect real-time observability, semantic formalization, and rule-based mediation, will become essential strategic partners in their organizations’ transformation. Those who do not will risk being left behind by systems they can neither see, steer, nor secure.

The era of “predict and control” is coming to an end. The age of “orchestrate and observe” is here. As compliance professionals, our calling is clear: to lead the design, governance, and stewardship of intelligence environments that are fit for the complexity and promise of AI. Only then can we ensure that innovation and integrity go hand in hand in the enterprises of tomorrow.

Join us tomorrow for Part 2, where we delve deeper into the compliance considerations.

Categories
FCPA Compliance Report

#Risk New York Speaker Series – The Future of AI Governance in GRC with Matt Kelly

Join Tom Fox and hundreds of other GRC professionals in the city that never sleeps, New York City, on July 9 & 10 for one of the top conferences around, #Risk New York. The current US landscape, shaped by evolving policies, rapid advancements in AI, and shifting global dynamics, demands adaptive strategies and cross-functional collaboration.

At #RISK New York, you will:

  • Master the new regulatory reality by getting ahead of US regulatory shifts and their impact.
  • Conquer AI and tech risk by safeguarding your organization in an AI-driven world and understanding the implications of major tech investments.
  • Navigate financial and crypto volatility by protecting your assets and exploring solutions in a dynamic market.
  • Strengthen your GRC framework by leveraging governance, risk, and compliance for strategic advantage.
  • Protect digital trust by addressing challenges in cybersecurity and data privacy, and combating misinformation.

All while meeting with the country’s top risk management professionals.

In this episode, Tom Fox talks with Matt Kelly about his presentation on the importance of understanding how AI can be productively adopted within enterprises, as well as the ethical challenges it presents, including discrimination and data validity. Matt also discusses the importance of AI governance and offers a preview of his upcoming presentation on this topic. Matt expresses his eagerness to engage with other GRC professionals at the forthcoming conference to exchange ideas and discuss emerging risks in third-party and vendor risk management.

Resources:

#Risk Conference Series

#RiskNYC—Tickets and Information

Matt Kelly on LinkedIn

Categories
Innovation in Compliance

Innovation in Compliance – Navigating AI Governance in 2025 with Christine Uri

Innovation comes in many forms, and compliance professionals need to be ready for it and embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox welcomes Christine Uri to discuss her insights and experiences in AI governance.

Christine shares her extensive background as a legal executive and outlines her current work in advising general counsels on governance and sustainability issues at her consulting firm, CURI Insights. Christine emphasizes the importance of a cross-functional committee to oversee AI governance and highlights AI technology’s rapid evolution and inherent risks. The episode also covers the implications of the EU AI Act, the urgency of building AI literacy, and the challenges of managing AI risks in a dynamic regulatory landscape. As AI continues to evolve at a breakneck pace, Christine offers practical advice on how companies can keep up and ensure robust governance frameworks are in place to mitigate risks.

 

Key highlights:

  • AI Governance and Compliance
  • AI Governance in 2025
  • EU AI Act and Its Implications
  • Building AI Literacy in Compliance
  • Future of AI and Compliance

Resources:

Christine Uri on LinkedIn

Allie K Miller

Luiza Jarovsky

Hard Fork podcast

CURI Insights

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Compliance Tip of the Day

Compliance Tip of the Day: AI Governance Framework

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements.

Whether you’re a seasoned compliance professional or just starting your journey, our aim is to provide you with bite-sized, actionable tips to help you stay on top of your compliance game.

Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law.

Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

In today’s episode, we begin a weeklong look at some of the ways generative AI is changing compliance and risk management. Today, we consider how to approach a comprehensive AI governance framework.

For more information on the Ethico ROI Calculator and a free White Paper on the ROI of Compliance, click here.