Blog

The Compliance Guide to Designed Intelligence: Part 1 – Rethinking Governance for the Age of AI

If there is one constant in the world of compliance, it is change. In 2025, however, change takes on a new vector: artificial intelligence, not just as a tool, but as a force reshaping how organizations think, decide, and act. In their article “What Is a Designed Intelligence Environment?” authors Michael Schrage and David Kiron examined how enterprises must rethink their intelligence and compliance strategies to survive and thrive in the new world of AI-rich operations. I found their insights for compliance professionals both practical and transformative, so today I begin a short two-part blog series on Designed Intelligence. In Part 1, we consider what is meant by Designed Intelligence; tomorrow, in Part 2, we take a deeper dive into what it means for compliance.

From Managing Compliance to Orchestrating Intelligence

Traditional compliance frameworks have always focused on managing risk, enforcing controls, and responding to regulatory shifts. But what happens when decision-making itself is no longer exclusively human? In a designed intelligence environment, humans and machines learn, reason, adapt, and improve together. This is not simply the automation of existing workflows; it’s the emergence of a new kind of enterprise, where “epistemic engineering”—the design of how knowledge is generated, shared, and executed—becomes the bedrock of effective compliance.

The first insight for compliance professionals is that we can no longer assume governance is solely about drawing lines around human behavior. Our job is to architect environments in which both human and machine intelligences operate responsibly and transparently, ensuring that knowledge, decisions, and accountability flow where they are needed most.

Computational Irreducibility: The End of Predictive Planning

Stephen Wolfram’s principle of computational irreducibility may sound academic, but its implications are anything but theoretical for compliance leaders. In a nutshell, the principle holds that in sufficiently complex systems, such as those created when humans and AI interact, the future cannot be predicted any faster than by running the system itself. In other words, the classic compliance cycle of “predict, plan, execute, and measure” breaks down in many AI-rich contexts: there is no shortcut that forecasts the outcome without, in effect, living through it.
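
The principle is easy to demonstrate. Wolfram’s canonical example is the Rule 30 cellular automaton: each cell follows a trivial update rule, yet no known formula predicts the pattern after t steps faster than computing all t steps. A minimal sketch in Python:

```python
# Rule 30: a cell's next state is left XOR (center OR right). Despite the
# trivial rule, there is no known shortcut to the state after t steps;
# you have to run every step. That is computational irreducibility.
def rule30_step(cells: list[int]) -> list[int]:
    padded = [0] + cells + [0]  # treat cells beyond the edge as 0
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

row = [0] * 15 + [1] + [0] * 15  # a single seed cell
for _ in range(12):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

The compliance analogy is direct: if even a one-line rule resists prediction, an environment of interacting humans and AI agents certainly will.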

For compliance professionals, this means shifting from static policy planning to dynamic, real-time oversight. Consider an example from pharmaceutical R&D. A global company faced paralysis in prioritizing compounds for its oncology pipeline. Instead of relying on fixed rankings or endless meetings, leadership created a computational observatory: multiple agentic models simultaneously analyzed each compound from different perspectives, such as biological plausibility, market readiness, and synthetic feasibility. Cross-model consensus and visualization, rather than managerial heuristics, guided decisions and surfaced previously hidden breakthroughs.
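
As an illustration only (the perspectives, scores, and scale below are invented; the article does not publish the company’s actual mechanics), a cross-model consensus step might be as simple as this:

```python
from statistics import mean, stdev

# Hypothetical perspectives; in the observatory, each would be a separate agentic model.
PERSPECTIVES = ["biological_plausibility", "market_readiness", "synthetic_feasibility"]

def consensus_report(scores: dict[str, dict[str, float]]) -> list[tuple[str, float, float]]:
    """Rank compounds by mean cross-model score and flag disagreement.

    scores maps compound -> {perspective: score in [0, 1]}.
    Returns (compound, consensus, disagreement) rows, best consensus first.
    """
    rows = []
    for compound, by_perspective in scores.items():
        values = [by_perspective[p] for p in PERSPECTIVES]
        rows.append((compound, mean(values), stdev(values)))
    return sorted(rows, key=lambda r: r[1], reverse=True)

# Invented scores, as if produced by three independent model runs.
scores = {
    "CMP-001": {"biological_plausibility": 0.91, "market_readiness": 0.40, "synthetic_feasibility": 0.85},
    "CMP-002": {"biological_plausibility": 0.72, "market_readiness": 0.78, "synthetic_feasibility": 0.70},
}
for compound, consensus, disagreement in consensus_report(scores):
    print(f"{compound}: consensus={consensus:.2f}, disagreement={disagreement:.2f}")
```

High disagreement is itself a signal: a compound the models cannot agree on is exactly the one that deserves human review rather than automatic promotion or rejection.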

Compliance Lesson: Build for Observability, Not Just Control

In today’s world, compliance cannot rely solely on auditing after the fact. The future lies in building observability into the core of decision environments: real-time monitoring, feedback loops, and experimental frameworks that enable compliance to identify emergent risks as they arise, not just when it’s too late. This is the heart of “runtime intelligence.”
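
What might runtime intelligence look like in code? A minimal sketch, with an invented metric (a rolling denial rate) and an invented threshold; the point is the shape, observation and feedback at decision time rather than audit after the fact:

```python
from collections import deque

class DecisionMonitor:
    """Illustrative rolling-window observability for automated decisions."""

    def __init__(self, window: int = 100, max_denial_rate: float = 0.25):
        self.events = deque(maxlen=window)  # feedback loop: only recent behavior counts
        self.max_denial_rate = max_denial_rate

    def observe(self, decision: str) -> None:
        self.events.append(decision)
        if len(self.events) < self.events.maxlen:
            return  # not enough signal yet
        rate = self.events.count("deny") / len(self.events)
        if rate > self.max_denial_rate:
            self.alert(rate)

    def alert(self, rate: float) -> None:
        # In a real system this would page a human and snapshot the model inputs.
        print(f"ALERT: rolling denial rate {rate:.0%} exceeds {self.max_denial_rate:.0%}")
```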

Semantic Formalization: Making Compliance Computable

Most compliance programs are based on documentation, training, and knowledge management. But semantic formalization, another key concept, goes much further. It requires organizations to define core business concepts (like “customer value,” “operational risk,” or “conflict of interest”) so precisely that both humans and AI agents can “compute” with them. This is not a matter of semantics for its own sake; it is about ensuring that rules, policies, and standards are unambiguously actionable by both people and machines.

For example, a multinational retailer’s use of large language models (LLMs) for customer support broke down because definitions of customer experience (CX) varied by region and role. By creating a semantic kernel, an enterprise ontology that maps complaints, resolution pathways, sentiment clusters, and CX metrics, the company trained its models (and its people) to reason with consistent, computable definitions. This enabled root-cause analysis and adaptive, system-wide learning that was not possible in the old script-driven model.
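
In code, a semantic kernel can be surprisingly mundane. The toy ontology below (categories, pathways, and weights are invented for illustration) shows the essential move: “customer experience” stops being a vague phrase and becomes a definition that a dashboard, an LLM prompt, and a human reviewer all consume in exactly the same form:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ComplaintConcept:
    """One node of a toy enterprise ontology: a computable definition."""
    name: str
    sentiment_cluster: str   # which sentiment signals map to this concept
    resolution_pathway: str  # the approved pathway, identical in every region
    cx_weight: float         # contribution to the CX metric, 0..1

# Invented kernel entries; a real kernel would be far larger and versioned.
KERNEL = {
    "late_delivery": ComplaintConcept("late_delivery", "frustration", "expedite_or_refund", 0.8),
    "billing_error": ComplaintConcept("billing_error", "distrust", "correct_and_credit", 0.9),
}

def cx_impact(complaints: list[str]) -> float:
    """One computable CX definition shared by models, dashboards, and people."""
    return sum(KERNEL[c].cx_weight for c in complaints if c in KERNEL)

print(cx_impact(["late_delivery", "billing_error"]))  # 1.7
```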

Compliance Lesson: Define, Don’t Just Describe

Compliance teams must become architects of semantic infrastructure. That means working cross-functionally to formally define compliance concepts, risks, and obligations so that every AI, dashboard, and human team member speaks the same language, in the same way, everywhere. This is how you build “reasoning standardization” and reduce the friction, ambiguity, and risk that come with AI-driven scale.

Rulial Space: Translating Between Multiple Realities

Perhaps the most disruptive insight for compliance comes from the concept of rulial space: the recognition that different “intelligences” (whether human teams, AI systems, or even other departments) operate under distinct rule sets, generating unique realities. Finance assesses risk through Monte Carlo simulations, operations analyzes it through failure mode analysis, and AI identifies it through statistical correlations. Traditional efforts to force alignment through training or incentives may be fundamentally flawed. What is needed is translation, not assimilation.

In aerospace manufacturing, for example, friction between design engineers and LLMs led to productivity-killing standoffs. Instead of forcing one side to conform to the other, leadership installed an honest mediator: an explicit layer for mapping, negotiating, and reconciling the assumptions, rules, and heuristics of both human and AI systems. This moved the organization from “compliance by enforcement” to “compliance by comprehension,” a far more powerful and sustainable model for managing both risk and innovation.

Compliance Lesson: Become a Translator, Not Just an Enforcer

The future of compliance is not just about enforcing standards but about building systems and processes that can explicitly map and translate between different rule sets: human, machine, and hybrid. This requires cognitive compilers: protocols and infrastructure for negotiating meaning, resolving conflicts, and arbitrating outputs across diverse intelligences. The result is the intelligent orchestration of a safer, more innovative, and more adaptive enterprise.
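
At its smallest, a cognitive compiler is a translation-and-arbitration layer. In the sketch below (both vocabularies and the escalation policy are invented for illustration), verdicts from a human rule set and a model rule set are mapped into a shared vocabulary; agreement passes through, and conflict is escalated with both rationales preserved:

```python
# Each intelligence reports in its own vocabulary; the compiler translates
# both into a shared one before arbitrating (translation, not assimilation).
HUMAN_TO_SHARED = {"sign-off": "approve", "needs rework": "reject"}
MODEL_TO_SHARED = {"low_risk": "approve", "high_risk": "reject"}

def arbitrate(human_verdict: str, model_verdict: str) -> dict:
    human = HUMAN_TO_SHARED[human_verdict]
    model = MODEL_TO_SHARED[model_verdict]
    if human == model:
        return {"outcome": human, "escalate": False}
    # Conflict: preserve both positions and route to a mediator, rather
    # than silently letting either rule set win.
    return {"outcome": "escalate", "escalate": True,
            "positions": {"human": human_verdict, "model": model_verdict}}

print(arbitrate("sign-off", "high_risk"))
```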

Why Smarter Tools Aren’t Enough: Compliance by Design, Not Just Technology

It’s tempting to think that smarter tools or more sophisticated AI models will solve all compliance challenges. But as the article warns, deploying intelligence as automation, without rethinking the architecture of decision environments, will leave most enterprises stuck with mediocre results. Intelligence, whether human or machine, must be designed into the very infrastructure of the organization: how decisions are made, how meaning is generated, and how value and risk are understood.

For compliance professionals, this means a dramatic expansion of your remit. You must help design the runtime environment for intelligence where learning, adaptation, and ethical execution are embedded, not bolted on. This requires technical fluency, cross-disciplinary collaboration, and a willingness to challenge the old boundaries of policy, training, and audit.

Conclusion: The Compliance Opportunity in Designed Intelligence

The transition to designed intelligence environments represents both a challenge and a once-in-a-generation opportunity for compliance leaders. Those who lean in, who help architect real-time observability, semantic formalization, and rulial-space mediation, will become essential strategic partners in their organizations’ transformation. Those who don’t will risk being left behind by systems they can neither see, steer, nor secure.

The era of “predict and control” is coming to an end. The age of “orchestrate and observe” is here. As compliance professionals, our calling is clear: to lead the design, governance, and stewardship of intelligence environments that are fit for the complexity and promise of AI. Only then can we ensure that innovation and integrity go hand in hand in the enterprises of tomorrow.

Join us tomorrow for Part 2, where we delve deeper into the compliance considerations.

FCPA Compliance Report

#Risk New York Speaker Series – The Future of AI Governance in GRC with Matt Kelly

Join Tom Fox and hundreds of other GRC professionals in the city that never sleeps, New York City, on July 9 & 10 for one of the top conferences around, #Risk New York. The current US landscape, shaped by evolving policies, rapid advancements in AI, and shifting global dynamics, demands adaptive strategies and cross-functional collaboration.

At #RISK New York, you will master the new regulatory reality by getting ahead of US regulatory shifts and their impact; conquer AI and tech risk by safeguarding your organization in an AI-driven world and understanding the implications of major tech investments; navigate financial and crypto volatility by protecting your assets and exploring solutions in a dynamic market; strengthen your GRC framework by leveraging governance, risk, and compliance for strategic advantage; and protect digital trust by addressing challenges in cybersecurity and data privacy and combating misinformation, all while meeting the country’s top risk management professionals.

In this episode, Tom Fox talks with Matt Kelly about how AI can be productively adopted within enterprises and the ethical challenges it presents, including discrimination and data validity. Matt discusses the importance of AI governance, offers a preview of his upcoming presentation on the topic, and expresses his eagerness to engage with other GRC professionals at the conference to exchange ideas and discuss emerging risks in third-party and vendor risk management.

Resources:

#Risk Conference Series

#RiskNYC—Tickets and Information

Matt Kelly on LinkedIn

Innovation in Compliance

Innovation in Compliance – Navigating AI Governance in 2025 with Christine Uri

Innovation comes in many forms, and compliance professionals need to be ready for it and embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox welcomes Christine Uri to discuss her insights and experiences in AI governance.

Christine shares her extensive background as a legal executive and outlines her current work in advising general counsels on governance and sustainability issues at her consulting firm, CURI Insights. Christine emphasizes the importance of a cross-functional committee to oversee AI governance and highlights AI technology’s rapid evolution and inherent risks. The episode also covers the implications of the EU AI Act, the urgency of building AI literacy, and the challenges of managing AI risks in a dynamic regulatory landscape. As AI continues to evolve at a breakneck pace, Christine offers practical advice on how companies can keep up and ensure robust governance frameworks are in place to mitigate risks.

 

Key highlights:

  • AI Governance and Compliance
  • AI Governance in 2025
  • EU AI Act and Its Implications
  • Building AI Literacy in Compliance
  • Future of AI and Compliance

Resources:

Christine Uri on LinkedIn

Allie K. Miller

Luiza Jarovsky

Hard Fork podcast

CURI Insights

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Compliance Tip of the Day

Compliance Tip of the Day: AI Governance Framework

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements.

Whether you’re a seasoned compliance professional or just starting your journey, our aim is to provide you with bite-sized, actionable tips to help you stay on top of your compliance game.

Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law.

Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

In today’s episode, we begin a weeklong look at some of the ways generative AI is changing compliance and risk management. Today, we consider how to approach a comprehensive AI governance framework.

For more information on the Ethico ROI Calculator and a free White Paper on the ROI of Compliance, click here.

Blog

AI in Compliance Week: Part 2 – A Comprehensive Governance Approach

We continue our weeklong exploration of generative AI in compliance by examining AI governance. In the rapidly evolving landscape of AI, the importance of robust governance frameworks cannot be overstated. As AI systems become increasingly integrated into compliance, the need for comprehensive governance structures to ensure compliance, ethical alignment, and trustworthiness has become paramount. Today, we will consider the critical areas of compliance governance and ethics governance and present a holistic approach to mitigating the associated risks.

MIA AI Governance: The Problems

Missing compliance governance can have far-reaching consequences, undermining the integrity of an entire AI-driven initiative. Businesses must ensure alignment with enterprise-wide governance, risk, and compliance (GRC) frameworks. This includes aligning with model risk management practices and embedding robust compliance checks throughout the AI model lifecycle. By promoting awareness of how the AI model works at your organization, you can minimize information asymmetries between development teams, users, and target audiences, fostering a culture of transparency and accountability.
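
Embedding checks “throughout the AI model lifecycle” can be taken quite literally. A minimal sketch, with invented stage names and controls, of gating each lifecycle stage on named compliance controls:

```python
# Illustrative lifecycle gates: a model advances only when every control
# registered for its current stage has recorded evidence.
LIFECYCLE_CHECKS = {
    "development": ["data_provenance_documented", "bias_test_passed"],
    "deployment": ["model_card_published", "grc_signoff_recorded"],
    "monitoring": ["drift_alerting_enabled"],
}

def gate(stage: str, evidence: set[str]) -> bool:
    """Return True only if every required control for the stage is evidenced."""
    missing = [c for c in LIFECYCLE_CHECKS[stage] if c not in evidence]
    if missing:
        print(f"Stage '{stage}' blocked; missing controls: {missing}")
        return False
    return True

gate("deployment", {"model_card_published"})  # blocked: no GRC sign-off recorded
```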

The lack of ethical governance can lead to misalignment with an organization’s values, brand identity, or social responsibility. To address this, companies should develop comprehensive AI ethics governance methods, including defining ethical principles, establishing an AI ethics review board, and creating a compliance program that addresses ethical concerns. Adopting frameworks like the IEEE’s Ethically Aligned Design can help integrate ethical considerations into the design process, while AI governance benchmarks that go beyond traditional measurements can encompass social and moral accountability.

The lack of trustworthy or responsible AI governance can also result in unintended and significant damage. To address this, compliance professionals should help develop accountable and trustworthy AI governance methods that augment enterprise-wide GRC structures. This can include establishing a committee, such as an AI Advancement Council or similar body, to oversee mission priorities and strategic AI advancement planning; collaborating with service line leaders and program offices to align with ethical AI guidelines and practices; and developing compliance programs to guide conformance with ethical AI principles and relevant legislation. Finally, implementing independent verification and validation (IV&V) processes for AI can help identify and manage unintended outcomes.

The Solution

By addressing the critical areas of compliance governance and ethics governance through a more holistic approach, businesses can create a comprehensive framework that mitigates the risks associated with the absence of these crucial elements. This approach ensures that AI systems comply with relevant regulations and standards and align with your company’s values, ethical principles, and the pursuit of trustworthy and responsible AI. As the AI landscape evolves, this comprehensive governance framework will be essential in navigating the complexities and safeguarding the integrity of AI-driven initiatives.

Here are some key steps compliance professionals and businesses can think through to facilitate AI governance in their companies:

  1. Establish a Centralized AI Governance Body:
    • Create an AI Governance Council that oversees your organization’s AI strategy, policies, and practices.
    • Ensure the council includes representatives from various stakeholder groups, such as legal, compliance, ethics, risk management, IT, and other subject matter experts.
    • Empower the council to develop and enforce AI governance frameworks, guidelines, and processes.
  2. Conduct AI Risk Assessments:
    • Identify and assess the risks associated with the organization’s AI initiatives, including compliance, ethical, and other AI-related risks.
    • Prioritize the risks based on their potential impact and likelihood of occurrence (a minimal scoring sketch follows this list).
    • Develop mitigation strategies and action plans to address the identified risks.
  3. Align AI Governance with Enterprise-wide Frameworks:
    • Ensure the AI governance framework is integrated with the organization’s existing GRC and Risk Management processes.
    • Establish clear lines of accountability and responsibility for AI-related activities across the organization.
    • Integrate AI governance into the organization’s broader risk management and compliance programs.
  4. Implement Compliance Governance Processes:
    • Develop and enforce AI-specific compliance controls, policies, and procedures.
    • Embed compliance checks throughout the AI model lifecycle, from development to deployment and monitoring.
    • Provide training and awareness programs to educate employees on AI compliance requirements.
  5. Establish Ethics Governance Mechanisms:
    • Define the organization’s AI ethics principles, values, and code of conduct.
    • Create an AI Ethics Review Board to assess and monitor the ethical implications of AI initiatives.
    • Implement processes for ethical AI design, such as the IEEE’s Ethically Aligned Design methodology.
    • Incorporate ethical AI benchmarks and accountability measures into the organization’s performance management and reporting processes.
  6. Implement Trustworthy and Responsible AI Governance:
    • Develop responsible and trustworthy AI governance practices that align with the organization’s enterprise-wide GRC frameworks.
    • Establish an AI Advancement Council to oversee strategic AI planning and alignment with ethical guidelines.
    • Implement independent verification and validation (IV&V) processes to identify and manage unintended AI outcomes.
    • Provide comprehensive training and awareness programs on AI risk management for employees, contractors, and other stakeholders.
  7. Foster a Culture of AI Governance:
    • Promote a culture of accountability, transparency, and continuous improvement around AI governance.
    • Encourage cross-functional collaboration and communication to address AI-related challenges and opportunities.
    • Review and update the AI governance framework regularly to adapt to evolving regulatory requirements, technological advancements, and organizational needs.
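
For step 2 above, prioritizing risks “based on their potential impact and likelihood of occurrence” is commonly operationalized as a simple risk matrix. A minimal sketch, with invented risks and an assumed 1-to-5 scale on each axis:

```python
# Risk score = impact x likelihood, each on an assumed 1-5 scale.
risks = [
    {"risk": "biased credit decisions", "impact": 5, "likelihood": 3},
    {"risk": "hallucinated policy citations", "impact": 3, "likelihood": 4},
    {"risk": "training-data privacy breach", "impact": 5, "likelihood": 2},
]
for r in sorted(risks, key=lambda r: r["impact"] * r["likelihood"], reverse=True):
    print(f"{r['impact'] * r['likelihood']:>2}  {r['risk']}")
```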

By following these steps, organizations can implement a comprehensive governance framework that addresses compliance, ethics, and trustworthy AI. This framework enables organizations to harness the power of AI while mitigating the associated risks.

AI Governance Resources

There are several notable resources the compliance professional can tap into around AI governance practices. The Partnership on AI is a multi-stakeholder coalition of leading technology companies, academic institutions, and nonprofit organizations. It has been at the forefront of developing best practices and guidelines for the responsible development and deployment of AI systems, publishing influential reports and frameworks, such as the Tenets of Responsible AI and Model Cards for Model Reporting, which have been widely adopted across the industry.

The Algorithmic Justice League (AJL) is a nonprofit organization dedicated to raising awareness about AI’s social implications and advocating for algorithmic justice. It has developed initiatives such as the Algorithmic Bias Bounty Program, encouraging researchers and developers to identify and report biases in AI systems. The AJL has highlighted the importance of addressing algorithmic bias and discrimination in AI.

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems is a multidisciplinary effort to develop standards, guidelines, and best practices for the ethical design, development, and deployment of autonomous and intelligent systems. It has produced key documents and reports, such as the Ethically Aligned Design framework, which guides the incorporation of ethical considerations into AI development.

The AI Ethics & Governance Roundtable is an initiative led by the University of Cambridge’s Leverhulme Centre for the Future of Intelligence. It brings together experts from industry, academia, and policymaking to discuss emerging issues, share best practices, and develop collaborative solutions for AI governance. The roundtable’s insights and recommendations have influenced AI governance frameworks and policies at both the organizational and regulatory levels.

These examples demonstrate the power of industry collaboration in advancing AI governance practices. By pooling resources, expertise, and diverse perspectives, these initiatives have developed comprehensive frameworks, guidelines, and standards now being adopted across the AI ecosystem. Compliance professionals should avail themselves of these resources to prepare their companies to take the next brave steps at the intersection of compliance, governance, and AI.