
Compliance Tip of the Day – Compliance Responses to Design Intelligence

Welcome to “Compliance Tip of the Day,” the podcast that brings you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, our goal is to provide you with bite-sized, actionable tips to help you stay ahead in your compliance efforts. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we consider how CCOs and compliance programs need to respond to design intelligence.

For more information on this topic, refer to The Compliance Handbook: A Guide to Operationalizing Your Compliance Program, 6th edition, recently released by LexisNexis. It is available here.


The Compliance Guide to Designed Intelligence: Part 2 – Rethinking Governance for the Age of AI

Yesterday, I began a two-part review of the article “What Is a Designed Intelligence Environment?”, in which authors Michael Schrage and David Kiron examine how enterprises must rethink their intelligence and compliance strategies to survive and thrive in the new world of AI-rich operations. I found their insights for compliance professionals both practical and transformative. In Part 1, we considered what is meant by Designed Intelligence. Today, in Part 2, we take a deeper dive into what it means for compliance.

For decades, we have approached compliance through policies, procedures, and periodic reviews, trusting that careful planning and diligent oversight would guide us through the challenges of regulatory change and operational risk. However, the rise of artificial intelligence has forever altered this equation. Now, the decisions that shape our organizations are made not just by people, but by increasingly autonomous machines and systems that learn, adapt, and interact in ways that can outpace human comprehension.

This new reality demands a new approach to compliance, one that goes beyond enforcing existing rules and begins to architect the very environments in which human and machine intelligence operate. The article “What Is a Designed Intelligence Environment?” offers a timely and robust framework for this challenge. Rather than treat AI as just another tool in the compliance toolbox, it urges us to rethink how knowledge, reasoning, and governance are structured across the enterprise. For the compliance professional, this shift is as profound as it is practical: our mission is no longer simply to control risk but to orchestrate intelligence itself.

Five Key Takeaways for the Compliance Professional

1. Observability Over Prediction: Embrace Real-Time Monitoring

Traditional compliance programs often rely on the classic cycle of predict, plan, execute, and measure. However, as the article emphasizes, Stephen Wolfram’s principle of computational irreducibility suggests that in highly complex, AI-rich environments, outcomes cannot be predicted; they must be observed as they occur. This is not a theoretical point; rather, it is a practical call to action for compliance.

In a world where both human and machine agents make critical decisions, compliance leaders need to build systems that provide real-time visibility into these interactions. The case of the pharmaceutical R&D pipeline illustrates this vividly: instead of forcing premature rankings of drug candidates, the company built a computational observatory, allowing emergent patterns to drive decision-making. For compliance, this means investing in tools and processes that enable continuous monitoring, immediate detection of anomalies, and dynamic feedback loops, moving from static after-the-fact audits to active, ongoing oversight.

2. Semantic Formalization: Make Compliance Computable

If your compliance program still relies on lengthy policy manuals and inconsistent training, it’s time to elevate it. The article introduces the concept of semantic formalization, defining key business and compliance concepts in a manner that enables both humans and machines to execute and reason with them. This isn’t just data management; it’s about ensuring every stakeholder and system shares a common, computable language for compliance.

For example, a multinational retailer struggling with customer experience (CX) consistency turned things around by building a semantic kernel, a shared ontology for complaints, resolutions, and metrics. Compliance teams must similarly formalize definitions for key terms, including risk, conflict of interest, and reporting obligations. This creates a foundation where both human and AI agents can interpret and act on compliance requirements, ensuring consistency, auditability, and scalability.

3. Translate Between Multiple Realities

Every department, human expert, and AI system in your organization “computes” reality differently. Financial models assess risk through simulations, operations utilize failure analysis, and AI identifies statistical correlations. The article’s exploration of rulial space, the idea that these are not just different perspectives but fundamentally different computational rule sets, changes the compliance game.

Instead of forcing alignment through top-down mandates, compliance officers must become expert translators and orchestrators of change. The aerospace design review case illustrates the point: rather than punishing disagreement between engineers and AI, leadership created a mediator, an explicit layer mapping and reconciling the underlying rules of each party. Compliance professionals should develop frameworks and protocols to make these internal logics explicit, resolve conflicts, and coordinate decision-making without imposing artificial consensus.

4. Do Not Simply Deploy Smarter Tools: Architect Intelligence Environments

Throwing advanced AI or analytics at compliance problems is not enough. The article argues forcefully that intelligence, whether human or machine, must be designed into the very infrastructure of the enterprise. Most organizations still treat intelligence as an emergent property of tools, rather than an intentional product of environment design.

For compliance, this means working proactively with IT, legal, and operational leaders to design systems where intelligence (learning, reasoning, and adaptation) is orchestrated by default. Real-time observability, semantic formalization, and rule-based mediation must be built into the core of your compliance framework, not added as afterthoughts. This approach enables faster, higher-quality decisions, reduces systemic risk, and enhances organizational agility.

5. From Enforcer to Orchestrator: Redefine the Compliance Role

The most important takeaway is the redefinition of what it means to be a compliance professional in the era of AI. The future of compliance is not just about enforcing standards and conducting audits; it is about orchestrating intelligence across human and machine systems. This means guiding the translation between different rules and perspectives, architecting environments for safe collaboration, and ensuring ethical execution in a world of real-time, adaptive agents.

Compliance officers must expand their skill sets by learning the basics of AI, systems engineering, and data science, developing fluency in semantic modeling, and building cross-functional relationships with technology and business leaders. By leading the design of intelligence environments, compliance professionals can become strategic partners in innovation, not just gatekeepers of risk.

As we enter a new era defined by AI, the compliance profession finds itself at a crossroads. The systems we govern are no longer straightforward, linear, or purely human—they are dynamic, adaptive, and built from the collaboration between people and machines. The article “What Is a Designed Intelligence Environment?” makes clear that our old tools—checklists, policy manuals, and after-the-fact audits—are no longer sufficient for the task ahead. Instead, we must build environments where intelligence itself is orchestrated, monitored, and governed by design.

This transformation is not about abandoning the core values of compliance (integrity, transparency, and accountability); it is about embracing new methods to uphold them in a complex world. We must shift from prediction to observability, from description to formalization, and from enforcement to orchestration. We must learn to translate and mediate between diverse ways of thinking and design infrastructures that enable human and machine intelligence to flourish safely and ethically.


The Compliance Guide to Designed Intelligence: Part 1 – Rethinking Governance for the Age of AI

If there is one constant in the world of compliance, it is the reality of change. However, in 2025, change takes on a new vector: artificial intelligence, not just as a tool, but as a force reshaping how organizations think, decide, and act. In their article “What Is a Designed Intelligence Environment?”, authors Michael Schrage and David Kiron examine how enterprises must rethink their intelligence and compliance strategies to survive and thrive in the new world of AI-rich operations. I found their insights for compliance professionals both practical and transformative. This post begins a short two-part series on Designed Intelligence. Today, in Part 1, we consider what is meant by Designed Intelligence; tomorrow, we take a deeper dive into what it means for compliance.

From Managing Compliance to Orchestrating Intelligence

Traditional compliance frameworks have always focused on managing risk, enforcing controls, and responding to regulatory shifts. But what happens when decision-making itself is no longer exclusively human? In a designed intelligence environment, humans and machines learn, reason, adapt, and improve together. This is not simply the automation of existing workflows; it’s the emergence of a new kind of enterprise, where “epistemic engineering”—the design of how knowledge is generated, shared, and executed—becomes the bedrock of effective compliance.

The first insight for compliance professionals is that we can no longer assume governance is solely about drawing lines around human behavior. Our job is to architect environments in which both human and machine intelligences operate responsibly and transparently, ensuring that knowledge, decisions, and accountability flow where they are needed most.

Computational Irreducibility: The End of Predictive Planning

Stephen Wolfram’s principle of computational irreducibility may sound academic, but its implications are anything but theoretical for compliance leaders. In a nutshell, this principle holds that in highly complex systems, such as those created when humans and AI interact, the future cannot be predicted without running the system in real time. In other words, the classic compliance cycle of “predict, plan, execute, and measure” breaks down in many AI-rich contexts, because reliable prediction is mathematically out of reach.

For compliance professionals, this means shifting from static policy planning to dynamic, real-time oversight. Consider an example from pharmaceutical R&D. A global company faced paralysis in prioritizing compounds for its oncology pipeline. Instead of relying on fixed rankings or endless meetings, leadership created a computational observatory: multiple agentic models simultaneously analyzed each compound from different perspectives (biological plausibility, market readiness, synthetic feasibility). Cross-model consensus and visualization, rather than managerial heuristics, guided decisions, surfacing previously hidden breakthroughs.

Compliance Lesson: Build for Observability, Not Just Control

In today’s world, compliance cannot rely solely on auditing after the fact. The future lies in building observability into the core of decision environments: real-time monitoring, feedback loops, and experimental frameworks that enable compliance to identify emergent risks as they arise, not just when it’s too late. This is the heart of “runtime intelligence.”
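The idea of runtime intelligence can be made concrete with a small sketch, assuming a simplified setting not taken from the article: decisions arrive as a live stream of risk scores, and an observer flags statistical anomalies the moment they appear, rather than in a quarterly audit. The class name, window size, and z-score threshold are all invented for illustration.

```python
from collections import deque
from statistics import mean, stdev

class DecisionMonitor:
    """Hypothetical real-time observer: flags decisions whose risk score
    deviates sharply from recent history, instead of waiting for an audit."""

    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling window of recent scores
        self.z_threshold = z_threshold

    def observe(self, risk_score: float) -> bool:
        """Return True if this decision looks anomalous versus recent history."""
        anomalous = False
        if len(self.history) >= 10:  # need some history before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(risk_score - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(risk_score)
        return anomalous

monitor = DecisionMonitor()
scores = [0.10, 0.12] * 15 + [0.95]   # stable behavior, then a sudden outlier
flags = [monitor.observe(s) for s in scores]
print(flags[-1])  # True: the outlier is flagged the moment it appears
```

The design point is the feedback loop itself: anomalies surface during operation, which is what distinguishes observability from after-the-fact review.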

Semantic Formalization: Making Compliance Computable

Most compliance programs are based on documentation, training, and knowledge management. But semantic formalization, another key concept, goes much further. It requires organizations to define core business concepts (like “customer value,” “operational risk,” or “conflict of interest”) so precisely that both humans and AI agents can “compute” with them. This is not a matter of semantics for its own sake; it is about ensuring that rules, policies, and standards are unambiguously actionable by both people and machines.

For example, a multinational retailer’s use of large language models (LLMs) for customer support faced breakdowns because definitions of customer experience (CX) varied by region and role. By creating a semantic kernel, which is an enterprise ontology that maps complaints, resolution pathways, sentiment clusters, and CX metrics, the company trained its models (and its people) to reason with consistent, computable definitions. This enabled root-cause analysis and adaptive, system-wide learning that wasn’t possible in the old script-driven model.

Compliance Lesson: Define, Don’t Just Describe

Compliance teams must become architects of semantic infrastructure. That means working cross-functionally to formally define compliance concepts, risks, and obligations so that every AI, dashboard, and human team member speaks the same language, in the same way, everywhere. This is how you build “reasoning standardization” and reduce the friction, ambiguity, and risk that come with AI-driven scale.
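What it means for a compliance concept to be “computable” can be illustrated with a minimal sketch, not drawn from the article: a single, shared definition of “conflict of interest” that a person, a dashboard, and an AI agent would all evaluate identically. Every name, field, and threshold here is an invented example, not a real policy.

```python
from dataclasses import dataclass
from enum import Enum

class Relationship(Enum):
    NONE = "none"
    FAMILY = "family"
    FINANCIAL = "financial"

@dataclass(frozen=True)
class Disclosure:
    """A hypothetical, structured disclosure record."""
    employee_id: str
    counterparty: str
    relationship: Relationship
    financial_stake_usd: float = 0.0

def is_conflict_of_interest(d: Disclosure, stake_threshold_usd: float = 5000.0) -> bool:
    """One formal definition, shared by humans and machines alike:
    any family tie, or a financial stake at or above the threshold."""
    if d.relationship is Relationship.FAMILY:
        return True
    if d.relationship is Relationship.FINANCIAL and d.financial_stake_usd >= stake_threshold_usd:
        return True
    return False

# The same definition yields the same answer wherever it is invoked.
print(is_conflict_of_interest(Disclosure("E1", "VendorCo", Relationship.FINANCIAL, 12000.0)))  # True
```

The contrast with a prose policy manual is the point: once the definition is formal, it can be versioned, audited, and executed consistently at scale.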

Rulial Space: Translating Between Multiple Realities

Perhaps the most disruptive insight for compliance comes from the concept of rulial space: the recognition that different “intelligences”—whether human teams, AI systems, or even other departments—operate under distinct rule sets, generating unique realities. Finance assesses risk through Monte Carlo simulations, operations analyzes it through failure mode analysis, and AI identifies it through statistical correlations. Traditional efforts to force alignment through training or incentives may be fundamentally flawed. What is needed is translation, not assimilation.

In aerospace manufacturing, for example, friction between design engineers and LLMs led to productivity-killing standoffs. Instead of forcing one side to conform to the other, leadership installed an honest mediator: an explicit layer for mapping, negotiating, and reconciling the assumptions, rules, and heuristics of both human and AI systems. This moved the organization from “compliance by enforcement” to “compliance by comprehension,” a far more powerful and sustainable model for managing both risk and innovation.

Compliance Lesson: Become a Translator, Not Just an Enforcer

The future of compliance is not just about enforcing standards but about building systems and processes that can explicitly map and translate between different rule sets: human, machine, and hybrid. This requires cognitive compilers: protocols and infrastructure for negotiating meaning, resolving conflicts, and arbitrating outputs across diverse intelligences. The result is intelligent orchestration: more innovative, safer, and more adaptive enterprises.
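The translation-not-assimilation idea can be sketched in miniature, under assumptions entirely of my own invention: two functions stand in for different internal rule sets (a finance view computing expected loss, an operations view computing a failure score), and a mediation layer maps both onto a shared scale and reports their divergence instead of forcing one to win.

```python
def finance_view(loss_probability: float, exposure_usd: float) -> float:
    """Finance's rule set: risk as expected monetary loss."""
    return loss_probability * exposure_usd

def operations_view(failure_modes: int, worst_severity: int) -> float:
    """Operations' rule set: risk as a severity-weighted failure score."""
    return failure_modes * worst_severity

def mediate(expected_loss: float, failure_score: float,
            loss_scale: float = 1_000_000, failure_scale: float = 50) -> dict:
    """Hypothetical mediation layer: translate both views onto a common
    0-1 scale and surface disagreement rather than erase it."""
    f = min(expected_loss / loss_scale, 1.0)
    o = min(failure_score / failure_scale, 1.0)
    return {"finance": f, "operations": o, "divergence": round(abs(f - o), 4)}

result = mediate(finance_view(0.02, 5_000_000), operations_view(4, 9))
print(result)  # finance sees modest risk, operations sees high risk: the gap is the signal
```

The governance value is in the `divergence` field: an explicit, inspectable record of where two rule sets disagree, which is what a mediation layer exists to surface.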

Why Smarter Tools Aren’t Enough: Compliance by Design, Not Just Technology

It’s tempting to think that smarter tools or more sophisticated AI models will solve all compliance challenges. But as the article warns, deploying intelligence as automation—without rethinking the architecture of decision environments—will leave most enterprises stuck with mediocre results. Intelligence, whether human or machine, must be designed into the very infrastructure of the organization: how decisions are made, how meaning is generated, and how value and risk are understood.

For compliance professionals, this means a dramatic expansion of your remit. You must help design the runtime environment for intelligence where learning, adaptation, and ethical execution are embedded, not bolted on. This requires technical fluency, cross-disciplinary collaboration, and a willingness to challenge the old boundaries of policy, training, and audit.

Conclusion: The Compliance Opportunity in Designed Intelligence

The transition to designed intelligence environments represents both a challenge and a once-in-a-generation opportunity for compliance leaders. Those who lean in, who help architect real-time observability, semantic formalization, and rule-based mediation, will become essential strategic partners in their organizations’ transformation. Those who do not will risk being left behind by systems they can neither see, steer, nor secure.

The era of “predict and control” is coming to an end. The age of “orchestrate and observe” is here. As compliance professionals, our calling is clear: to lead the design, governance, and stewardship of intelligence environments that are fit for the complexity and promise of AI. Only then can we ensure that innovation and integrity go hand in hand in the enterprises of tomorrow.

Join us tomorrow for Part 2, where we delve deeper into the compliance considerations.