Categories
Blog

AI and Work Intensification – The Compliance Response

There is a comforting myth circulating in corporate hallways and boardrooms: if we deploy AI across governance, risk, and compliance, the work will shrink. Investigations will move faster. Monitoring will get smarter. Policies will draft themselves. Third-party diligence will become push-button. The compliance function will finally “do more with less.” That myth was challenged in a recent Harvard Business Review article, “AI Doesn’t Reduce Work—It Intensifies It,” by Aruna Ranganathan and Xingqi Maggie Ye.

The authors argue that what actually happens is work intensification. AI expands throughput, increases expectations, and generates more outputs that still require human judgment, verification, and accountability. Instead of fewer tasks, you get more tasks. Instead of simpler work, you get faster cycles, more iterations, and new forms of quality risk. For the Chief Compliance Officer (CCO) leading AI governance, this is not a side effect. It is a core operating model issue.

If compliance owns AI governance across the enterprise, compliance must also own the discipline of how humans and AI work together. I call that discipline an AI practice standard: management guidance that sets expectations for pace, quality, verification, escalation, and sustainable workload.

Today, we consider this issue as a compliance operating model challenge across all GRC workflows: policy management, investigations, hotline intake, monitoring and surveillance, third-party due diligence, regulatory change management, audit planning, training, and reporting. The tone is cautionary because the risk is real: a compliance function that mistakes AI output volume for compliance effectiveness.

The Compliance Operating Model Problem: More Output, More Review, More Risk

Compliance work is not manufacturing. It is judgment work. It requires discretion, context, and defensible decisions. AI can accelerate inputs and draft outputs, but it does not accept responsibility. The CCO does. The business does. The board does. When AI enters GRC workflows, it tends to create four pressure points:

1. Compression of timelines. If a draft can be produced in five minutes, someone will ask why it cannot be finalized in five more.

2. Explosion of options. AI generates multiple versions, scenarios, and recommendations, which expands decision load and review cycles.

3. Higher volume of “signals.” AI-enabled monitoring produces more alerts, more pattern matches, and more anomalies. Much will be noise. All require triage.

4. Illusion of completion. Teams begin to treat a plausible AI answer as a finished work product. That is how quality defects are born.

The result is a compliance function that looks “faster” while becoming more fragile. Burnout rises. Rework increases. Errors creep into documentation. Controls become less reliable because the humans operating them are overwhelmed by the sheer volume AI makes possible.

All this means the question for the CCO is not, “How do we roll out AI?” The question is, “How do we govern the human work that AI intensifies?”

Five KPIs for Work Intensification Risk

Next, we consider five KPIs specifically designed to measure work intensification. These are board-credible, compliance-owned, and operationally measurable.

1. After-Hours Compliance Work Index

Percentage of compliance work activity occurring outside standard business hours (for example, 6 p.m. to 7 a.m.), measured across key systems (case management, GRC platform activity logs, email metadata, collaboration tool usage). This matters because AI compresses timelines and pushes work into nights and weekends. This index serves as an early warning for burnout and quality failures.
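The index itself is simple arithmetic over timestamped activity. A minimal sketch, assuming an exported activity log with illustrative field names (no vendor's actual schema), might look like this:

```python
from datetime import datetime

# Hypothetical activity records exported from a case management or GRC
# platform; the field names here are illustrative assumptions.
activity_log = [
    {"team": "Investigations", "timestamp": "2024-05-06T19:42:00"},
    {"team": "Investigations", "timestamp": "2024-05-07T10:15:00"},
    {"team": "Third-Party DD", "timestamp": "2024-05-07T22:05:00"},
    {"team": "Third-Party DD", "timestamp": "2024-05-08T09:30:00"},
]

def is_after_hours(ts: str, start_hour: int = 7, end_hour: int = 18) -> bool:
    """True if the activity falls outside standard hours (here 7 a.m.-6 p.m.)."""
    hour = datetime.fromisoformat(ts).hour
    return hour >= end_hour or hour < start_hour

def after_hours_index(records) -> float:
    """Percentage of activity occurring outside standard business hours."""
    flagged = sum(is_after_hours(r["timestamp"]) for r in records)
    return 100.0 * flagged / len(records)

print(f"After-Hours Work Index: {after_hours_index(activity_log):.1f}%")
```

In practice, the same calculation would be aggregated by team and workflow over a reporting period, consistent with the "trends, not individuals" guidance below.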

2. AI Rework Rate

Percentage of AI-assisted work products requiring material revision after human review (policies, investigation summaries, risk narratives, diligence reports). This matters because if AI increases speed but doubles rework, you are not gaining productivity. You are shifting effort downstream.

3. Cycle Time Compression vs. Quality Defect Ratio

Track cycle time reductions alongside quality defects (corrections, escalations, documentation gaps, audit findings). You can express this KPI as Cycle Time Improvement / Defect Increase.

This matters because faster is not better if defects rise. This ratio keeps leadership honest.
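To make the ratio concrete, here is a minimal sketch of the calculation. The input figures and the treatment of a flat-or-improving defect rate are illustrative assumptions, not a standard formula:

```python
def compression_quality_ratio(baseline_cycle_days, current_cycle_days,
                              baseline_defect_rate, current_defect_rate):
    """Cycle Time Improvement / Defect Increase, both as percentage changes.

    A ratio above 1.0 suggests speed gains are outpacing quality erosion;
    a ratio at or below 1.0 is a warning sign.
    """
    improvement = 100.0 * (baseline_cycle_days - current_cycle_days) / baseline_cycle_days
    defect_increase = 100.0 * (current_defect_rate - baseline_defect_rate) / baseline_defect_rate
    if defect_increase <= 0:
        return float("inf")  # quality held steady or improved
    return improvement / defect_increase

# Illustrative numbers: investigations closed in 30 days pre-AI, 21 days
# post-AI (30% faster), while the defect rate rose from 4% to 5% (a 25%
# increase), giving a ratio of roughly 1.2.
print(compression_quality_ratio(30, 21, 0.04, 0.05))
```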

4. Alert-to-Action Conversion Rate

Percentage of AI-generated alerts that result in a confirmed issue, investigation, remediation, or control enhancement. This matters because AI intensifies monitoring. This KPI exposes whether you are drowning in noise or generating actionable intelligence.

5. Burnout Signal Composite

A quarterly composite score built from pulse surveys such as fatigue, workload, autonomy, attrition in compliance roles, sick leave usage trends, and employee assistance program utilization patterns. This matters because compliance effectiveness depends on people. Burnout is a control failure risk.

These five metrics give the CCO and board a shared view of whether AI is improving the compliance function or simply accelerating it toward exhaustion.

How to Measure the Leading Indicators

Compliance leaders need practical methods for measuring after-hours work, cycle time, quality defects, and burnout indicators. Here is a measurement approach that is realistic and defensible.

After-Hours Work

  • Use system log data from the case management, GRC, and document management platforms to track timestamped activity.
  • Supplement with email and collaboration metadata to measure volume outside standard hours.
  • Report trends by team and workflow, not individuals. This is about operating model health, not surveillance.

Cycle Time

  • Establish “start” and “stop” definitions for each workflow:
    • Investigations: intake date to closure date
    • Due diligence: request date to clearance date
    • Policy updates: drafting start date to publication date
    • Regulatory change: trigger identification to implementation
  • Track AI-assisted versus non-AI-assisted cycle times to isolate the impact.

Quality Defects

  • Define defects as “items requiring material correction after initial completion,” including:
    • Incomplete documentation
    • Wrong risk rating or missing rationale
    • Incorrect regulatory mapping
    • Reopened cases due to insufficient analysis
    • Audit findings tied to workflow execution
  • Capture defects through QA sampling, supervisor review logs, audit results, and post-incident reviews.

Burnout Indicators

  • Run a quarterly pulse survey with 5–7 questions on workload, pace, clarity, and ability to disconnect.
  • Track voluntary attrition and vacancy duration for compliance roles.
  • Include aggregate HR indicators such as overtime trends or sick leave usage, where available.
  • Use a composite score and trend it. The trend line is what matters.
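A composite score like this is just a weighted average of normalized indicators. The sketch below assumes each indicator has already been rescaled to 0–100 (higher = worse); the indicator names and weights are illustrative, not a standard:

```python
def burnout_composite(indicators: dict, weights: dict) -> float:
    """Weighted composite of normalized burnout indicators (0-100 scale)."""
    total_weight = sum(weights.values())
    return sum(indicators[k] * weights[k] for k in weights) / total_weight

# Hypothetical Q2 inputs, each normalized to 0-100 where higher = worse.
q2 = {
    "pulse_survey_fatigue": 62.0,   # survey score rescaled to 0-100
    "voluntary_attrition": 40.0,    # attrition vs. historical baseline
    "sick_leave_trend": 55.0,       # deviation from trailing average
}
weights = {"pulse_survey_fatigue": 0.5,
           "voluntary_attrition": 0.3,
           "sick_leave_trend": 0.2}

print(f"Q2 burnout composite: {burnout_composite(q2, weights):.1f}")
```

The single number matters less than its quarter-over-quarter trend, which is the point made above.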

The key is to build instrumentation without creating a culture of monitoring employees. Your goal is not to watch people. Your goal is to protect the control environment.

Adopt an Enterprise AI Practice Standard Now

For an innovation-forward company, the right move is not to slow down. The right move is to govern how you speed up. The call to action is simple: adopt an enterprise AI practice standard as management guidance, owned by Compliance, implemented across all GRC workflows, measured by five work-intensification KPIs, and tested by internal audit and red teaming.

If you do that, you gain three things immediately:

1. A sustainable operating model

2. Defensible governance for regulators and boards

3. A compliance function that remains credible under pressure

AI can make compliance better. But only if the humans who run compliance can still breathe.


Co-Thinking with AI: A New Frontier for Compliance Problem-Solving

Ed. Note: This week, we present a week-long series on the use of GenAI in a best practices compliance program. For each article in the series, I have created a one-page checklist that you can use in presentations or for easier reference. For today’s blog post, I have also made a Compliance AI Dialogue Playbook to illustrate the concepts discussed. If you would like a copy, email my EA, Jaja, at jaja@compliancepodcastnetwork.net.

Compliance officers are, at their core, problem-solvers. We wrestle with thorny questions every day: How do we implement a global gifts-and-entertainment policy across jurisdictions with vastly different cultural norms? How do we balance business pressures with anti-corruption obligations? How do we address new risks like AI itself? Traditionally, compliance officers have relied on their teams, external counsel, and regulators for perspective. But now, there is another partner available: AI as a co-thinker.

Elisa Farri and Gabriele Rosani, in their HBR article, “How AI Can Help Managers Think Through Problems,” argue that generative AI is not simply a productivity booster but a thought partner that can help managers frame problems, weigh trade-offs, and refine decision-making. For compliance professionals, this opens an exciting frontier. Instead of seeing AI as just a summarization or monitoring tool, we can use it to think with us about compliance challenges.

Today, we consider five key takeaways for compliance professionals, each exploring how AI can and should be trusted as a structured co-thinker in corporate compliance problem-solving.

1. AI Can Help Frame Compliance Problems More Clearly

One of the hardest parts of compliance work is problem framing. Regulators do not hand us neat checklists; instead, they give us principles, expectations, and enforcement actions. It’s up to us to translate these into workable policies and controls.

The authors highlight how AI can act as a sounding board, asking clarifying questions, offering perspectives, and reframing issues. In compliance, this is invaluable. For example, when confronting a possible books-and-records violation, you can ask AI to outline the problem from different angles: the DOJ’s perspective, the auditor’s lens, or the business unit’s operational concerns.

This “co-thinking” dialogue helps compliance officers avoid blind spots. By articulating context and criteria while AI proposes reframings or stakeholder perspectives, the problem becomes clearer. Often, clarity is half the solution.

The compliance lesson: Don’t just throw a problem at AI and expect an answer. Use it to refine the question. A well-framed compliance issue is easier to analyze, explain, and ultimately solve.

2. AI Strengthens Root Cause Analysis in Compliance Investigations

Root cause analysis is central to modern compliance. Regulators do not just want misconduct identified; they want to know why it happened and how you’ll prevent it going forward. Yet too often, root cause analysis gets bogged down in assumptions or limited perspectives.

Farri and Rosani cite managers who use AI dialogues to explore underlying causes systematically. For compliance officers, this can be a game-changer. Imagine an investigation into repeated expense-report fraud. AI can walk you through potential cultural drivers (“tone at the top,” sales pressure), structural flaws (weak approval workflows), and training gaps. It can then push back: “Are you overlooking incentives?” or “What if the issue is inadequate third-party vetting?”

By iterating through hypotheses in a structured dialogue, compliance professionals can avoid premature conclusions and dig deeper. This not only strengthens remediation but also demonstrates to regulators that the company engaged in a thorough, multi-perspective analysis.

The compliance lesson: AI co-thinking transforms root cause analysis from a static checklist into a dynamic dialogue, driving richer insights and more defensible conclusions.

3. AI Helps Anticipate Stakeholder Reactions to Compliance Decisions

Compliance isn’t just about rules; it’s about relationships. A compliance policy that looks perfect on paper can fail if stakeholders resist or misunderstand it. That’s why anticipating reactions is essential.

The article describes a communications manager who used AI to role-play stakeholder perspectives. Compliance teams can apply the same method. Suppose you’re rolling out a new third-party due diligence system. You could ask AI to simulate how sales might react (“This slows down deal velocity”), how finance might respond (“We lack resources for added checks”), and how regulators would view the process (“Demonstrates good faith risk management”).

This kind of dialogue allows compliance officers to refine messaging, anticipate objections, and design mitigation strategies before rollout. It’s essentially stakeholder mapping on steroids.

The compliance lesson: Use AI to run “compliance fire drills.” Let it act as different stakeholders, challenge your assumptions, and highlight where communication or process gaps may derail implementation. Better to hear objections from an AI simulation than from the DOJ or your workforce after the fact.

4. AI Supports Compliance Leadership and Mindset Shifts

Compliance is not static; it evolves as risks and expectations change. One of the hardest parts of leadership is helping teams adopt new mindsets. Whether it’s embedding ESG into compliance or shifting from reactive investigations to proactive risk management, change is as much about people as it is about rules.

The authors point to managers using AI to coach teams through mindset shifts. Compliance officers can replicate this by designing AI dialogues that help teams reflect on change. For example: “Act as a compliance coach guiding a regional manager through adopting a risk-based mindset for third-party approvals.” AI can then walk the manager through scenarios, pose self-assessment questions, and suggest daily practices to internalize the change.

This turns AI into a scalable leadership development tool for compliance. It’s not replacing human mentorship but supplementing it, ensuring employees across geographies get consistent coaching.

The compliance lesson is straightforward: AI can democratize leadership development in compliance. By embedding coaching into AI assistants, compliance leaders can scale mindset change while reinforcing culture across the enterprise.

5. AI Encourages Reflective and Ethical Decision-Making

Finally, compliance is about judgment. Not every decision can be reduced to a policy or rulebook. Whether deciding how to respond to a gray-area hospitality offer or whether to self-disclose a violation, compliance officers must weigh trade-offs.

Farri and Rosani emphasize that AI, when engaged as a co-thinker, can enhance reflective decision-making. It does so by slowing us down, asking probing questions, and challenging quick assumptions. This is especially important because compliance officers are often under pressure to deliver fast answers to complex problems.

By prompting reflections such as “What risks might we be missing? What would regulators expect? What precedent are we setting?” AI ensures compliance officers approach decisions with greater ethical clarity. It’s the Socratic method in digital form.

The compliance lesson: AI should not be seen as replacing compliance judgment but as sharpening it. By making space for reflection, AI helps ensure that compliance decisions are thoughtful, principled, and defensible.

From Automation to Co-Thinking

For too long, compliance has viewed AI as a back-office automation tool: summarizing, monitoring, and drafting. Farri and Rosani remind us that AI can do much more: it can think with us.

By helping frame problems, strengthening root cause analysis, anticipating stakeholder reactions, supporting mindset shifts, and fostering reflective decision-making, AI becomes not just a tool but a thought partner. For compliance officers under increasing pressure from regulators and boards, that partnership could be transformative.

The path forward is clear: stop asking “What can AI do for compliance?” and start asking “How can AI help compliance think better?”


Building Your Own AI Assistant: Compliance Lessons in Customization

Ed. Note: This week, we present a week-long series on the use of GenAI in a best practices compliance program. Additionally, for each blog post, I have created a one-page checklist for each article that you can use in presentations or for easier reference. Email my EA Jaja at jaja@compliancepodcastnetwork.net for a complimentary copy.

In the ever-changing world of compliance, resource constraints remain one of our biggest hurdles. Whether you’re drafting policies, conducting risk assessments, or preparing investigation summaries, the work is often repetitive, labor-intensive, and subject to tight deadlines. Enter the AI assistant, not as a futuristic dream, but as a practical, buildable tool available to compliance professionals right now.

Alexandra Samuel’s Harvard Business Review article, “How to Build Your Own AI Assistant,” makes one point crystal clear: if you can describe a project in plain English, you can build your own AI assistant. And for compliance professionals, this represents a transformative opportunity to reduce administrative burdens while increasing consistency, accuracy, and adaptability.

But building your compliance AI assistant isn’t about chasing efficiency alone—it’s about making intentional design choices that reinforce compliance objectives, protect corporate culture, and ensure regulatory defensibility. Today, we consider five key takeaways for compliance professionals, each showing how you can harness AI assistants to enhance, not replace, your compliance program.

1. Start with the Right Use Cases

Before building, compliance leaders must ask: What problems do we want AI to solve? Samuel notes that AI assistants excel in four domains: writing and communications, troubleshooting, project management, and strategic coaching. For compliance, this translates into use cases like:

  • Drafting first-pass policy updates aligned with global regulations.
  • Summarizing enforcement actions for Board reporting.
  • Automating responses to routine employee compliance questions (e.g., “Can I accept this client gift?”).
  • Tracking investigation timelines and automatically extracting action items from meeting transcripts.

Choosing the right use case ensures your AI assistant is a force multiplier rather than a shiny distraction. Importantly, you want to start with low-risk, high-volume tasks. Drafting an anti-corruption annual training memo? AI can handle the boilerplate. Deciding whether to disclose a potential FCPA violation to the DOJ? That still belongs squarely in the human domain.

The real lesson here: compliance officers should not let “AI hype” dictate priorities. Instead, define pain points within your compliance workflow and build assistants targeted at those specific, recurring problems. Start small, iterate, and scale responsibly.

2. Design Clear Instructions—Your Assistant Is Only as Good as Its Guidance

According to Samuel, the “heart” of a custom AI assistant is the set of instructions you provide. For compliance teams, this is where risk and opportunity intersect. If your assistant doesn’t know who it is, what standards to apply, and what tone to use, it will produce outputs that undermine your credibility.

Think of instructions as your assistant’s Code of Conduct. Instead of saying “you are a compliance assistant,” you can be more precise:

  • “You are a corporate compliance officer drafting policies for a multinational company. You must ensure all content aligns with DOJ guidance on effective compliance programs, uses a professional but approachable tone, and provides practical examples for employees.”

These custom instructions allow you to “bake in” compliance frameworks from day one. For example, you can require the assistant to reference the COSO Framework for Internal Controls, ISO 37001, or the DOJ’s Evaluation of Corporate Compliance Programs whenever relevant.
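One way to keep those instructions precise and reviewable is to treat them as structured data rather than free text. The sketch below is a hedged illustration: the field names, guardrails, and `build_system_prompt` helper are all hypothetical, though the frameworks listed mirror those named above.

```python
# Illustrative sketch: assistant instructions as reviewable configuration.
# All field names and the helper function are assumptions, not any platform's API.
COMPLIANCE_ASSISTANT_INSTRUCTIONS = {
    "persona": ("You are a corporate compliance officer drafting policies "
                "for a multinational company."),
    "standards": [
        "DOJ Evaluation of Corporate Compliance Programs",
        "COSO Framework for Internal Controls",
        "ISO 37001",
    ],
    "tone": "professional but approachable, with practical examples for employees",
    "guardrails": [
        "Flag any legal conclusions for human review.",
        "Never invent regulatory citations.",
    ],
}

def build_system_prompt(config: dict) -> str:
    """Render the structured instructions into a single system prompt string."""
    standards = "; ".join(config["standards"])
    guardrails = " ".join(config["guardrails"])
    return (f"{config['persona']} Align all content with: {standards}. "
            f"Tone: {config['tone']}. Guardrails: {guardrails}")

print(build_system_prompt(COMPLIANCE_ASSISTANT_INSTRUCTIONS))
```

Keeping the persona, standards, and guardrails as named fields makes it easier to audit and version-control the assistant’s “Code of Conduct” the same way you would a policy.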

The key compliance insight: good AI assistants reflect great compliance design. Just as vague compliance policies create ambiguity, vague AI instructions create unreliable outputs. Invest time in precise persona-building for your assistant, and you’ll reap consistent, defensible results.

3. Feed It Knowledge—Without Losing Control of Sensitive Data

Samuel emphasizes that AI assistants become truly powerful when equipped with background documents, such as policies, reports, contracts, or training decks. For compliance, this is both a gold mine and a minefield.

On one hand, uploading prior investigation reports, risk assessments, or compliance training modules allows your assistant to generate outputs that reflect your company’s real history and regulatory environment. Imagine an assistant that can instantly pull together a cross-border risk assessment using your own prior filings and internal guidance.

On the other hand, compliance officers must stay vigilant about data protection, privilege, and confidentiality. Sensitive HR records, whistleblower reports, and privileged investigation materials should never be indiscriminately fed into a platform without proper safeguards.

Here lies the balancing act: compliance teams must create AI assistants that are well-informed but tightly governed. This may involve anonymizing data, working through secure enterprise-grade AI platforms, or restricting inputs to public and non-sensitive internal documents.

The compliance lesson is simple but non-negotiable: context matters, but confidentiality reigns supreme. Building a compliance AI assistant means establishing protocols for what can and cannot be shared.

4. Iterate Constantly—Think Like a Compliance Monitor

Just as compliance programs require continuous improvement, so too do AI assistants. Samuel makes it clear that assistants won’t be perfect out of the box. They require ongoing feedback, refinement, and adjustment.

For compliance professionals, this is second nature. We already think in terms of monitoring, auditing, and revising. Apply the same discipline to your AI assistant:

  • Audit its outputs for accuracy, tone, and regulatory defensibility.
  • Track where it consistently underperforms (e.g., misinterpreting data privacy rules) and feed corrective instructions.
  • Periodically, “refresh” its context files to reflect updated regulations, new enforcement actions, or changes in corporate policy.

Samuel suggests asking your assistant to write its own revised instructions based on your feedback. That’s a compliance monitoring exercise in itself—your assistant becomes both subject and participant in continuous improvement.

The compliance takeaway: treat your AI assistant as a dynamic system, not a static tool. Just as DOJ expects ongoing risk assessments and remediation, regulators will expect that AI tools in compliance are actively managed, not blindly trusted.

5. Embed Ethical Guardrails and Accountability

The most important compliance lesson in building your own AI assistant is ensuring accountability. As Samuel warns, assistants can hallucinate or produce flawed outputs. In compliance, this is not simply an annoyance; more importantly, it is a potential liability.

That means your assistant must operate under ethical guardrails:

  • Always include a human-in-the-loop review before any AI-generated compliance document is finalized.
  • Require disclosures when AI was used in drafting policies, reports, or training.
  • Train employees not to treat AI outputs as gospel but as drafts for critical evaluation.
  • Align your assistant’s objectives with compliance KPIs (accuracy, transparency, and defensibility) rather than raw speed.

This mirrors the DOJ’s emphasis on corporate accountability. An AI assistant may help draft your gifts and entertainment policy, but it cannot stand before prosecutors and defend your compliance program. That responsibility remains squarely with leadership.

The compliance lesson here is unmistakable: AI is a tool, not a scapegoat. Build it to augment compliance decision-making, not to absolve it.

From Experiment to Integration

Building your own AI assistant is not a technical challenge. It is a compliance design challenge. As Alexandra Samuel reminds us, if you can describe your project, you can build your assistant. For compliance officers, that means thinking intentionally about use cases, precision in instructions, safeguards for sensitive data, iteration, and ethical guardrails.

The opportunity is immense. With thoughtfully designed AI assistants, compliance professionals can shift their focus from repetitive drafting to higher-order strategy, from administrative overload to proactive risk management. But the responsibility is equally immense. An AI assistant reflects the design choices of its creators, choices that must always prioritize compliance culture, accountability, and trust.


Recalculating AI: Compliance Lessons in Weighing Costs and Benefits of GenAI

Ed. Note: This week, we present a week-long series on the use of GenAI in a best practices compliance program. Additionally, for each blog post, I have created a one-page checklist for each article that you can use in presentations or for easier reference. Email my EA Jaja at jaja@compliancepodcastnetwork.net for a complimentary copy.

For compliance professionals, the rise of generative AI (GenAI) feels like déjà vu. We’ve been here before—with ERP rollouts, e-discovery software, and data analytics tools. Each new technology comes with the same pitch: faster, smarter, cheaper. And each time, compliance officers are tasked with answering a more difficult question: At what cost?

Mark Mortensen’s recent Harvard Business Review piece, “Calculating the Costs and Benefits of GenAI,” provides a framework for thinking about this balancing act. While AI undeniably creates efficiency, Mortensen cautions that organizations risk losing knowledge, engagement, and trust if they fail to evaluate adoption carefully. For compliance leaders, the implications are profound.

Today, we consider five key takeaways from the article for compliance professionals—each one an area where AI’s promise and peril intersect.

1. Efficiency Gains Must Be Weighed Against Knowledge Loss

One of AI’s greatest selling points is speed. It can review contracts in minutes, summarize regulatory changes instantly, and generate risk assessments that previously took weeks. For perpetually under-resourced compliance departments, this is a tantalizing offer.

Yet here lies the first hidden cost: learning. Mortensen reminds us that struggling with a problem, through the back-and-forth revisions of a policy draft, iterative risk-mapping discussions, and even the time spent combing through dense regulations, is what cements knowledge and deepens institutional expertise. If compliance teams outsource too much of that process to AI, the organization risks eroding the very expertise it relies on to interpret nuance.

Consider this: an AI might draft your anti-bribery training materials, but without human engagement in the process, your team loses the chance to sharpen its understanding of new FCPA enforcement trends. Over time, this erodes your compliance program’s intellectual resilience.

The lesson for compliance leaders is clear: use AI to accelerate, not replace, your team’s learning. Make sure staff remain actively engaged in the interpretive process. AI should provide information, not serve as the final arbiter of compliance knowledge.

2. Short-Term Problem Solving Can Inhibit Long-Term Skill Development

“Practice makes perfect” is more than just a proverb; it is a professional truth. Drafting compliance reports builds writing skills, testing control frameworks sharpens analytical ability, and grappling with regulatory ambiguity builds judgment.

But if compliance teams lean too heavily on AI to generate audit memos or to identify anomalies in financial data, they risk undermining their development. Mortensen points out that when we hand tasks to AI, we sacrifice the chance to strengthen the very skills we will need tomorrow.

Consider a scenario where AI consistently handles first drafts of risk assessments. Compliance officers may grow accustomed to editing AI output rather than developing their structured thinking. Over time, the skill gap widens. This leaves organizations dependent on tools that cannot be held accountable when regulators ask tough questions.

From a compliance standpoint, this has a direct connection to sustainability. DOJ guidance emphasizes the need for continuous program improvement and the development of compliance capabilities. A department that loses skills to AI outsourcing may look efficient on paper, but it becomes brittle in practice.

Compliance leaders should strike a balance by reserving certain core tasks, like drafting root cause analyses or preparing investigation reports, for human-led execution, even if AI could technically do them faster. These are the muscle-building exercises of compliance, and like any workout, skipping them leads to long-term weakness.

3. AI Risks Weakening Relationships and Organizational Trust

Compliance does not happen in a vacuum. It thrives or fails based on relationships. Internal trust with business units, credibility with senior leadership, and even informal rapport built during brainstorming sessions all matter.

AI, however, threatens to reduce these interactions. Mortensen notes that the computational power of AI allows individuals to solve problems alone that previously required teams. While efficient, this independence comes at a cost: fewer interpersonal touchpoints, weaker social ties, and ultimately, reduced trust.

For compliance, this risk is especially acute. Much of our effectiveness hinges on being seen as collaborative partners, not bureaucratic enforcers. If AI reduces the frequency of conversations around risk assessments, policy updates, or investigations, compliance officers may lose opportunities to build influence. Worse, an “AI does it all” approach may reinforce perceptions that compliance is transactional rather than relational.

The takeaway here is that AI should never replace human dialogue in compliance. Use it to free up time so compliance officers can spend more energy building relationships with line managers, auditors, and employees, rather than less. The culture of compliance is rooted in trust, and no algorithm can generate that.

4. Engagement and Ownership Can Decline with Over-Automation

Engagement matters. Mortensen defines it as being psychologically present in the work. For compliance professionals, engagement translates into vigilance: spotting red flags, questioning anomalies, and challenging assumptions.

But AI introduces a risk of disengagement. When it summarizes investigation interviews or drafts compliance dashboards, humans can become passive consumers rather than active participants. Over time, “good enough” replaces “deep enough.”

This erosion of ownership is dangerous for compliance. Regulators increasingly expect companies to demonstrate not only robust processes but also genuine cultural buy-in. If compliance staff are disengaged because AI has taken over too many cognitive functions, the program risks becoming a paper tiger, form without substance.

To counter this, compliance leaders should intentionally design workflows where humans must interpret and add value to AI outputs. For example, AI can generate a first-pass risk heat map, but compliance officers should validate and adjust it based on local context and business realities. That layer of judgment keeps engagement alive and maintains a sense of accountability.

Ultimately, compliance is about judgment, not just information. AI can support but never substitute for human ownership of ethical decision-making.

5. Homogenization Threatens Compliance Program Uniqueness

Every compliance program reflects its company’s unique culture, risks, and leadership voice. Mortensen warns that because large language models are convergent technologies, they produce standardized answers. Leaders who rely on AI for memos, presentations, or policies risk erasing their distinctive tone and voice.

For compliance professionals, this risk translates into a loss of authenticity. Regulators, employees, and stakeholders can quickly tell the difference between a policy that reflects real company values and one that reads like a generic AI template. Over time, over-reliance on AI can strip a compliance program of its personality and, with it, its credibility.

The danger goes deeper. If multiple companies rely on AI to draft similar codes of conduct, policies may look indistinguishable. That creates industry-wide convergence at a time when regulators are looking for tailored programs that reflect specific risks. In effect, AI could make compliance programs less defensible, not more.

The path forward is to use AI as a scaffolding tool, not as a finished product. Compliance officers should inject their organization’s unique voice, industry-specific risks, and leadership tone into every AI-assisted document. Authenticity is non-negotiable in compliance. AI can never be allowed to flatten it.

AI Audits for Compliance Leaders

Mortensen’s framework for an “AI value audit” is particularly relevant for compliance. He suggests three steps: (1) determine the types of value a task creates, (2) prioritize and optimize them, and (3) continually reassess with a “milk test” to ensure the value hasn’t expired.

For compliance, this means asking: Does AI enhance our program without undermining knowledge, skills, trust, engagement, or authenticity? If not, the short-term benefits may not be worth the long-term costs.
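One way to operationalize Mortensen's three steps, sketched here with entirely hypothetical scoring conventions, is a simple audit record that totals value across the five dimensions discussed above and applies the "milk test" to flag stale assessments:

```python
from dataclasses import dataclass
from datetime import date
from typing import Dict

# Hypothetical value dimensions, drawn from the five risks discussed above.
VALUE_TYPES = ("knowledge", "skills", "trust", "engagement", "authenticity")

@dataclass
class AIValueAudit:
    task: str
    scores: Dict[str, int]  # dimension -> -1 (AI erodes), 0 (neutral), +1 (AI enhances)
    reviewed_on: date

    def net_value(self) -> int:
        # Step 2: prioritize -- a negative total flags a task to rethink.
        return sum(self.scores.get(v, 0) for v in VALUE_TYPES)

    def milk_test(self, today: date, shelf_life_days: int = 90) -> bool:
        # Step 3: has this assessment "expired"? If so, re-run the audit.
        return (today - self.reviewed_on).days <= shelf_life_days
```

The 90-day shelf life is an arbitrary placeholder; the useful discipline is that every AI-assisted task carries a dated, multi-dimensional value assessment that must be periodically refreshed rather than assumed.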

AI is here to stay, and compliance officers must learn to harness it. But like every tool before it, AI is not a replacement for judgment, culture, and leadership. It is an assistant; the compliance professional remains the evangelist for compliance.


How Generative AI is Transforming Business and Compliance in 2025

One thing I have learned from the digital age is that to stay ahead, we must stay informed and proactive about how new technologies impact corporate governance, ethics, and operational compliance. In this context, generative AI (Gen AI) is no longer a futuristic concept; it is embedded deeply in our everyday activities. Marc Zao-Sanders’ article in Harvard Business Review (HBR), “How People Are Really Using Gen AI in 2025,” presents an excellent opportunity to reflect on how these developments impact compliance, governance, and risk management.

Zao-Sanders highlights a critical shift in how generative AI is utilized: from purely technical assistance towards significantly more personal and emotive applications. With “Therapy/Companionship,” “Organizing my life,” and “Finding purpose” emerging as the top three use cases, it’s clear that users seek emotional and organizational support, demonstrating Gen AI’s versatility beyond traditional technological roles.

Compliance professionals must recognize that as AI increasingly becomes integral to both professional services and personal well-being, the accompanying risk and compliance implications magnify exponentially. The nature of these interactions, often intimate or deeply personal, demands robust data privacy protections and stringent ethical governance frameworks. Businesses integrating these technologies need precise, transparent policies and effective oversight mechanisms to mitigate new compliance risks.

Implications for Compliance Professionals

Enhanced Data Privacy and Ethical Considerations

Zao-Sanders emphasizes the rising prominence of personal and professional support through Gen AI, especially in areas such as AI-based therapy, emotional companionship, and life organization. As users entrust AI with highly sensitive personal data, compliance professionals face increased responsibilities regarding data privacy, security, and the ethical use of data. This scenario elevates the stakes considerably. He notes, “data safety is not a concern when your health is deteriorating,” highlighting users’ willingness to sacrifice privacy for crucial emotional or medical support. Such conditions can quickly lead to ethical and compliance vulnerabilities if businesses fail to manage and protect sensitive user data rigorously.

Organizations must reinforce their compliance strategies to manage ethical risks inherent in AI-human interactions. As Zao-Sanders indicates, professional services, including medical, legal, and financial advisement, are increasingly relying on generative AI, pushing regulatory boundaries. Notably, EY’s deployment of 150 AI agents specifically for tax-related tasks highlights the profound impact of generative AI on professional services, adding layers of complexity to compliance strategies.

Regulatory Response and Enforcement Trends

The article briefly touches on the growing regulatory scrutiny that Gen AI is attracting globally, noting explicitly that governments are “taking more emphatic and explicit positions” due to heightened stakes surrounding AI technology. For compliance professionals, this should serve as a clarion call: regulatory oversight is intensifying. Preparing for audits, demonstrating compliance, and actively engaging with regulatory developments will be essential. The rapid pace of AI adoption necessitates an agile and proactive approach to compliance management that anticipates, rather than merely reacts to, regulatory shifts.

Balancing AI Dependence with Human Oversight

A striking tension highlighted in the article is the debate over the impact of generative AI on human cognitive abilities, decision-making, and ethical judgment. Users express genuine concern about becoming overly reliant on AI, which could erode their ability to think critically and make independent, ethical decisions.

This reliance poses significant implications for compliance officers charged with safeguarding ethical decision-making. Effective compliance programs must emphasize human oversight, cultivating a culture where AI supports rather than supplants human judgment. Investing in AI literacy among employees can mitigate potential over-reliance, fostering an environment where staff understand both the capabilities and limitations of AI.

Compliance in AI-Driven Professional Services

Zao-Sanders illustrates how AI integration into professional tasks is increasingly sophisticated. For instance, the transformation underway at EY, training employees extensively in generative AI, reflects broader industry trends. Compliance officers must respond to these developments by establishing clear standards and compliance checkpoints. It is crucial to determine whether AI outputs meet professional standards, remain unbiased, and do not inadvertently violate regulatory obligations.

Given AI’s pervasive integration into professional judgments (such as tax preparation, legal advice, and medical diagnosis), the accuracy and regulatory compliance of AI-driven outputs become paramount. Compliance programs must integrate AI auditability, accountability, and transparency deeply into corporate governance frameworks.

Practical Compliance Steps in the Gen AI Era

1. Proactive Policy Development and Training

Develop clear policies that outline the acceptable use of generative AI, including specific guidelines on data handling, ethical considerations, and regulatory obligations. Embed these policies into your organization’s culture through rigorous training and communication strategies.

2. Rigorous Risk Assessment and Ongoing Monitoring

Gen AI compliance must adopt continuous monitoring. Regular risk assessments and periodic audits of AI systems will promptly detect and rectify issues. Compliance officers should remain actively involved in assessing new AI technologies for ethical, privacy, and regulatory considerations before full-scale implementation.

3. Transparent Data Practices

Given the heightened public sensitivity to data privacy concerns, as noted by Zao-Sanders’ mention of users’ concerns around data privacy and their cynicism toward Big Tech, companies must prioritize transparent data practices. Clear communication about data usage, consent, and protection measures will foster trust and reduce compliance risks.

4. Ethical AI Governance Frameworks

Design and deploy ethical AI governance frameworks that address algorithmic fairness, transparency, and accountability, ensuring responsible use of AI. These frameworks ensure generative AI tools are deployed responsibly and ethically, aligning with stakeholder expectations and regulatory standards.

5. Encourage Human-AI Collaboration

Foster a balanced approach between AI-driven solutions and human judgment. Reinforce the importance of human oversight to ensure compliance, accuracy, and ethical decision-making, thus minimizing over-dependence on AI.

Looking Ahead—The Compliance Imperative in the Gen AI Landscape

As we approach a future increasingly defined by AI integration, compliance professionals have a unique opportunity to lead their organizations proactively. Understanding and managing the compliance and ethical dimensions of Gen AI is now critical, not optional. The risks and opportunities outlined in Zao-Sanders’ article underscore the urgent need for a strategic, well-informed approach to integrating generative AI into corporate compliance frameworks.

Compliance professionals should view this moment as an opportunity to demonstrate thought leadership, to guide ethical AI adoption, and to establish robust frameworks that enable businesses to thrive responsibly. By proactively addressing the compliance and moral challenges presented by generative AI, we not only fulfill our professional obligations but also position our organizations as ethical, forward-thinking leaders in the digital age. The compliance journey ahead is demanding, but equally, it offers profound opportunities to influence and shape a responsible, compliant, and ethically robust AI-driven future.


Tariff Week, Part 1 – Navigating Uncertainty: The Compliance Professional’s Guide to Trump’s Tariffs

This week, we will examine the macroeconomic implications of President Trump’s recent tariff hikes and suspensions, a critical issue reverberating across boardrooms globally. Business leaders and compliance professionals are grappling with how to navigate this unprecedented landscape, and understanding the nuances of this evolving situation is crucial for corporate strategy and compliance preparedness. Today, we will take a macroeconomic view.

Last week, President Trump dramatically escalated tariffs on U.S. trading partners, elevating the average effective tariff rate to approximately 23%. This sharp increase has left markets reeling and businesses scrambling to adapt. Just as quickly (within 48 hours), he suspended many of the new tariffs, returning rates to roughly their original levels. This situation illustrates the growing complexity and volatility that executives must manage, highlighting the vital role that corporate compliance teams play in preparing businesses for macroeconomic shocks.
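For readers unfamiliar with the term, the "average effective tariff rate" is a trade-weighted average: each partner's tariff rate weighted by that partner's share of total imports. A toy calculation, using entirely hypothetical shares and rates rather than the actual 2025 schedule, shows the mechanics:

```python
# Illustrative only: these partner shares and tariff rates are hypothetical,
# not the actual 2025 tariff schedule.
import_shares = {"Partner A": 0.40, "Partner B": 0.35, "Partner C": 0.25}
tariff_rates = {"Partner A": 0.30, "Partner B": 0.25, "Partner C": 0.10}

# The average effective tariff rate is each partner's rate weighted by
# its share of total imports.
effective_rate = sum(import_shares[c] * tariff_rates[c] for c in import_shares)
print(f"Average effective tariff rate: {effective_rate:.1%}")
```

Re-running the same arithmetic after a suspension changes one or two of the rates makes plain why the headline average can swing so sharply within days, and why static planning assumptions fail in this environment.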

I was therefore interested in a recent Harvard Business Review article entitled “Understanding the Global Macroeconomic Impacts of Trump’s Tariffs” by authors Philipp Carlsson-Szlezak, Paul Swartz, and Martin Reeves. In this article, they considered how Trump’s tariff imposition and roll-back moves “have jolted markets and thrust business leaders into deep uncertainty. Developing a better understanding of tariffs’ primary and secondary macroeconomic effects and any plausible long-term consequences will allow executives to assess the impact on their markets and businesses continuously. With so much in flux, leaders must ditch rigid plans and build flexible, analytical muscle to navigate this turbulent new landscape.”

At its core, this situation underscores the asymmetrical nature of trade wars. The United States, due to its significant trade deficit, initially seemed well-positioned to engage in targeted trade disputes. However, by initiating a comprehensive, 360-degree trade war affecting virtually all global trading partners simultaneously, the U.S. has dramatically altered the landscape of risk and opportunity. This asymmetry is critical; while the U.S. experiences cumulative impacts from numerous trade disputes, its trading partners face singular impacts from the U.S. alone.

Understanding the primary effects of tariffs requires compliance professionals to differentiate clearly between supply and demand shocks. For U.S. businesses, supply shocks are particularly pertinent. Tariffs, effectively taxes on imports, invariably translate into higher consumer prices, fueling inflation. This scenario is reminiscent of the post-pandemic supply chain disruptions we have navigated, curtailing real incomes and restraining economic growth. Analysts predict these new tariffs could slash U.S. GDP growth by approximately 1.4%, significantly impacting corporate forecasts and strategic planning.

Trade partners face their own challenges. Retaliatory tariffs, already implemented by China and under consideration by others, inflict similar inflationary pressures and consumption downturns, albeit typically on a smaller scale, estimated between a 0.1% to 0.3% GDP reduction. However, demand shocks to these trading partners could be more severe, depending on the price sensitivity of U.S. imports. Countries heavily dependent on the U.S. market, such as Vietnam, might witness GDP contractions exceeding 6%, illustrating the profound impact that tariff-induced demand disruptions can have on certain economies.

Compliance teams must also monitor and prepare for secondary impacts. The five critical secondary channels to watch are confidence erosion, ROI effects, monetary policy errors, diminished competitiveness, and potential new financial and other shocks. Decreased consumer and business confidence could dampen spending, hiring, and investment behaviors. Additionally, while historically not always leading to recession, equity market volatility poses tangible threats to corporate balance sheets and overall financial stability.

Moreover, the tariffs significantly affect competitiveness. Approximately half of U.S. imports consist of production inputs essential for domestic manufacturing, such as steel and machine tools. Increased production costs stemming from tariffs could, therefore, undermine U.S. businesses’ competitive positions globally, an area where compliance teams must remain vigilant and advise on risk mitigation strategies.

The long-term impacts of these tariffs also warrant consideration. The Trump administration aims to reallocate global production to bolster U.S. manufacturing and employment. Unlike the Biden administration’s CHIPS Act, which strategically incentivized high-productivity sectors like semiconductors, the broad scope of Trump’s tariffs risks fostering lower-productivity industries domestically. This shift could crowd out higher-value sectors due to competition for already scarce labor resources, diminishing overall economic productivity and potential.

This scenario demands that compliance professionals embrace continuous learning and adaptability. The volatility and complexity introduced by the tariff situation reinforce the necessity of dynamic analytical capabilities over static compliance strategies. Compliance leaders must ensure their organizations develop robust analytical frameworks to assess and respond continuously to evolving macroeconomic conditions.

Organizations must regularly revisit their risk assumptions, factoring in the potential global reshuffling of trade flows. If major exporters redirect goods previously destined for the U.S. to other markets, it could trigger a broader global trade conflict, requiring compliance officers to adjust corporate risk assessments and response strategies rapidly.

Finally, executives and compliance professionals should approach this situation with a dual lens, balancing tactical short-term responses with strategic long-term considerations. Immediate tactical decisions are necessary, but it is equally critical to analyze potential structural changes in global trade dynamics that may unfold over the coming decade.

Managing macroeconomic uncertainty, such as the ongoing 360-degree trade war, is increasingly becoming an essential competency for compliance professionals. Those who proactively develop sophisticated, agile analytical capabilities will be better equipped to navigate these uncertain waters, providing their organizations with strategic advantage in tumultuous economic conditions.


The Compliance Frontier of the AI Era, Part 1 – Navigating Strategy in the AI Era

Compliance is early in the AI era, and the technology is quickly evolving. Many service providers are introducing AI “copilots,” “bots,” and “assistants” into applications to augment compliance workflows. These compliance tools have been trained on various data sources and possess expansive expertise in many domains. The level of knowledge in these tools is still growing rapidly while the cost of accessing them is decreasing. In an article in the Harvard Business Review (HBR), authors Bobby Yerramilli-Rao, John Corwin, Yang Li, and Karim R. Lakhani posit that soon there will be “more advanced ‘AI agents’ equipped with greater capability and broader expertise that will be operating on behalf of users with their permission. Companies that benefit from AI can conduct business more efficiently, innovate more nimbly, and grow with sharpened vision and focus.”

Their article, “Strategy in an Era of Abundant Expertise,” provides crucial insights into how artificial intelligence (AI) transforms the competitive landscape by reshaping how businesses leverage expertise. The authors argue convincingly that we have entered an era defined by two compelling forces: the exponentially increasing volume of knowledge and the dramatically reduced cost of accessing it. Today, we begin a two-part exploration of their article and how their insights apply to compliance. In Part 1, we consider how this transformation in expertise accessibility is fundamentally altering business strategies and operational models. Tomorrow, in Part 2, we will consider their article’s lessons for the compliance profession.

The Transformation of Expertise

At its core, expertise is the deep theoretical knowledge and practical know-how necessary to perform specific tasks effectively. Historically, businesses succeeded by developing unique expertise that differentiated them from competitors. Examples such as Toyota’s mastery of lean manufacturing and Walmart’s superior distribution capability illustrate how critical specialized knowledge has been to corporate dominance.

However, AI is now dramatically changing this traditional paradigm. Today, specialized expertise, once costly and confined within the walls of large organizations, is becoming broadly available at much lower costs. AI-powered tools are emerging as pivotal “copilots,” augmenting human capabilities across numerous business functions. This shift means companies no longer need extensive internal expertise in all areas but can strategically access external AI-powered resources to fill gaps and streamline operations.

The Dual Forces of AI

The authors pinpoint two fundamental forces driving the AI-era transformation: (1) the continuous expansion of global expertise and (2) the decreasing cost of access. These intertwined forces have a profound influence on corporate strategy and organizational structure.

The expanding body of global expertise means businesses now face the impossible task of staying ahead in all relevant knowledge domains. For example, the article highlights biotech firms, where AI applications for drug discovery have surged astronomically, making it impossible for any firm to master all available knowledge independently. Simultaneously, the cost of accessing this ever-growing expertise is plummeting, lowering barriers to market entry and significantly changing competitive dynamics.

Companies such as Instagram and TikTok illustrate this trend vividly. They provide content creators with advanced tools formerly reserved for industry professionals, leveling the playing field and democratizing expertise.

Strategic Implications of AI Adoption

The authors argue convincingly that businesses leveraging AI effectively will see a “triple product” return characterized by more efficient operations, increased workforce productivity, and sharper strategic focus. Specifically, AI enables companies to refine their focus on core strategic activities, using AI-driven solutions to manage non-core functions efficiently.

A notable example is Moderna, which employed AI to create more than 900 specialized internal assistants, dramatically improving the speed and accuracy of business processes across its operations. Such integration of AI significantly raises organizational productivity and effectiveness by automating routine tasks and freeing human expertise for more complex strategic considerations.

Reallocating Resources and Refining Focus

A critical benefit of AI highlighted in the article is resource reallocation toward activities that generate maximum value. Companies can now clearly identify core processes where they excel and leverage AI-powered platforms for support activities. The startup FocusFuel, a manufacturer of caffeinated gummies, effectively demonstrates this approach. By strategically outsourcing non-core activities such as market analysis, packaging design, and logistics to AI-enabled platforms, FocusFuel rapidly established itself, achieving significant revenue growth within months of launch.

This trend signifies a paradigm shift in business operations. Organizations increasingly realize that sustaining competitive advantage means intensifying their efforts in select, strategically valuable areas rather than attempting to excel broadly. This approach enables businesses to achieve greater agility, efficiency, and responsiveness in rapidly evolving markets.

Organizational Change and Cultural Adaptation

The authors emphasize that successfully adopting AI is not merely a technological upgrade; it requires significant organizational and cultural change. Companies must prepare their employees to operate effectively alongside AI tools, embedding AI expertise into everyday processes. This preparation involves substantial investments in training and education, exemplified by Moderna’s successful establishment of an “AI academy,” offering mandatory AI education to all employees.

Furthermore, managing organizational change requires a proactive approach to cultivating internal AI champions who can accelerate adoption and encourage widespread acceptance. Coursera is a leading example, swiftly integrating AI capabilities into multiple operational facets after initially embracing AI for coding tasks. This rapid adaptation showcases the profound impact of investing in technology and human capabilities.

Future-Proofing Strategic Advantages

Companies must continually reassess their strategic foundations as AI continues its rapid advancement. Three critical questions outlined by the authors guide strategic reevaluation:

  1. What UX problems will AI soon allow users to solve independently? As AI increasingly empowers customers directly, businesses must rethink their value propositions and reinvent user (customer/employee/supplier) interactions.
  2. What existing expertise must companies evolve to remain ahead of advancing AI capabilities? As AI matches or surpasses human capabilities in numerous tasks, companies must strengthen inherently human competencies such as empathy, creativity, and strategic judgment to differentiate themselves effectively.
  3. What strategic assets can companies leverage to maintain competitive advantages against advancing AI? Businesses must identify durable sources of advantage less susceptible to AI disruption, such as strong brand identities, deep customer relationships, proprietary physical assets, or potent network effects.

These questions illustrate the strategic depth required to successfully navigate the evolving AI landscape. They underline that the future will reward companies leveraging unique human capabilities and durable competitive advantages alongside AI expertise.

Embracing the AI-Driven Future

Ultimately, the article provides an incisive and timely exploration of the strategic implications of AI’s ascendancy. Companies facing today’s competitive realities must recognize AI’s transformative power and strategically integrate it into their operational and competitive frameworks.

For compliance professionals, whose effectiveness increasingly depends on understanding broader strategic developments, grasping these AI-driven shifts is vital. The emerging landscape characterized by abundant and accessible expertise demands a strategic response that embraces the combined strengths of AI and uniquely human insights.

As businesses move forward in this transformative era, the organizations that adeptly balance AI-driven operational efficiencies with strategic differentiation will undoubtedly emerge as leaders in their respective markets. The insights provided by the authors serve as a compelling call to action for all professionals, compliance included, highlighting the strategic imperative of integrating AI effectively to thrive in the rapidly evolving future of business.


Building Trust in AI with Blockchain: A Compliance Perspective

Artificial Intelligence (AI) has rapidly become a key driver of business decision-making across industries, from financial services to healthcare. Yet, despite its enormous potential, AI remains a “black box” that raises serious concerns about transparency, accountability, and fairness. According to Pew Research, 52% of Americans are more concerned than excited about AI, while only 10% express enthusiasm. This trust deficit presents a critical challenge for compliance professionals: how can organizations demonstrate responsible AI use and ensure compliance with evolving regulatory expectations?

I was therefore intrigued to read a recent article in the Harvard Business Review by Scott Zoldi and Jordan T. Levine entitled “Using Blockchain to Build Customer Trust in AI.” Their response to this quandary was to look at FICO, a leader in credit scoring and analytics, which developed a private blockchain that automated documentation and standards in model development. FICO’s approach leaned directly into a series of strategies used by compliance professionals.

The Compliance Challenge of AI

AI’s ability to analyze vast amounts of data and generate predictions is its greatest strength and its most significant liability. Machine learning models can reinforce biases, lack interpretability, and operate without clear accountability. Compliance professionals must address these challenges head-on by ensuring that AI models are:

  • Interpretable: Customers and regulators need to understand how AI models make decisions.
  • Auditable: Organizations must maintain detailed records of AI development and deployment.
  • Enforceable: Compliance teams need mechanisms to ensure adherence to ethical AI standards.

Without these three pillars, AI risks becoming a compliance nightmare that could lead to regulatory penalties, reputational damage, and loss of customer trust.

Blockchain ensures that AI models are developed following internal guidelines and regulatory requirements. Every modification to the model, from data selection to algorithmic tuning, is permanently recorded, making it easier for compliance officers to track decisions and pinpoint the cause of any discrepancies. This immutable nature benefits industries with strict regulations, such as finance and healthcare, where audits and regulatory reviews are routine.

Additionally, blockchain helps prevent unauthorized alterations by requiring cryptographic verification before changes are accepted into the system. Any attempt to introduce bias, manipulate datasets, or adjust algorithms must be documented and approved transparently. This enhances accountability and strengthens organizational trust in AI.
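FICO's private blockchain is far more elaborate than anything a few lines can show, but the core tamper-evidence idea, each record's hash depending on the record before it, can be sketched simply. All class and field names below are hypothetical:

```python
import hashlib
import json

class ModelAuditLedger:
    """Minimal append-only, hash-chained log of AI model changes.
    A toy illustration of tamper-evidence, not FICO's actual system."""

    def __init__(self):
        self.entries = []

    def record(self, change: dict) -> str:
        # Each entry's hash covers the previous hash plus its own payload.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(change, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"change": change, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        """Recompute every hash; any retroactive edit breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["change"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

Because every hash chains back to its predecessor, silently editing any earlier entry invalidates the entire chain on the next verification pass, which is what makes retroactive, undocumented changes detectable rather than invisible.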

Blockchain’s integration into AI governance fosters cross-functional collaboration between compliance, legal, and data science teams. Using a single, tamper-proof source of truth, organizations can streamline communication and ensure that AI-related decisions align with corporate policies and industry standards. This collaborative approach mitigates risks and reduces inefficiencies, allowing businesses to innovate responsibly while maintaining regulatory compliance.

For compliance professionals, blockchain provides an operational framework supporting continuous AI model monitoring and improvement. It facilitates real-time oversight, allowing organizations to identify potential compliance risks before they escalate into regulatory violations or reputational damage. As AI technology evolves, blockchain’s role in governance will likely expand, offering even greater opportunities for secure, transparent, and ethical AI development.

Blockchain: A Path to AI Accountability

Blockchain technology offers a potential solution by providing an immutable, transparent record of AI model development and decision-making. The authors reviewed FICO’s adoption of blockchain. They learned, “Making this system work was less a tech challenge than a people one. They learned it was important to start with standards, then develop the tech; that making the system user-friendly was non-negotiable; that it was essential to iterate on quick wins; that they had to build repositories to hold large AI assets in alternate storage; and that they needed capable IT teams to handle the maintenance demands of this system.”

By moving from traditional documentation methods (such as Word documents) to a private blockchain, FICO:

  • Reduced model support issues and recalls by over 90%.
  • Created a single source of truth for AI model development.
  • Ensured absolute adherence to AI governance standards.

Blockchain’s ability to create an auditable trail of every change, test, and decision made during AI model development provides a powerful compliance tool. Unlike conventional documentation, blockchain prevents unauthorized changes and ensures compliance teams can verify AI decisions long after they are made.

Beyond compliance, blockchain enhances the efficiency of AI governance by automating tracking mechanisms that reduce administrative burdens. Traditionally, managing AI development required extensive oversight, documentation, and verification processes, often prone to human error or omission. By leveraging blockchain, organizations can automate this oversight, ensuring that model updates, training datasets, and algorithmic adjustments are securely recorded in a tamper-proof ledger. This improves compliance and accelerates AI innovation by reducing bottlenecks in model validation.

Additionally, blockchain’s transparency enables better cross-functional collaboration between compliance officers, data scientists, and IT security teams. Instead of relying on disparate documentation and periodic audits, stakeholders can access a real-time, immutable ledger of AI development activities. This fosters greater accountability and ensures that AI models align with ethical guidelines, regulatory requirements, and corporate governance policies from inception to deployment.

Blockchain can mitigate risks associated with AI bias and ethical concerns by providing a structured framework for tracking model modifications and testing processes. Any deviation from approved methodologies is recorded, allowing organizations to detect and address potential issues before they impact decision-making. This proactive approach strengthens AI reliability and fosters trust among regulators, customers, and stakeholders who demand greater transparency in automated decision-making processes.

By integrating blockchain into AI governance, organizations gain a robust compliance tool that ensures models are developed responsibly, deployed ethically, and maintained transparently. As regulatory scrutiny around AI continues to grow, adopting blockchain-based governance is not just an operational advantage; it can provide both a strategy and mechanism for maintaining trust and regulatory compliance in the evolving AI landscape.

Key Compliance Lessons from FICO’s Blockchain Approach

1. Standards Must Come First

Before implementing blockchain, organizations must establish clear AI development standards. This includes defining acceptable algorithms, ethical testing methodologies, and regulatory compliance requirements. Without these guardrails, blockchain is just another technology without purpose.

2. User Adoption Requires a Seamless Experience

One of the biggest hurdles in AI governance is ensuring that data scientists comply with established processes. At FICO, blockchain-based AI governance became non-negotiable—developers could not release models without following the blockchain-tracked workflow. Making compliance seamless rather than burdensome is key to adoption.

3. AI Governance Must Be Iterative

FICO’s blockchain approach evolved, starting with small proofs of concept before scaling across its AI development teams. Compliance professionals should take a similar approach, testing blockchain governance in high-risk areas before expanding its use across the organization.

4. Immutable Records Are Key for Regulatory Defense

Regulators are increasingly scrutinizing AI-driven decisions, especially in highly regulated industries such as finance and healthcare. An immutable record of AI development, testing, and deployment provides a powerful defense against regulatory inquiries. It also enables organizations to proactively demonstrate compliance rather than scrambling to justify decisions after the fact.

5. Blockchain Is a Tool, Not a Silver Bullet

While blockchain enhances AI governance, it is not a substitute for a strong compliance program. Organizations must still conduct rigorous ethical testing, monitor AI performance, and engage with regulators to ensure ongoing compliance. Blockchain should be viewed as an enabler of trust, not a cure-all.

Final Thoughts: The Future of Compliance in AI Governance

As AI becomes more embedded in business operations, compliance professionals must evolve their oversight strategies to keep pace. Blockchain offers a compelling approach to ensuring AI accountability, but it requires careful implementation, clear governance standards, and buy-in from business leaders.

FICO’s success demonstrates that trust follows when AI governance is built on transparency, auditability, and enforceability. Compliance professionals who embrace blockchain’s potential can help bridge the trust gap in AI, ensuring that these powerful technologies are used responsibly, ethically, and in full compliance with regulatory expectations.

For compliance teams, the question is no longer whether AI governance needs to evolve but how quickly organizations can implement solutions that keep AI accountable. Blockchain is one step in the right direction.

Categories
Blog

The Compliance Sabbatical

The world of corporate compliance is demanding. It requires constant vigilance, deep ethical reasoning, and navigating ever-evolving regulatory landscapes. Compliance professionals are often the last line of defense against misconduct, ensuring companies adhere to laws and ethical standards. But with great responsibility comes great stress, and burnout is an all-too-common reality in our field. I was intrigued when I came across a recent article in the Harvard Business Review by DJ DiDonna, entitled The Case for Sabbaticals — and How to Take a Successful One.

A sabbatical, defined by DiDonna as an intentionally extended leave from your job-related work, may seem out of reach for many workers, but if you can swing it, the potential payoff is enormous. His research and interviews with more than 250 sabbatical-takers reveal the key attributes that define these breaks, the three distinct sabbatical types, and the hurdles one must overcome to persuade bosses, colleagues, and yourself that it is a good idea. DiDonna makes a compelling argument that stepping away from work for a meaningful period is not simply beneficial; it can be transformative. For compliance professionals who operate under high-pressure conditions, a sabbatical can be essential for maintaining long-term effectiveness and well-being.

The Compliance Burnout

Compliance officers work in an environment of constant scrutiny. The stakes are high, and the margin for error is razor-thin. Between managing regulatory risks, conducting investigations, and ensuring ethical corporate behavior, the stress can take a cumulative toll. Research shows that burnout leads to reduced effectiveness, poor decision-making, and even ethical lapses, precisely what compliance professionals are hired to prevent. A sabbatical offers a structured way to step back before burnout reaches critical levels. It allows professionals to reset mentally and physically, returning to work with renewed energy and sharper focus.

Benefits of a Sabbatical

1. Reconnecting with Purpose

One of the most significant benefits of a sabbatical is reassessing professional and personal priorities. Many compliance professionals enter the field driven by a strong ethical compass and a desire to make a difference. However, the daily grind, dealing with corporate bureaucracy, managing regulatory challenges, and sometimes confronting internal resistance can wear down that initial sense of purpose.

A sabbatical provides space to reflect on career goals and reconnect with the motivations that drew one to compliance in the first place. DiDonna’s research highlights that many sabbatical-takers return with a clearer sense of direction, often making strategic career shifts or doubling down on their professional mission.

2. Enhancing Strategic Thinking

Regulatory compliance is a dynamic field. Laws change, enforcement priorities shift, and new risks emerge. Staying ahead requires strategic thinking and adaptability. Yet, when professionals are caught up in the day-to-day pressures of compliance, it can be difficult to see the bigger picture.

A sabbatical can foster deep thinking and learning that compliance professionals rarely have time for. Whether through travel, study, or personal projects, time away from routine responsibilities can lead to fresh insights that improve compliance strategy and risk management upon return.

3. Cultivating Resilience and Creativity

Innovation isn’t a word often associated with compliance, but the best compliance programs thrive on creative problem solving. How do you foster a speak-up culture? How do you implement effective training that resonates with employees? How do you navigate gray areas where the law is ambiguous?

Time away from work stimulates creativity, especially when spent in new environments or pursuing new experiences. Compliance officers who take sabbaticals often return with novel approaches to training, policy implementation, and risk assessment.

Practical Steps to Make a Sabbatical Work

Despite the benefits, many compliance professionals hesitate to take a sabbatical. They worry about job security, financial implications, and how their absence might impact their organization. However, with careful planning, a sabbatical is more feasible than most professionals realize.

  1. Plan Ahead: A sabbatical does not have to mean quitting your job. Many organizations offer formal sabbatical programs, and even those that do not may accommodate unpaid leave for valued employees. The key is to plan early and present a business case for how your time away will ultimately benefit the organization.
  2. Set Clear Boundaries: A true sabbatical means fully disconnecting from work. That means no checking emails or staying involved in projects remotely. The point is to create distance, both physically and mentally.
  3. Structure Your Time: A sabbatical should be intentional, whether traveling, volunteering, studying, or simply spending time with family. The goal is not simply to take time off but to recharge through engaging in experiences that provide renewal and perspective.

A Strategic Investment in Longevity

Corporate compliance isn’t a sprint; it’s a marathon. To be effective over the long haul, professionals need to pace themselves. Taking a sabbatical is not a luxury; instead, it is an investment in the longevity of individuals and the organizations they serve. Companies benefit when their compliance teams are engaged, refreshed, and thinking strategically.

If compliance professionals want to avoid burnout, enhance their strategic thinking, and return to work with renewed purpose, they should seriously consider taking a sabbatical. The research is clear: stepping away, even temporarily, can make all the difference.

Categories
Blog

AI, Process Management, and Compliance

Integrating artificial intelligence (AI) and advanced analytics with robust process management principles can unlock new levels of efficiency and innovation. Mars Wrigley, the global confectionery leader, offers an instructive case study. In a Harvard Business Review article entitled “How to Marry Process Management and AI,” Thomas H. Davenport and Thomas C. Redman wrote that through its strategic deployment of AI to digitize its supply chain and manage operations, Mars Wrigley demonstrates how a systematic approach to process management can achieve significant improvements in operational performance, customer satisfaction, and sustainability.

Mars Wrigley’s success story holds valuable lessons for compliance professionals about aligning technology, data, and governance to enhance compliance frameworks and drive value across organizations.

Digitization and AI: The New Frontier for Process Management

Mars Wrigley began its journey by building a digital twin of its production line and feeding real-time operational data into machine-learning models. The results were striking. The company received predictive insights that reduced overfilling, minimized waste, and optimized supply chain processes. They partnered with vendors like Aera Technology for data visualization and preventive maintenance and with Kinaxis to balance supply and demand, automate invoices, and increase truck utilization by 15%.

This underscores a critical point from a compliance standpoint: Technology can only enhance compliance when processes are well-defined, integrated, and aligned with organizational goals. Compliance officers must recognize the potential of AI to streamline compliance monitoring, enhance risk detection, and reduce manual inefficiencies.

For example, consider AI tools that monitor high-risk transactions or flag anomalies in employee expense reports. When implemented in a robust compliance framework, these tools improve detection rates and allow compliance teams to focus on strategic initiatives rather than routine checks.

The Role of Process Management in Compliance

Process management is about understanding how tasks fit together to create a specific outcome and then optimizing those sequences. Put another way, it is about operationalizing compliance. Whether addressing department-level activities or end-to-end processes, process management principles can yield transformative results when applied to compliance. What are some of the ways process management can do so?

In areas as basic as error reduction, well-managed processes minimize compliance failures by reducing error rates and increasing consistency. Cross-functional coordination with other corporate departments is another traditional compliance responsibility. Effective compliance requires breaking down silos, whether between legal, finance, HR, or operations, and aligning departments toward common objectives.

This approach can also positively impact corporate culture by increasing stakeholder buy-in and employee engagement. Process management often conflicts with hierarchical management structures. In compliance, this tension may manifest when reconciling DOJ mandates with operational priorities in your organization. Persuading stakeholders to prioritize compliance demands strong leadership and effective change management.

AI and Process Management: A Compliance Blueprint

AI supports specific subprocesses within larger workflows, but true transformation occurs when organizations integrate these capabilities across end-to-end processes. For compliance professionals, this is a roadmap for embedding AI into compliance programs.

Step 1: Establish Ownership

Every effective compliance initiative begins with clear accountability. A defined ownership structure underpinned Mars Wrigley’s digital twin success. Compliance programs require similar clarity. Appointing a “compliance process owner” ensures cross-functional alignment, while department-level compliance champions can coordinate implementation.

Step 2: Map and Redesign Processes

Mapping current compliance processes is essential for identifying inefficiencies. Process mining tools, which analyze enterprise system logs to identify bottlenecks, can uncover hidden risks. For instance, tracking the due diligence lifecycle in third-party onboarding can reveal inefficiencies, such as delays in background checks or missed follow-ups.
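The core of that process mining idea can be illustrated simply: group event-log rows by case, sort by timestamp, and measure how long each handoff takes. The event fields and stage names below are hypothetical, and a real process mining tool would read logs exported from the onboarding system rather than an inline list:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event log: (case_id, activity, timestamp) rows, as a
# process mining tool might extract from a third-party onboarding system.
events = [
    ("TP-001", "intake",           "2024-01-02"),
    ("TP-001", "background_check", "2024-01-03"),
    ("TP-001", "approval",         "2024-01-20"),
    ("TP-002", "intake",           "2024-01-05"),
    ("TP-002", "background_check", "2024-01-06"),
    ("TP-002", "approval",         "2024-01-28"),
]


def stage_durations(events):
    """Average days spent between consecutive activities, across all cases."""
    cases = defaultdict(list)
    for case_id, activity, ts in events:
        cases[case_id].append((datetime.fromisoformat(ts), activity))
    totals = defaultdict(list)
    for steps in cases.values():
        steps.sort()  # order each case's events chronologically
        for (t0, a0), (t1, a1) in zip(steps, steps[1:]):
            totals[(a0, a1)].append((t1 - t0).days)
    return {stage: sum(d) / len(d) for stage, d in totals.items()}
```

On this sample data, the background-check-to-approval stage averages nearly twenty days against a one-day intake stage, which is exactly the kind of bottleneck the mapping exercise is meant to surface.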

Redesign efforts should prioritize risk-prone areas, leveraging AI tools to streamline activities like transaction monitoring, policy distribution, and whistleblower case tracking.

Step 3: Define Metrics and Set Targets

Compliance performance must be measurable. Metrics such as incident resolution times, training completion rates, and risk assessment quality should guide process improvements. AI enables real-time metrics monitoring, providing insights that compliance officers can act on immediately. Mars Wrigley’s use of analytics to improve truck utilization offers a parallel for compliance: by tracking resource allocation, compliance teams can reduce unnecessary costs while ensuring optimal coverage of risk areas.
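A resolution-time metric of this kind is straightforward to compute. The records, field names, and 21-day target below are illustrative assumptions; in practice the data would flow from a case management system, with AI surfacing the numbers continuously rather than on demand:

```python
from datetime import date

# Hypothetical incident records from a case management system.
incidents = [
    {"opened": date(2024, 3, 1),  "closed": date(2024, 3, 10)},
    {"opened": date(2024, 3, 5),  "closed": date(2024, 4, 2)},
    {"opened": date(2024, 3, 12), "closed": date(2024, 3, 20)},
]

TARGET_DAYS = 21  # illustrative service-level target for resolution


def resolution_metrics(incidents, target=TARGET_DAYS):
    """Average resolution time and share of incidents closed within target."""
    days = [(i["closed"] - i["opened"]).days for i in incidents]
    return {
        "avg_days": sum(days) / len(days),
        "pct_within_target": 100 * sum(d <= target for d in days) / len(days),
    }
```

Tracking the percentage against a target, not just the average, matters: one long-running investigation can hide inside a healthy-looking mean.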

Step 4: Leverage Technology and Data

AI tools such as robotic process automation (RPA) and natural language processing (NLP) are increasingly used in compliance programs to automate routine tasks. RPA can streamline repetitive activities like generating regulatory reports. NLP can analyze large volumes of text, such as contracts or policies, to identify risks or inconsistencies.
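The contract-review use case can be sketched in miniature. The risk lexicon below is hypothetical and a genuine NLP pipeline would use trained language models rather than keyword patterns; this only illustrates the flagging workflow of mapping risk labels to the clauses that triggered them:

```python
import re

# Illustrative risk lexicon -- labels and patterns are assumptions for
# this sketch, not a vetted compliance taxonomy.
RISK_PATTERNS = {
    "facilitation_payment": r"\bfacilitat(?:e|ion) payments?\b",
    "unlimited_liability": r"\bunlimited liability\b",
    "no_audit_rights": r"\bno (?:right|rights) to audit\b",
}


def flag_clauses(contract_text: str) -> dict:
    """Return risk labels mapped to the sentences that triggered them."""
    sentences = re.split(r"(?<=[.!?])\s+", contract_text)
    hits = {}
    for label, pattern in RISK_PATTERNS.items():
        matched = [s for s in sentences if re.search(pattern, s, re.IGNORECASE)]
        if matched:
            hits[label] = matched
    return hits
```

Even this toy version shows the division of labor: the tool surfaces candidate clauses at scale, and the compliance professional applies judgment to what it finds.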

Compliance professionals must also advocate for standardized data practices. As Mars Wrigley’s case illustrates, data silos impede process efficiency. In compliance, inconsistent data can obscure risks, making standardized data governance a cornerstone of effective compliance.

Step 5: Foster a Culture of Continuous Improvement

AI and process management are not “set-it-and-forget-it” solutions. As Mars Wrigley demonstrated, continuous monitoring and iterative improvements are critical for sustaining gains. For compliance professionals, this means regularly reviewing and updating AI tools to address emerging risks and regulatory changes.

Lessons for Compliance Professionals

Mars Wrigley’s journey highlights several key takeaways for compliance leaders:

  1. Invest in AI Thoughtfully. Technology is not a silver bullet. Its effectiveness depends on how well it integrates with and supports compliance processes.
  2. Adopt a Holistic View of Compliance. Compliance risks rarely confine themselves to one department. Breaking down silos through cross-functional process management improves visibility and reduces risk.
  3. Prioritize Data Governance. High-quality, standardized data is essential for both AI and compliance. Without it, even the best tools cannot deliver meaningful insights.
  4. Embrace Change Management. As with Mars Wrigley’s digital transformation, compliance process improvements require buy-in from leadership and employees.

The Compliance Call to Action

Compliance has been reactive for too long, focusing on addressing failures rather than preventing them. Integrating AI into process management offers an opportunity to shift that paradigm. By combining the best of technology and process management, compliance programs can reduce risk and enhance business value.

Mars Wrigley’s success story reminds us that the tools and strategies to transform compliance are available—but the onus is on compliance professionals to lead the charge. Whether through smarter risk management, better stakeholder engagement, or innovative technology adoption, the path forward is clear: process management and AI are not just operational tools; they are the future of compliance.

Now is the time to act. By adopting process management principles and leveraging AI, compliance leaders can build programs that are not only effective but also resilient, sustainable, and aligned with organizational goals. The question is no longer whether compliance teams should embrace these tools but how quickly they can integrate them into their processes.

By learning from companies like Mars Wrigley, compliance professionals can reimagine their programs, aligning them with the business’s needs while staying ahead of regulatory requirements.