Ed. Note: This week, we present a week-long series on the use of GenAI in a best practices compliance program. For each of the other articles this week, I have created a one-page checklist that you can use in presentations or for easier reference. For today’s blog post, however, I have made a Compliance AI Dialogue Playbook to illustrate the concepts discussed. If you would like a copy, email my EA, Jaja, at jaja@compliancepodcastnetwork.net.
Compliance officers are, at their core, problem-solvers. We wrestle with thorny questions every day: How do we implement a global gifts-and-entertainment policy across jurisdictions with vastly different cultural norms? How do we balance business pressures with anti-corruption obligations? How do we address new risks like AI itself? Traditionally, compliance officers have relied on their teams, external counsel, and regulators for perspective. But now, there is another partner available: AI as a co-thinker.
Elisa Farri and Gabriele Rosani, in their HBR article, How AI Can Help Managers Think Through Problems, argue that generative AI is not simply a productivity booster but a thought partner that can help managers frame problems, weigh trade-offs, and refine decision-making. For compliance professionals, this opens an exciting frontier. Instead of seeing AI as just a summarization or monitoring tool, we can use it to think with us about compliance challenges.
Today, we consider five key takeaways for compliance professionals, each exploring how AI can, and should, be used as a structured co-thinker in corporate compliance problem-solving.
1. AI Can Help Frame Compliance Problems More Clearly
One of the hardest parts of compliance work is problem framing. Regulators do not hand us neat checklists; instead, they give us principles, expectations, and enforcement actions. It’s up to us to translate these into workable policies and controls.
The authors highlight how AI can act as a sounding board, asking clarifying questions, offering perspectives, and reframing issues. In compliance, this is invaluable. For example, when confronting a possible books-and-records violation, you can ask AI to outline the problem from different angles: the DOJ’s perspective, the auditor’s lens, or the business unit’s operational concerns.
This “co-thinking” dialogue helps compliance officers avoid blind spots. As you articulate context and criteria and the AI proposes reframings or stakeholder perspectives, the problem comes into sharper focus. Often, clarity is half the solution.
The compliance lesson: Don’t just throw a problem at AI and expect an answer. Use it to refine the question. A well-framed compliance issue is easier to analyze, explain, and ultimately solve.
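One way to put this lesson into practice is to script the reframing step before you ever ask for an answer. The sketch below is a minimal, hypothetical Python helper that composes a problem-framing prompt for any chat-based GenAI tool; the perspective list and wording are illustrative assumptions, not a prescribed format.

```python
# Hypothetical sketch: build a framing prompt that asks the model to restate
# a compliance issue from several perspectives before proposing solutions.
# The perspectives below are illustrative, drawn from the example above.

PERSPECTIVES = ["DOJ prosecutor", "external auditor", "business unit leader"]

def framing_prompt(issue: str, perspectives: list[str] = PERSPECTIVES) -> str:
    """Compose a prompt that forces reframing before problem-solving."""
    lines = [
        f"Compliance issue: {issue}",
        "Before proposing any solution, restate this issue from each "
        "perspective below, then list the clarifying questions you would "
        "ask me to sharpen the framing.",
    ]
    lines += [f"- Perspective: {p}" for p in perspectives]
    return "\n".join(lines)

print(framing_prompt(
    "Possible books-and-records violation in a regional subsidiary"))
```

The design point is the ordering: the prompt asks for reframings and clarifying questions first, so the dialogue refines the question rather than jumping to an answer.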
2. AI Strengthens Root Cause Analysis in Compliance Investigations
Root cause analysis is central to modern compliance. Regulators do not just want misconduct identified; they want to know why it happened and how you’ll prevent it going forward. Yet too often, root cause analysis gets bogged down in assumptions or limited perspectives.
Farri and Rosani cite managers who use AI dialogues to explore underlying causes systematically. For compliance officers, this can be a game-changer. Imagine an investigation into repeated expense-report fraud. AI can walk you through potential cultural drivers (“tone at the top,” sales pressure), structural flaws (weak approval workflows), and training gaps. It can then push back: “Are you overlooking incentives?” or “What if the issue is inadequate third-party vetting?”
By iterating through hypotheses in a structured dialogue, compliance professionals can avoid premature conclusions and dig deeper. This not only strengthens remediation but also demonstrates to regulators that the company engaged in a thorough, multi-perspective analysis.
The compliance lesson: AI co-thinking transforms root cause analysis from a static checklist into a dynamic dialogue, driving richer insights and more defensible conclusions.
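The expense-fraud example above can be turned into a structured dialogue rather than a single question. Below is a hedged sketch: the hypothesis categories come from the discussion above, and the role/content message format follows common chat-tool conventions, but no specific vendor API is assumed.

```python
# Hypothetical sketch: structure a root-cause analysis as one dialogue turn
# per hypothesis, so no candidate cause is silently skipped.

HYPOTHESES = {
    "cultural": "tone at the top, sales pressure",
    "structural": "weak approval workflows",
    "training": "gaps in expense-policy training",
    "incentives": "compensation plans that reward volume over accuracy",
}

def root_cause_turns(incident: str) -> list[dict]:
    """One challenge turn per hypothesis for a chat-style AI dialogue."""
    turns = [{"role": "user",
              "content": f"Incident under investigation: {incident}"}]
    for name, detail in HYPOTHESES.items():
        turns.append({
            "role": "user",
            "content": (f"Test the {name} hypothesis ({detail}). "
                        "What evidence would confirm or rule it out, "
                        "and what am I likely overlooking?"),
        })
    return turns

turns = root_cause_turns("Repeated expense-report fraud in one region")
```

Because each hypothesis gets its own "confirm or rule out" turn, the resulting transcript doubles as documentation that a multi-perspective analysis was actually performed.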
3. AI Helps Anticipate Stakeholder Reactions to Compliance Decisions
Compliance isn’t just about rules; it’s about relationships. A compliance policy that looks perfect on paper can fail if stakeholders resist or misunderstand it. That’s why anticipating reactions is essential.
The article describes a communications manager who used AI to role-play stakeholder perspectives. Compliance teams can apply the same method. Suppose you’re rolling out a new third-party due diligence system. You could ask AI to simulate how sales might react (“This slows down deal velocity”), how finance might respond (“We lack resources for added checks”), and how regulators would view the process (“Demonstrates good-faith risk management”).
This kind of dialogue allows compliance officers to refine messaging, anticipate objections, and design mitigation strategies before rollout. It’s essentially stakeholder mapping on steroids.
The compliance lesson: Use AI to run “compliance fire drills.” Let it act as different stakeholders, challenge your assumptions, and highlight where communication or process gaps may derail implementation. Better to hear objections from an AI simulation than from the DOJ or your workforce after the fact.
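A "fire drill" like this can be made repeatable. The sketch below, offered as an illustrative assumption rather than a prescribed method, generates one role-play prompt per stakeholder from the due-diligence rollout example above.

```python
# Hypothetical sketch: generate role-play prompts for a compliance
# "fire drill". Stakeholders and concerns mirror the rollout example above.

STAKEHOLDERS = {
    "sales": "deal velocity and quota pressure",
    "finance": "resourcing for added checks",
    "regulators": "evidence of good-faith risk management",
}

def fire_drill_prompts(change: str) -> dict[str, str]:
    """One role-play prompt per stakeholder for the proposed change."""
    return {
        role: (f"Act as {role}, whose main concern is {concern}. "
               f"React candidly to this change: {change}. "
               "List your three strongest objections and what would "
               "win you over.")
        for role, concern in STAKEHOLDERS.items()
    }

for role, prompt in fire_drill_prompts(
        "a new third-party due diligence system").items():
    print(f"--- {role} ---\n{prompt}\n")
```

Asking each simulated stakeholder for "what would win you over" is the step that converts objections into mitigation strategies before rollout.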
4. AI Supports Compliance Leadership and Mindset Shifts
Compliance is not static; it evolves as risks and expectations change. One of the hardest parts of leadership is helping teams adopt new mindsets. Whether it’s embedding ESG into compliance or shifting from reactive investigations to proactive risk management, change is as much about people as it is about rules.
The authors point to managers using AI to coach teams through mindset shifts. Compliance officers can replicate this by designing AI dialogues that help teams reflect on change. For example: “Act as a compliance coach guiding a regional manager through adopting a risk-based mindset for third-party approvals.” AI can then walk the manager through scenarios, pose self-assessment questions, and suggest daily practices to internalize the change.
This turns AI into a scalable leadership development tool for compliance. It’s not replacing human mentorship but supplementing it, ensuring employees across geographies get consistent coaching.
The compliance lesson: AI can democratize leadership development in compliance. By embedding coaching into AI assistants, compliance leaders can scale mindset change while reinforcing culture across the enterprise.
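The coaching dialogue described above ("Act as a compliance coach guiding a regional manager...") can be templated so every region gets the same structure. This is a minimal sketch; the template wording is an assumption built from the example in this section.

```python
# Hypothetical sketch: a reusable coaching-prompt template for scaling
# mindset-shift dialogues across roles and regions.

def coaching_prompt(role: str, mindset: str) -> str:
    """Template the coaching scenario described above for any role."""
    return (
        f"Act as a compliance coach guiding a {role} through adopting "
        f"{mindset}. Walk me through one realistic scenario at a time, "
        "ask one self-assessment question after each, and end with a "
        "daily practice I can use to internalize the change."
    )

print(coaching_prompt("regional manager",
                      "a risk-based mindset for third-party approvals"))
```

Because the scenario, self-assessment, and daily-practice steps are fixed in the template, employees across geographies receive consistent coaching, which is the scaling benefit noted above.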
5. AI Encourages Reflective and Ethical Decision-Making
Finally, compliance is about judgment. Not every decision can be reduced to a policy or rulebook. Whether deciding how to respond to a gray-area hospitality offer or whether to self-disclose a violation, compliance officers must weigh trade-offs.
Farri and Rosani emphasize that AI, when engaged as a co-thinker, can enhance reflective decision-making. It does so by slowing us down, asking probing questions, and challenging quick assumptions. This is especially important because compliance officers are often under pressure to deliver fast answers to complex problems.
By prompting reflections such as “What risks might we be missing?”, “What would regulators expect?”, and “What precedent are we setting?”, AI ensures compliance officers approach decisions with greater ethical clarity. It’s the Socratic method in digital form.
The compliance lesson: AI should not be seen as replacing compliance judgment but as sharpening it. By making space for reflection, AI helps ensure that compliance decisions are thoughtful, principled, and defensible.
From Automation to Co-Thinking
For too long, compliance has viewed AI as a back-office automation tool: summarizing, monitoring, and drafting. Farri and Rosani remind us that AI can do much more: it can think with us.
By helping frame problems, strengthening root cause analysis, anticipating stakeholder reactions, supporting mindset shifts, and fostering reflective decision-making, AI becomes not just a tool but a thought partner. For compliance officers under increasing pressure from regulators and boards, that partnership could be transformative.
The path forward is clear: stop asking “What can AI do for compliance?” and start asking “How can AI help compliance think better?”