Categories: Great Women in Compliance

Great Women in Compliance: Why Decision Rubrics Matter in the Age of AI with Hemma Lomax and Shalini Rajoo

In this conversation, GWIC host Dr. Hemma R. Lomax and Shalini Rajoo explore the critical role of decision rubrics in governance, accountability, and trust, especially in the context of AI. Shalini shares her journey from law to compliance, emphasizing the importance of understanding systems and the impact of leadership on decision-making processes. They discuss how transparency and clarity in decision-making can build trust within organizations and the necessity of responsible AI governance. Practical tips for improving decision quality are also provided, highlighting the importance of self-awareness and critical thinking in leadership.

Takeaways:

  • The biggest risk in governance is unclear decisions.
  • AI amplifies existing clarity or confusion in decision-making.
  • Systems and rules reflect the identities of their architects.
  • Everyone has an impact on those around them every day.
  • Leadership is about improving the people around you.
  • It’s not just about rules; it’s about how people behave.
  • Decision rubrics provide consistency and predictability in outcomes.
  • Transparency in decision-making processes builds trust.
  • Slowing down to ask questions can lead to better decision-making.
  • Writing down the reasons for decisions brings clarity and accountability.

Sound bites:

“Systems and rules are not inherently neutral.”

“Transparency in decision making builds trust.”

“Slow is smooth, and smooth is fast.”

Chapters:

00:00 Introduction to Decision Rubrics and Governance

02:55 Shalini’s Journey: From Law to Governance

06:09 The Impact of Systems on Leadership and Accountability

09:09 Transitioning to Compliance and Ethics

11:49 Understanding Decision Rubrics in Compliance

15:06 The Role of Leadership in Decision Making

18:03 Designing Conditions for Effective Decision Making

20:47 The Importance of Transparency in Decision Processes

24:09 Decision Rubrics: Building Trust in Organizations

26:49 AI and Governance: Leadership Infrastructure Failures

29:47 Responsible AI: The Role of Ethics and Compliance

32:55 Practical Tips for Improving Decision Quality

36:00 Conclusion: The Future of Decision Making in AI

Guest Biography:

Shalini Rajoo is the Founder and Principal Consultant of Shalini Rajoo Advisory, LLC, where she partners with organizations to design governance, compliance, and decision-making systems that are resilient, trustworthy, and aligned to real operational pressures. Across more than two decades in law, compliance, HR, and organizational leadership, Shalini has helped companies and leaders move beyond check-the-box frameworks to build structures that embed accountability, clarity, and performance into everyday decisions.

She began her career in South Africa, first as a public prosecutor and then leading regulatory work with the Department of Trade and Industry, collaborating with legislative and executive stakeholders on corporate, competition, and consumer law. After relocating to the U.S., Shalini practiced commercial litigation. She later served as Director of Global Business Conduct for a Fortune 500 company, where she redesigned ethics and compliance systems, led global risk assessments, and championed psychological safety and integrity-based practices.

Today, Shalini’s work centers on helping leaders clarify decision rights, governance architectures, and accountability pathways — especially as organizations adopt AI and automation. She recently spoke at the Opal Group’s Corporate Governance & Ethics in the Age of AI conference, where she reframed AI governance as a leadership-infrastructure challenge rather than a purely technical or compliance one.

Categories: AI Today in 5

AI Today in 5: February 10, 2026, The AI Redefining GRC Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. How AI is redefining GRC. (GulfNews)
  2. AI-assisted workforce leave compliance program. (USAToday)
  3. How to integrate AI into your compliance workflows. (AOL)
  4. How AI can speed compliance research. (FedScoop)
  5. Data sovereignty for AI compliance. (TechTarget)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories: Daily Compliance News

Daily Compliance News: February 10, 2026, The Athletes, Injuries and Ethics Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Prediction markets and casinos at war over gambling. (NYT)
  • Banks want ‘pound of flesh’ in RTO. (FT)
  • Who gets to decide when athletes should not compete? (Reuters)
  • Google staff call for the company to cut ties with ICE. (BBC)

Categories: Innovation in Compliance

Innovation in Compliance – Proactive Compliance Frameworks for Evolving AI Regulations with Yakir Golan

Innovation occurs across many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox welcomes Yakir Golan, CEO & Co-founder at Kovrr, who shares his professional journey from the Israeli intelligence community to his current role at Kovrr.

With a rich background in Israel’s intelligence community and significant experience with cybersecurity vendors, Golan champions integrating frameworks with analytics to effectively assess and navigate risks, emphasizing governance as a vital component for sustained innovation. He advocates proactive measures to address AI-enabled insider threats, urging businesses not to wait for perfect regulatory clarity amid the fast-paced evolution of AI technologies. Golan’s holistic approach to compliance transcends mere regulatory adherence, focusing on business-driven proficiency in cybersecurity and AI to meet the dynamic demands of the business landscape.

Key highlights:

  • Financial Models for AI Risk Governance
  • Enhancing AI Governance with Adaptive Frameworks
  • Empowering Innovation Through Strategic Governance and Compliance
  • Unified Approach: AI-Cybersecurity in Enterprise Risk Management

Resources:

Yakir Golan on LinkedIn

Kovrr 

Innovation in Compliance was recently ranked Number 4 in Risk Management by 1,000,000 Podcasts.

Categories: Blog

From Enforcement-Driven to Purpose-Driven Compliance

For more than two decades, corporate compliance programs have been built around one central organizing principle: enforcement. Where regulators go, compliance resources follow. When the Department of Justice prioritizes anticorruption, companies invest in FCPA controls. When regulators turn to privacy, cybersecurity, or AML, compliance budgets pivot accordingly. This enforcement-driven approach has shaped the modern compliance profession.

Yet, as Veronica Root Martinez persuasively argues in her recent working paper, Purpose-Driven Compliance, this dominant model may be fundamentally flawed, certainly in the era of Trump. Despite unprecedented investments in compliance infrastructure, corporate misconduct persists. Repeat offenders remain common. Penalties have grown larger, but behavior has not meaningfully improved. For compliance professionals, this raises an uncomfortable question: are we optimizing for the wrong objective?

Martinez’s answer is both challenging and clarifying. Compliance programs should not be primarily designed to satisfy enforcement authorities or to maximize mitigation credit after failure. Instead, they should be anchored in the organization’s own purpose, business risks, and ethical standards. In short, it is time to move from enforcement-driven compliance to purpose-driven compliance.

The Limits of Enforcement-Driven Compliance

The enforcement-driven model rests on two assumptions. First, that enforcement priorities reflect a company’s most significant risks. Second, that imperfect compliance is inevitable and acceptable so long as the organization can demonstrate good-faith efforts. Martinez brings both under scrutiny.

Regulatory priorities often lag behind real business risks. Enforcement agencies focus on certain categories of misconduct because they are visible, politically salient, or historically entrenched. But the risks that most threaten an organization’s mission may lie elsewhere. Martinez highlights how firms can become over-invested in compliance areas that attract enforcement attention while under-investing in mission-critical risks to their operations.

The second assumption, that some level of misconduct is acceptable, is even more troubling. Behavioral ethics research suggests that tolerating small violations creates conditions for larger ones. When leaders frame misconduct as statistically insignificant or “within expectations,” they risk normalizing behavior that undermines culture, trust, and ultimately performance. Wells Fargo’s infamous “1% problem” illustrates this danger. Senior leadership took comfort in the idea that only a small fraction of employees were engaging in misconduct, failing to appreciate that those numbers reflected only the misconduct that had been detected.

An enforcement-driven mindset encourages this type of thinking. If the question is merely whether the organization will be sanctioned, then low detection rates look like success. But if the question is whether the organization is living up to its own purpose and values, the same data tell a very different story. This is not the broken windows theory of enforcement, but something else: the quiet normalization of misconduct.

The Cost of Treating Compliance as a Cost of Doing Business

Another weakness of enforcement-driven compliance is that it can turn sanctions into a predictable line item. As firms grow larger and penalties are discounted through cooperation credit, fines risk being internalized as a cost of doing business. Empirical work cited by Martinez suggests that large, repeat offenders often pay penalties that are small relative to their assets and revenues. In that environment, enforcement loses much of its deterrent effect.

For compliance professionals, this dynamic creates a structural tension. Programs may be technically “effective” under DOJ guidance while still failing to prevent misconduct that harms customers, employees, and communities. The distinction between standards of review and standards of conduct becomes critical. Meeting the government’s expectations for leniency is not the same as meeting the organization’s ethical obligations to itself and its stakeholders.

What Is Purpose-Driven Compliance?

Purpose-driven compliance begins with a simple but powerful shift in perspective. Instead of asking, “What does the regulator expect?” the organization asks, “What risks threaten our ability to achieve our purpose and what standards of conduct are required to address them?” Martinez defines purpose-driven compliance as programs directed by three elements: the firm’s purpose, the inherent risks associated with pursuing that purpose, and the ethical standards the organization sets for itself. This approach does not reject enforcement frameworks; rather, it treats them as a floor, not a ceiling.

In practical terms, purpose-driven compliance requires leadership to articulate why the organization exists and how misconduct undermines that mission. For a bank, this may mean focusing on customer trust and market integrity. For a pharmaceutical company, it may mean prioritizing patient safety and scientific integrity. For a university, it may mean safeguarding academic freedom and institutional trust. For a summer camp, it may mean protecting campers from flash floods and other severe weather.

Once the purpose is clearly defined, compliance risk assessments become more meaningful. Risks are evaluated not only by enforcement exposure but by their potential to compromise the organization’s core objectives. This reframing helps compliance leaders resist the temptation to chase regulatory trends at the expense of mission-critical risks.

Moving Beyond Mitigation to Aspirational Standards

A key insight in Martinez’s work is that firms often confuse mitigation with excellence. Compliance programs are designed to minimize penalties rather than to maximize ethical performance. Purpose-driven compliance challenges that mindset by encouraging organizations to adopt high, ethical, and aspirational standards of conduct.

This does not mean pursuing perfection through draconian controls or internal criminalization. Martinez rightly warns against overdeterrence and strict liability regimes that incentivize concealment rather than transparency. Instead, purpose-driven compliance emphasizes ethical framing, employee voice, and organizational learning. Compliance should never be Dr. No, sitting in the Department of Business Non-Development.

The examples of Wells Fargo and Novartis are instructive. Both organizations suffered repeated compliance failures under enforcement-driven regimes. Their subsequent reforms went beyond addressing the specific violations that triggered enforcement. They re-examined culture, leadership incentives, and ethical expectations. In Novartis’s case, tying bonuses to ethical performance and co-creating a new code of ethics signaled a shift from box-checking to values anchored in purpose.

Why Purpose-Driven Compliance Matters for the Modern CCO

For today’s chief compliance officer, Martinez believes purpose-driven compliance offers three critical benefits.

First, it creates durability. Enforcement priorities shift with administrations. Indeed, this Administration has signaled a cutback in white-collar enforcement by offering what are essentially get-out-of-jail-free cards to companies that self-disclose early. That volatility underscores the importance of internally anchored compliance programs. A compliance program anchored solely in regulatory expectations will always be reactive. Purpose-driven programs are more stable because they are tied to the organization’s identity rather than to external politics.

Second, it improves the quality of compliance metrics. Measuring effectiveness against internal standards allows organizations to ask harder questions about culture, decision-making, and root causes. Not every initiative will succeed, but a willingness to acknowledge failure is itself a sign of program maturity.

Third, it enhances credibility with boards and senior leadership. When compliance is framed as a strategic partner in achieving the organization’s mission, rather than as a defensive function, it earns a more meaningful seat at the table.

Conclusion

Compliance has never been more sophisticated, expensive, or visible. Yet sophistication alone does not guarantee effectiveness. Martinez’s Purpose-Driven Compliance challenges compliance professionals to rethink the foundations of their programs. Enforcement-driven compliance has taken us far, but it cannot take us far enough.

The next evolution of compliance requires organizations to define their own standards of conduct, grounded in purpose, risk, and ethics. That shift is not easy. It requires courage from compliance leaders and commitment from boards and executives. But if compliance is truly about preventing harm and sustaining trust, purpose-driven compliance is not optional. It is essential.

Categories: The PfBCon Podcast

The PfBCon Podcast: Leveraging Podcasting for Building Personal Brands with Brent Carlson and Mike Huneke

Brent Carlson and Mike Huneke from the Red Flags Rising podcast discuss their journey in building personal brands through podcasting.

Brent, a Principal and Chief Solutions Officer at Red Flags Rising Solutions, LLC, and Mike, a Partner at Morgan, Lewis & Bockius, share insights on using podcasting to create authentic connections, expand professional networks, and address timely industry topics. They emphasize the importance of quality, consistency, and having a genuine passion for your subject matter. The discussion also covers practical tips on choosing the right equipment, the benefits of LinkedIn for promotion, and the value of having a unique style that resonates with your audience. The episode underscores the impact of podcasting on establishing professional credibility and on driving business growth.

Key highlights:

  • The Power of Podcasting for Personal Branding
  • Building Connections and Community
  • Starting the Red Flags Rising Podcast
  • Creating Engaging and Practical Content
  • Promoting Your Podcast Effectively
  • Investing in Quality Equipment
  • Using Podcasts for Business Development

Resources:

Follow Brent and Mike on LinkedIn:

Brent Carlson

Mike Huneke

Follow Red Flags Rising on:

Website

Spotify

RSS.com

YouTube

Compliance Podcast Network

Apple Podcasts

Listen Notes

Categories: FCPA Compliance Report

FCPA Compliance Report – FCPA Enforcement Shifts: Volatility and Uncertainty

Welcome to the award-winning FCPA Compliance Report, the longest-running podcast in compliance. In this episode, host Tom Fox welcomes Anik Shah, Director & Senior Legal Counsel at Sandisk, for an insightful discussion of the pivotal changes and enforcement actions around the FCPA in 2025 and their implications for 2026.

Reflecting on 2025, Anik Shah, a preeminent authority on FCPA and anti-corruption enforcement, offers a strategic perspective on the evolving compliance landscape. Given the recent uncertainties following an executive order and the dismissal of high-profile cases, Shah underscores the necessity for companies to maintain robust anti-bribery and anti-corruption controls, especially amid potential reprioritization by the Department of Justice. He advocates a proactive risk management approach, emphasizing the importance of third-party risk management and comprehensive training to anticipate and mitigate potential FCPA issues. As enforcement focus shifts toward cartel and transnational criminal organization activities, Shah advises companies to integrate anti-money laundering processes into their compliance strategies to align with global anti-corruption efforts.

Key highlights:

  • 2025 FCPA Enforcement Shifts and Uncertainty
  • Voluntary Self-Disclosure Policy Revolution in 2025
  • Cartel Risk Mitigation through Compliance Integration
  • Central Asia Construction Projects: Anti-Corruption Measures
  • Proactive Measures: Fostering Anti-Corruption Compliance Awareness

Resources:

Anik Shah on LinkedIn

Sandisk

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Returning to Venezuela on Amazon.com

Categories: Daily Compliance News

Daily Compliance News: February 9, 2026, The Is Netflix a Monopoly Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Knock-off obesity pill pulled from market. (NYT)
  • Former Norwegian Prime Minister under investigation for corruption tied to the Epstein files. (Politico)
  • Jay Clayton promises a bigger get-out-of-jail-free card. (Reuters)
  • DOJ to investigate if Netflix is a monopoly. (WSJ)

Categories: AI Today in 5

AI Today in 5: February 9, 2026, The AI Agents Doing Compliance Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. What to do when AI is forced on compliance. (CW)
  2. Napier AI/AML report is out. (FinTechGlobal)
  3. AI and the accountability gap. (FinTechGlobal)
  4. Where AI is tearing through corporate America. (WSJ)
  5. Goldman is letting AI Agents do compliance. (PYMNTS)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories: Blog

From Principle to Proof: Operationalizing AI Governance Through the ECCP and NIST

Artificial intelligence governance has officially crossed the threshold from theory to expectation. The Department of Justice has not issued a standalone “AI rulebook,” but it has provided a framework for compliance professionals to consider the issue: the 2024 Evaluation of Corporate Compliance Programs (ECCP). In this version of the ECCP, the DOJ laid out guidance that any technology capable of creating material business risk must be governed, monitored, and improved like any other compliance risk. That includes artificial intelligence.

Too many organizations still treat AI governance as an ethics exercise, a technical problem, or a future concern. That posture is not defensible. The DOJ does not ask whether your program is fashionable or aspirational. It asks three very old-fashioned questions: Is your compliance program well designed? Is it applied in good faith? Does it work in practice? Those questions apply with full force to AI.

In this post, I want to move the discussion from abstract frameworks to operational reality. I will show how compliance professionals can use the ECCP to structure AI governance, select board-grade KPIs, and demonstrate effectiveness in a way regulators understand. I will also show how the NIST AI Risk Management Framework (NIST Framework) fits neatly underneath this structure as an operating model, not a competing philosophy.

AI Governance Is Already an ECCP Issue

The DOJ has repeatedly emphasized that compliance programs must evolve as business risks evolve. Artificial intelligence is not a future risk. It is already embedded in pricing, hiring, credit decisions, customer interactions, fraud detection, and third-party screening. If an AI model can influence revenue, customer outcomes, or regulatory exposure, it is a compliance risk. Period.

The ECCP does not require companies to eliminate risk. It requires them to identify, assess, manage, and learn from it. AI governance, therefore, belongs squarely inside the compliance program, not off to the side in an innovation lab or technology committee.

The ECCP as an AI Governance Blueprint

The power of the ECCP is its simplicity. Every enforcement action ultimately traces back to the same three questions. Let us apply them directly to AI.

Is the Program Well Designed?

Design begins with risk assessment. If your organization cannot answer basic questions such as “What AI systems do we have, who owns them, and what decisions do they influence?”, you do not have a program. You have hope. A well-designed AI compliance program starts with an AI asset inventory that identifies models, tools, vendors, and use cases. Each asset must be risk-classified based on business impact, regulatory exposure, and potential harm.

Board-level KPIs here are coverage metrics. How many AI assets have been identified? What percentage has been risk-classified? How many high-impact models have completed an impact assessment before deployment? If your dashboard does not show near-full coverage, the design is incomplete.

Policies and procedures come next. The DOJ does not care how many policies you have. It cares whether they provide clear guidance for real decisions. AI policies should cover the full lifecycle, from design and data sourcing through deployment, monitoring, and retirement. A practical KPI is policy coverage. What percentage of AI assets operate under current, approved procedures? How often are those procedures refreshed? Annual updates are a reasonable baseline in a rapidly changing risk environment.

Is the Program Applied Earnestly and in Good Faith?

Good faith is demonstrated through action, not intent. Training is a central indicator. The DOJ expects role-based training tailored to actual risk. A generic AI awareness course does not meet this standard. Developers, model owners, compliance reviewers, and business leaders all require different training. Completion rates matter, but so does comprehension. Measuring post-training proficiency improvement is one of the clearest signals that training is more than a box-checking exercise.

Third-party risk management is another critical area. Many organizations rely on external models, data providers, or AI-enabled vendors. If you do not understand how those tools are built, governed, and updated, you are importing risk without controls. Strong programs use standardized AI diligence questionnaires, assign assurance scores, and require contractual safeguards for high-risk vendors. A board-ready KPI here is the percentage of high-risk AI vendors subject to enhanced diligence and contractual controls.

Mergers and acquisitions deserve special attention. AI risk does not wait for post-close integration. The DOJ has been explicit that pre-acquisition diligence matters. A defensible KPI is simple and unforgiving: 100% of acquisition targets with material AI usage must undergo AI due diligence before closing. Anything less invites inherited risk.

Does the Program Work in Practice?

This is where many programs fail. Paper controls do not impress regulators. Outcomes do. Incident reporting is a critical signal. A low number of reported AI issues may indicate fear, confusion, or a lack of awareness rather than genuine safety. What matters is whether issues are identified, investigated, and resolved promptly. Mean time to investigate is a powerful metric. If AI-related concerns take months to resolve, the program is not working. Clear escalation paths, defined investigation playbooks, and documented root cause analysis are essential.

Continuous monitoring is equally important. High-risk AI systems must be monitored for performance drift, data changes, and unintended outcomes. The DOJ expects companies to use data analytics to test whether controls are functioning. KPIs here include validation pass rates before deployment, drift-detection coverage for critical models, and corrective action closure rates. These are not technical vanity metrics. They are evidence of effectiveness.
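The responsiveness metrics named above reduce to equally simple arithmetic. The sketch below computes mean time to investigate and a corrective action closure rate from hypothetical incident records; the record layout and dates are invented for illustration, not taken from any particular GRC tool.

```python
from datetime import date

# Each record: (date opened, date investigation closed or None, corrective action closed?)
incidents = [
    (date(2026, 1, 5), date(2026, 1, 12), True),
    (date(2026, 1, 9), date(2026, 2, 2), False),
    (date(2026, 1, 20), None, False),   # still open: excluded from both metrics
]

# Restrict to closed investigations; open ones have no investigation duration yet.
closed = [(o, c, done) for (o, c, done) in incidents if c is not None]
mtti_days = sum((c - o).days for (o, c, _) in closed) / len(closed)
closure_rate = 100.0 * sum(done for (_, _, done) in closed) / len(closed)

print(f"Mean time to investigate: {mtti_days:.1f} days")          # 15.5 days here
print(f"Corrective action closure rate: {closure_rate:.0f}%")     # 50% here
```

A design note: whether still-open incidents should count against the closure rate, or be tracked separately as an aging backlog, is itself a governance decision worth documenting.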

Where NIST Fits and Why It Matters

The NIST AI Risk Management Framework does not compete with the ECCP. It operationalizes it. The ECCP tells you what regulators expect. NIST helps you implement those expectations across governance, mapping, measurement, and management. For example, ECCP risk assessment aligns with NIST’s mapping function. ECCP’s continuous improvement aligns with NIST’s measurement and management functions. Using NIST terminology creates a shared language across compliance, legal, security, and data science teams. That shared language is governance in action.

Reporting AI Risk to the Board

Boards do not want technical detail. They want assurance. The most effective AI governance dashboards focus on a small set of indicators that answer the DOJ’s three questions: coverage, quality, responsiveness, and learning. Examples include the percentage of AI assets risk-classified, validation pass rates, investigation cycle times, and corrective action closure rates. When these metrics move in the right direction, they tell a credible story of control. More importantly, they show that compliance is not reacting to AI. It is governing it.

Five Key Takeaways for Compliance Professionals

  1. AI as Risk. Artificial intelligence is already within the scope of the ECCP. If AI can influence business outcomes, it must be governed like any other compliance risk.
  2. Risk Management Program. A well-designed AI compliance program begins with complete asset identification and risk classification. Coverage metrics are the first signal regulators will examine.
  3. Implementation. Good faith implementation is demonstrated through role-based training, disciplined third-party oversight, and pre-acquisition AI diligence. Intent without execution does not count.
  4. Outcomes, not Inputs. Effectiveness is proven through outcomes. Investigation speed, monitoring coverage, and corrective action closure rates matter more than policy volume.
  5. Complementary. The NIST Framework complements the ECCP by providing an operating model that compliance, legal, and technical teams can share. Together, they turn principles into proof.

Final Thoughts

AI governance is not about predicting the future. It is about demonstrating discipline in the present. The DOJ is not asking compliance professionals to become data scientists. It is asking us to do what we have always done well: identify risk, establish controls, test effectiveness, and improve continuously. The ECCP already gives you the framework. The only question is whether you will apply it.