Categories
Blog

Why AI Demands a New Breed of Leaders: A Compliance Perspective

Artificial intelligence is no longer a distant future state for compliance teams. It is here, operating inside financial crime platforms, powering third-party due diligence tools, driving monitoring engines, and influencing the everyday judgments that regulators scrutinize. Yet too many companies still approach AI as if it were simply another IT project. In a recent Sloan Management Review article, “Why AI Demands a New Breed of Leader,” the authors, Faisal Hoque, Thomas H. Davenport, and Erik Nelson, argue that successful AI transformation is far more about people, culture, and leadership than about code.

For compliance professionals, that should sound familiar. Every major enforcement action of the last decade has shown that failure rarely begins with a faulty system. Failure begins with leadership that misunderstands risk, a culture that resists change, and governance frameworks that cannot keep pace with new technologies.

The authors argue that modern organizations require a new category of leader to guide AI adoption, a role that blends technical capability with cultural stewardship, ethical understanding, and organizational change management. They call this the Chief Innovation and Transformation Officer (CITO) or an equivalent title. Whether companies formally adopt the title or not, the message is unmistakable: AI changes the leadership equation, and compliance has a front-row seat.

Why Traditional Technology Leadership Is No Longer Enough

While CIOs are increasingly viewed as changemakers, they often lack the time and mandate to address the organizational disruption AI brings. Compliance officers understand this problem intuitively. You can have the most sophisticated tools in the world, but if the culture is not ready for them, the result will be chaos or even misconduct. The authors cite survey data showing that 91 percent of large-company data leaders believe cultural issues, not technical ones, are blocking progress. That finding mirrors what compliance sees in every DOJ corporate enforcement action. Misconduct thrives not because technology fails, but because people and processes fail.

The article also includes examples of organizations that stumbled by treating AI as a purely technical deployment: the collapse of Zillow’s home-pricing model, the swift employee backlash at California State University, and the Air Canada chatbot that mishandled bereavement fare guidance. Each case reveals the same lesson: AI without governance becomes a liability. For compliance professionals evaluating AI adoption, these examples should resonate. AI raises questions about transparency, fairness, documentation, accountability, and the human impact of automation. Those are governance issues, not engineering puzzles.

The New Leadership Model AI Demands

The authors describe several competencies required for effective AI leadership, all of which map directly into compliance priorities:

Navigating ethical considerations.

AI introduces bias, harm, and fairness risks, all of which are central concerns for regulators. Leaders must weigh efficiency gains against ethical boundaries.

Driving cultural transformation.

AI adoption changes workflows, reporting lines, incentives, and human-machine collaboration. Leadership must prepare the workforce for new models of decision-making.

Managing human-AI partnerships.

The near-future compliance program will rely on co-decision systems that combine algorithmic outputs with human judgment. Leaders must understand how to balance the two.

Breaking down silos.

AI implementation touches HR, legal, IT, operations, procurement, and compliance. Leadership must connect these functions rather than allow fragmented approaches.

Overseeing citizen development.

Employees across the business can now build AI models without IT involvement. That democratization requires governance and guardrails.

These competencies go far beyond traditional CIO responsibilities. They lean toward behavior, judgment, and organizational change, the same strengths compliance brings to the table.

Emerging Executive Roles Around AI

The article documents the rapid rise of AI-focused executive roles such as Chief Innovation Officer, Chief AI Officer, and Chief Transformation Officer. Compensation is rising, hiring is accelerating, and responsibilities increasingly blend technology, ethics, culture, and strategy.

The authors highlight examples:

  • PepsiCo’s Chief Strategy and Transformation Officer is overseeing enterprise-wide digitization.
  • Standard Chartered’s Chief Transformation, Technology, and Operations Officer.
  • JPMorgan Chase’s governance model for IndexGPT and AI-driven investment analysis.

These roles share a common trait: they embed ethics, cultural change, and strategic alignment directly into AI governance. This direction should reassure compliance officers. Regulators have signaled that they expect AI oversight to be integrated, accountable, and verifiable. A dedicated AI leadership role can help unify these obligations.

AI Persona Management: The Next Frontier of Governance

One of the most intriguing sections of the article describes “AI persona management,” the oversight of digital agents with defined personalities, roles, and decision-making authority. As AI becomes more autonomous, these personas may behave like digital employees. That raises profound governance questions.

Compliance professionals should begin considering:

  • What decision rights will AI personas have?
  • How will we document their logic?
  • How will we audit their behavior?
  • How will we ensure ethical consistency across different personas?

The authors note that Salesforce already uses AI personas internally to guide product decisions. That should serve as a signal: AI agents are not a theoretical concept; they are entering the enterprise now. A compliance professional will need to treat AI personas with the same seriousness as human employees, subject to monitoring, training, policies, escalation channels, and accountability structures.

What This Means for Corporate Compliance Leaders

The article argues that companies must rethink how they manage technology change. AI’s impact is too broad to remain confined to the IT organization. Talent, culture, ethics, governance, and risk management all intersect. The authors present the CITO role as the logical solution for a leader who integrates technical fluency with organizational psychology and ethical judgment.

From a compliance standpoint, this represents both an opportunity and a responsibility. The opportunity is clear: compliance brings exactly the kind of cross-functional, ethics-driven perspective AI leadership requires. The compliance function knows how to document decisions, manage cultural change, develop defensible processes, and build controls around complex risks.

The responsibility is equally clear: AI will soon permeate every corner of the enterprise. If compliance does not assert its role in governance, the organization will drift toward risk. This article provides a roadmap for what strong governance must look like. It tells companies that AI success demands a leader capable of bridging technical, ethical, and cultural domains, the very domains compliance has long mastered.

Now is the moment for compliance to claim its seat at the AI leadership table, helping shape the systems that will define operational and ethical performance for years to come.

Categories
Daily Compliance News

Daily Compliance News: December 1, 2025, The Fraud at Chelsea Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • New York State could be a battleground for AI regulation. (NYT)
  • Chelsea employee admits to fraud. (BBC)
  • More protests on Philippine corruption. (Bloomberg)
  • Insurer pulling back from the cyber market. (FT)

The Daily Compliance News has been honored as No. 2 in the Best Regulatory Compliance Podcasts category.

Categories
AI Today in 5

AI Today in 5: December 1, 2025, The Transforming Due Diligence Edition

Welcome to AI Today in 5, the newest edition of the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. 3 keys to AI in banking. (Financial Brand)
  2. New York State could be a battleground for AI regulation. (NYT)
  3. Agentic AI for hackers. (FT)
  4. Shadow AI to digital disruption. (Digital Journal)
  5. How AI is transforming due diligence. (FinTechGlobal)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com

Categories
AI Today in 5

AI Today in 5: November 21, 2025, The Who Audits Open AI Edition

Welcome to AI Today in 5, the newest edition of the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Compliance grade AI. (BusinessWire)
  2. New compliance AI for investment managers. (Cision)
  3. Who audits OpenAI? (FT)
  4. Trump wants to ban all state AI regulation. (NBC)
  5. FinTech wants a united front against cybercrime. (ComputerWeekly)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com

Categories
Compliance and AI

Compliance and AI: Steph Holmes on the Intersection of AI and Compliance

What is the intersection of AI and compliance? What about Machine Learning? Are you using ChatGPT? These questions are just three of the many we will explore in this cutting-edge podcast series, Compliance and AI, hosted by Tom Fox, the award-winning Voice of Compliance. Today, Tom looks at the current Intersection of AI and Compliance with Steph Holmes, a long-time friend and Director, Ethics and Compliance Strategy at the EQS Group.

They discuss the evolving role of AI in corporate compliance, emphasizing its key role in modernizing compliance programs. Steph elaborates on the importance of evidence-based assessments of AI capabilities, the impact of AI on operational efficiency, and the need for human oversight in AI processes. She highlights EQS’s comprehensive AI performance test, which evaluated various AI models against multiple compliance tasks. The discussion also covers practical steps for compliance professionals to begin their AI adoption journey, as well as the necessity of continuous monitoring and risk-based evaluation to ensure effective AI deployment.

Key highlights:

  • Steph Holmes’ Role at EQS Group
  • AI in Compliance: Current Landscape
  • AI Performance Test Report
  • The Messy Middle of Compliance and AI
  • Human Oversight in AI Implementation

Resources:

Steph Holmes on LinkedIn

EQS Group LinkedIn

Where in the Loop: Corporate Compliance Insights

EQS Website

EQS Benchmark Report: AI Performance in Compliance & Ethics

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
AI Today in 5

AI Today in 5: November 20, 2025, The Gemini 3 Edition

Welcome to AI Today in 5, the newest edition of the Compliance Podcast Network. Each day, I will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

  1. AI and real-world real estate compliance. (HousingWire)
  2. Replacing manual cyber compliance with AI. (JerusalemPost)
  3. Gemini 3 was released. (Google)
  4. Will AI deepen inequality and hasten war? (NBC)
  5. AI and governance overhauling AML. (FinTechGlobal)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
AI Today in 5

AI Today in 5: November 19, 2025, The Turning No into Flow Edition

Welcome to AI Today in 5, the newest edition of the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Of APIs and AI. (Forbes)
  2. Will 2026 redefine GenAI and compliance risk? (PR Newswire)
  3. Energy is key for AI’s next chapter. (Trading View)
  4. New report on the CEO’s Guide to AI Transformation. (AINews)
  5. Teaching students to shape AI. (BusinessInsiderAfrica)

For more information on the use of AI in Compliance programs, see my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com

Categories
Great Women in Compliance

Great Women in Compliance – Building Trust at the Speed of Technology

In this episode of Great Women in Compliance, co-host Dr. Hemma Lomax welcomes Shannon Ralich, Vice President of Compliance and Chief Privacy Officer at Machinify, to discuss the evolving landscape of data privacy, cybersecurity, and responsible AI.

Shannon shares her remarkable journey from a curious child taking apart electronics to a seasoned leader blending technology, law, and strategy. She offers insight into how curiosity and creativity can fuel governance excellence and explains what it means to design systems that anticipate risk and enable responsible innovation.

Together, Hemma and Shannon explore:

  • How privacy and cybersecurity intersect in today’s fast-evolving AI environment
  • The most pressing compliance challenges around data governance and global regulation
  • Lessons from the SolarWinds and Uber cases and the growing conversation around individual accountability for CISOs and compliance leaders
  • Practical steps for staying agile—through reliable news sources, cross-functional camaraderie, and professional networks
  • How to translate corporate compliance skills into meaningful community impact through nonprofit leadership and animal rescue advocacy

Shannon’s message is a powerful reminder that the best leaders bring their full selves to the work: technical precision, ethical clarity, and human compassion.

Biography:

Shannon Ralich is the Vice President of Compliance and Chief Privacy Officer at Machinify, a healthcare intelligence company applying AI to improve the efficiency and integrity of healthcare payments. With more than 20 years of experience across legal, compliance, privacy, and cybersecurity roles, Shannon specializes in aligning governance frameworks with business innovation.

She also serves on the Advisory Board of the Privacy Bar Section of the IAPP (International Association of Privacy Professionals). She is widely respected for her strategic, forward-thinking approach to data protection and responsible AI governance.

Beyond her professional expertise, Shannon is a passionate advocate for animal welfare. She sits on the Board of Directors for the Neuse River Golden Retriever Rescue, where she leverages her operational and technological skills to strengthen fundraising, improve systems, and support global rescue missions.

A lifelong learner and self-described “builder,” Shannon finds creativity and grounding through woodworking, outdoor adventures with her family, and contributing to causes that make both workplaces and communities more humane.

Note: The views expressed in this podcast are our own and do not represent the views of our employers, nor should they be taken as legal advice in any circumstances. 

Categories
AI Today in 5

AI Today in 5: November 18, 2025, The Project Prometheus Edition

Welcome to AI Today in 5, the newest edition of the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Transparency and AI compliance. (FinTechGlobal)
  2. AI can deliver smarter, safer reg compliance. (FinTechGlobal)
  3. Should you keep AI away from teachers? (WSJ)
  4. Bezos joins the AI crowd with Project Prometheus. (NYT)
  5. AI can’t do therapy, but can it help therapists? (USNews)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com

Categories
Blog

Embedding Ethics in the AI Lifecycle

Embedding ethics into the AI lifecycle is not an abstract exercise. It is a practical, repeatable discipline that mirrors the work of corporate compliance. It requires a structured approach, clear accountability, and documented evidence of good governance. Most importantly, it requires compliance professionals to be at the table from the very beginning. Ethics in AI cannot be retrofitted at the end of a development cycle. It must be built in from step one.

Today, I want to examine the ethical checkpoints at each stage of the AI lifecycle and highlight where corporate compliance functions must lead. The goal is to help you build a stronger, more resilient program while demonstrating to regulators and stakeholders that your AI governance is real and operational.

Ethics in Data Sourcing

All ethical AI begins with ethical data. You cannot build a responsible model on a flawed or contaminated foundation. Data sourcing is the earliest point at which compliance becomes critical. First, ensure that the lawful basis, ownership, and rights of use are fully documented for every dataset. This is both an ethical issue and a regulatory one. Next, require a structured review for Personally Identifiable Information (PII) and Protected Health Information (PHI). If the dataset contains sensitive personal information, ensure minimization and purpose limitation principles are applied.

Ethical review also requires looking beyond legality. You must ask a deeper question: does this data reflect the populations on whom the model will act? If certain groups are underrepresented or misrepresented, there is a direct ethical and operational risk. This is where compliance can partner with data teams to conduct bias hotspot reviews and remediation before training begins.
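A bias hotspot review of this kind can start with a simple representation check before training ever begins. The sketch below is a minimal illustration only; the group names, record counts, and tolerance are hypothetical assumptions, not a prescribed methodology:

```python
def representation_gaps(counts, reference_shares, tolerance=0.5):
    """Flag groups whose share of the dataset falls below a fraction
    (tolerance) of their expected share in the population the model
    will act on. Flagged groups warrant remediation before training.

    counts: observed record counts per group in the dataset
    reference_shares: expected population shares (summing to 1.0)
    """
    total = sum(counts.values())
    gaps = []
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if observed < expected * tolerance:
            gaps.append(group)
    return gaps

# Hypothetical dataset: group_b supplies 10% of records against an
# expected 40% population share, so it is flagged for remediation.
gaps = representation_gaps(
    {"group_a": 900, "group_b": 100},
    {"group_a": 0.6, "group_b": 0.4},
)
```

The point is not the arithmetic but the discipline: compliance can ask for this check, and its output, as documented evidence before a dataset is approved.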

Ethics in Model Training

Once data enters the training process, the focus shifts to how the model is built. Ethical model training emphasizes transparency, reproducibility, and clear accountability. For compliance professionals, this is a familiar structure. At the beginning of training, require a Model Card version zero. This document describes the model’s intended purpose, its users, and its limitations. Think of it as the model’s job description and risk profile. Without this baseline documentation, the organization has no ethical framework for evaluating the model later.
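To make the idea concrete, a Model Card version zero can be as simple as a structured, versioned record. The sketch below is illustrative; the field names and the example screening model are hypothetical, not a required standard:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal Model Card v0: the model's job description and risk profile."""
    name: str
    version: str
    intended_purpose: str
    intended_users: list
    known_limitations: list = field(default_factory=list)
    risk_owners: dict = field(default_factory=dict)  # risk -> accountable owner

# Hypothetical example: a third-party due diligence screening model
card = ModelCard(
    name="tpdd-screening-model",
    version="0.1",
    intended_purpose="Rank third parties for enhanced due diligence review",
    intended_users=["Compliance analysts"],
    known_limitations=["Not validated on non-English source documents"],
    risk_owners={"sanctions-list staleness": "Head of Screening Operations"},
)
```

Note that risk_owners maps each identified risk to a named owner, which mirrors the risk-register discipline described above: clear signatures, not vague acknowledgment.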

Compliance should also ensure the organization maintains a training bill of materials. Regulators and external auditors will expect clarity on what data, tools, seeds, configurations, and models fed the system. Ethical governance means that if something goes wrong, the organization can retrace its steps and identify the source. Finally, ensure that risks identified during training are assigned owners in the risk register. Ethical accountability requires clear signatures, not vague acknowledgment.

Ethics in Validation and Testing

No model should progress to deployment without a rigorous validation and ethical safety review. At this stage, you are no longer asking whether the model works. You are asking whether the model works in a way that is fair, safe, compliant, and aligned with corporate values. Compliance professionals should insist on structured red teaming for safety, privacy leakage, and discriminatory outputs. Ethical governance requires testing for misuse and unintended consequences, not simply functional performance.

Equally important is the articulation of pass/fail thresholds aligned with the organization’s risk tolerance. If a model shows drift toward unethical outcomes during testing, the organization must be prepared to pause or rework it. Ethics without enforcement is merely a suggestion. Legal review is also essential at this stage. Intellectual property rights, export controls, sector regulations, and customer contract obligations must all be considered. The organization’s ethical responsibility extends to ensuring its models do not inadvertently violate the law or expose users to regulatory scrutiny.

Ethics in Deployment

Deployment is the point at which AI moves from the laboratory to real-world use. Ethical deployment requires safeguards that prevent inappropriate access, misuse, confusion, or misinterpretation. Role-based and environment-based access controls are essential. No one should have access to modify or use a model unless there is a documented business justification. Ethical governance also requires that user disclosures clearly explain the model’s capabilities, limitations, and data use practices. Users should never be misled into believing a system can do something it cannot.

Canary rollouts and automated rollback mechanisms are additional ethical guardrails. They allow organizations to detect unintended consequences early and reverse course before harm spreads widely. Compliance should also ensure that third-party vendors and service providers follow equivalent ethical and governance controls. You are ultimately responsible for the ethical risks you outsource.
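The canary-and-rollback guardrail reduces to a single decision rule applied before widening a rollout. The sketch below is illustrative only; the error-rate metric and the tolerance value are assumptions each organization must set against its own risk appetite:

```python
def canary_gate(canary_error_rate, baseline_error_rate, tolerance=0.01):
    """Decide whether to continue a rollout or roll back.

    Roll back if the canary cohort's error rate exceeds the baseline
    by more than the agreed tolerance (a hypothetical risk threshold),
    so harm is reversed before it spreads widely.
    """
    if canary_error_rate > baseline_error_rate + tolerance:
        return "rollback"
    return "continue"

# A canary cohort erring at 8% against a 5% baseline breaches the
# 1-point tolerance, so the gate orders a rollback.
decision = canary_gate(0.08, 0.05)
```

What matters for compliance is that the threshold is documented and agreed in advance, so the rollback decision is a control, not a judgment call made under pressure.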

Ethics in Monitoring

Ethical oversight does not end when a model goes live. Ongoing monitoring is essential. Models drift. Data shifts. User populations change. A model that was ethical yesterday can become problematic tomorrow. Ethical monitoring means tracking for bias, accuracy degradation, safety issues, and misuse. It also implies routing alerts not only to engineering, but directly to compliance and risk. Ethics is not solely a technical matter. It is a governance responsibility.
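The routing principle above, that alerts flow to compliance and risk as well as engineering, can be sketched as a simple rule. The metric names and thresholds below are hypothetical placeholders:

```python
def route_alerts(metrics, thresholds):
    """Return (recipients, breaches) for monitoring metrics.

    Engineering always receives the metrics; any threshold breach
    also routes the alert to compliance and risk, reflecting that
    ethics is a governance responsibility, not solely a technical one.
    """
    breaches = [m for m, v in metrics.items()
                if v > thresholds.get(m, float("inf"))]
    recipients = ["engineering"]
    if breaches:
        recipients += ["compliance", "risk"]
    return recipients, breaches

# Hypothetical reading: bias drift has crossed its threshold while
# accuracy degradation has not, so compliance and risk are notified.
recipients, breaches = route_alerts(
    {"bias_drift": 0.12, "accuracy_degradation": 0.02},
    {"bias_drift": 0.10, "accuracy_degradation": 0.05},
)
```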

Incident response is another ethical requirement. Organizations must maintain a defined, repeatable process for identifying, containing, and resolving AI-related harms. If something goes wrong, you must be prepared to act quickly and transparently.

Ethics in Governance

Finally, ethics must be embedded in the organization’s AI governance structure. Ethical AI cannot depend solely on goodwill or ad hoc decision-making. Clear role definitions, evidence documentation, and leadership engagement must support it. A formal Responsible, Accountable, Consulted, and Informed (RACI) structure for each lifecycle stage ensures accountability. Board-level reporting ensures visibility. Annual independent audits ensure credibility. Ethical AI requires not only doing the right thing but also demonstrating it.
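A RACI structure for the lifecycle can itself be captured as a small, auditable artifact rather than a slide. The sketch below is purely illustrative; the role names are hypothetical placeholders, not a recommended assignment:

```python
# Hypothetical RACI assignment per lifecycle stage:
# R = Responsible, A = Accountable (one named owner), C = Consulted, I = Informed
RACI = {
    "data_sourcing":  {"R": "Data Engineering", "A": "Chief Data Officer",
                       "C": ["Compliance", "Legal"], "I": ["Internal Audit"]},
    "model_training": {"R": "ML Team", "A": "Model Owner",
                       "C": ["Compliance"], "I": ["Risk"]},
    "validation":     {"R": "Validation Team", "A": "Model Risk Lead",
                       "C": ["Compliance", "Legal"], "I": ["Board"]},
    "deployment":     {"R": "Platform Team", "A": "Product Owner",
                       "C": ["Compliance"], "I": ["Risk"]},
    "monitoring":     {"R": "ML Ops", "A": "Model Owner",
                       "C": ["Compliance", "Risk"], "I": ["Board"]},
}

def accountable(stage):
    """Look up the single accountable owner for a lifecycle stage."""
    return RACI[stage]["A"]
```

Keeping the mapping in a versioned record means an auditor can verify that every stage has exactly one accountable owner and that compliance is consulted where the text above says it must be.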

As with all compliance disciplines, documentation is your first line of defense. Maintain Model Cards, testing evidence, monitoring logs, and decision memos. Ethical governance cannot be proven without records. The work is ongoing and iterative. Ethical AI is not a destination. It is a continuous commitment woven into every operational step. Compliance professionals are uniquely suited to lead this work because we understand systems, controls, and organizational behavior. Ethical AI is compliance by another name.

Five Key Takeaways for the Compliance Professional

1. Ethical AI begins with ethical data. Ethical governance always starts with the quality, origin, and integrity of the data used to train and inform an AI system. Inaccurate, incomplete, unlawfully sourced, or unrepresentative data introduces bias and distortion before a single line of code is written. Compliance professionals must ensure that lawful bases, consent, ownership, and use rights are fully documented, and that sensitive information is minimized and properly protected. Ethical data sourcing also requires evaluating demographic representation and identifying potential bias hotspots early. When data is handled ethically, the entire lifecycle is strengthened, reducing long-term operational, regulatory, and reputational risks.

2. Documentation is an ethical control. Good documentation is not busywork. It is the backbone of ethical AI and a critical indicator of organizational seriousness. Model Cards provide transparency regarding purpose, intended users, limitations, and performance boundaries. Risk registers assign ownership and ensure accountability throughout development, deployment, and monitoring. Audit trails create the evidentiary record that regulators and external stakeholders expect when evaluating whether decisions were responsible, compliant, and well-governed. Without documentation, an organization cannot show that it understood the risks of a model or acted responsibly in response to them. Ethical AI requires a traceable, repeatable set of records that tells a clear story of control and oversight.

3. Ethical validation requires testing. Validation is often treated as a technical gate, but ethical AI requires a far broader examination of how a model behaves under real-world stress. Compliance teams must ensure models are exposed to adversarial testing, red-team challenges, privacy leak assessments, and discrimination checks. A model that performs with high accuracy in ideal conditions may fail ethically when confronted with edge cases or bad actors. Ethical validation demands looking not only at what the model is designed to do, but at what it might inadvertently do. Only by testing for harm, misuse, and unanticipated outcomes can organizations prevent downstream risks and protect users.

4. Deployment must include safeguards. Ethical deployment is the bridge between controlled development environments and unpredictable real-world use. Safeguards such as role-based access controls, environment segregation, and capability restrictions ensure the model is used appropriately. User disclosures prevent misunderstanding by making limitations, risks, and data practices clear. Deployment controls must also account for third parties. If a vendor, integrator, or partner interacts with the model, they must uphold equivalent governance standards. Ethical responsibility does not end at the organizational boundary. Compliance oversees these safeguards to ensure that the model behaves as expected, users are not misled, and vulnerabilities are not introduced through poor operational controls.

5. Ethical monitoring is continuous. Ethics in AI is not solved at launch. Models evolve as data, user behavior, and external conditions shift. Continuous monitoring detects drift, reintroduction of bias, system degradation, and misuse patterns before harm spreads. Compliance plays a central role by ensuring real-time alerts flow to appropriate stakeholders, not solely to engineering teams. Incident response frameworks allow the organization to act quickly, document remedial action, and learn from failures. Regular reporting to senior leadership and the board reinforces accountability and aligns AI behavior with organizational values. Ethical monitoring is the mechanism that keeps AI trustworthy long after deployment.

If compliance does not lead ethical AI governance, someone else will. It is time for compliance to step forward.

If you would like a checklist for Embedding Ethics into the AI Lifecycle, leave us a Voice Mail.