ACI FCPA Conference 2025

ACI-FCPA Conference Speaker Preview Series – Dan Kahn on the New DOJ Enforcement Priorities

In this episode of the ACI-FCPA and Global Anti-Corruption Conference Speaker Podcasts series, Dan Kahn discusses his panel at the event, “Unpacking the DOJ’s New FCPA Enforcement Guidelines and Priorities: Practical Takeaways for Updating Risk Management, Internal Investigations, and Compliance Strategies.”

Some of the issues the panel will discuss are:

  • How does the current DOJ guidance inform compliance?
  • How should you recalibrate your compliance program based on the updated Guidance?
  • What does the DOJ FCPA Guidance say about enforcement priorities?

I hope you can join me at the ACI–FCPA Conference. This year’s event will take place on December 3-4 at the Gaylord National Resort & Convention Center in National Harbor, Maryland, near Washington, D.C. The lineup of this year’s event is simply first-rate, featuring some of the top FCPA professionals, white-collar attorneys, and compliance practitioners in the field.

The 2025 program has been completely redesigned to help your organization stay agile, responsive, and ahead of the curve. Expect a dynamic agenda shaped by real-world priorities, practical takeaways, and the most cutting-edge thinking in compliance—led by a faculty of global practitioners with boots on the ground, encountering the very risks that come across your desk.

Please join me at the event. For information on the event, click here. Listeners of this podcast will receive a discount by using the code D10-999-CPN26.

Compliance Tip of the Day

Compliance Tip of the Day – Business Rationale in the 3rd Party Risk Management Process

Welcome to “Compliance Tip of the Day,” the podcast that brings you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, our goal is to provide you with bite-sized, actionable tips to help you stay ahead in your compliance efforts. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

This week, we are reviewing the third-party risk management process. Today, we take up the Business Rationale.

For more information on this topic, refer to The Compliance Handbook: A Guide to Operationalizing Your Compliance Program, 6th edition, recently released by LexisNexis. It is available here.

AI Today in 5

AI Today in 5: November 18, 2025, The Project Prometheus Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5. All, from the Compliance Podcast Network.

Top AI stories include:

  1. Transparency and AI compliance. (FinTechGlobal)
  2. AI can deliver smarter, safer reg compliance. (FinTechGlobal)
  3. Should you keep AI away from teachers? (WSJ)
  4. Bezos joins the AI crowd with Project Prometheus. (NYT)
  5. AI can’t do therapy, but can it help therapists? (USNews)

For more information on the use of AI in compliance programs, see my new book, Upping Your Game, which is available for purchase on Amazon.com.

Daily Compliance News

Daily Compliance News: November 18, 2025, The UBS to America Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Canaccord is close to a settlement for compliance lapses. (Bloomberg)
  • A Fed official who resigned violated trading rules. (NYT)
  • Corruption in the CZ pardon. (Newsweek)
  • Will UBS relocate to America? (FT)

The Daily Compliance News has been honored as No. 2 in the Best Regulatory Compliance Podcasts category.

Innovation in Compliance

Innovation in Compliance – Navigating the Future of Supply Chain Compliance with Travis Miller

Innovation is present in many areas, and compliance professionals must not only be prepared for it but also actively embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom welcomes Travis Miller, Chief Strategy Officer and General Counsel at Source Intelligence, to discuss major developments in supply chain compliance.

Miller outlines his recent job transition from Google, where he was the Head of Supply Chain Compliance and Social Responsibility. He delves into the complexities and innovations of Source Intelligence, a company focused on supply chain transparency and compliance. He also talks about his book ‘Guide to Supply Chain Compliance Laws and Regulations’ and highlights the growing significance of supply chain mapping due to new regulations. The conversation examines the pivotal roles of data accuracy, supplier collaboration, and AI in enhancing supply chain compliance. Miller predicts a more technical and relationship-driven future for supply chain professionals, stressing the importance of strategic partnerships. The discussion also explores four market realities that companies can’t ignore, emphasizing the pitfalls of outdated metrics and manual processes. Finally, Travis shares his insights on balancing automation with human judgment to optimize compliance operations.

Key highlights:

  • The Importance of Supply Chain Compliance
  • Supply Chain Mapping and Regulations
  • Full Material Declarations and Their Significance
  • AI in Supply Chain Compliance
  • The Future Role of Supply Chain Professionals
  • The Compliance Playbook and Market Realities

Resources:

Travis Miller on LinkedIn

‘Guide to Supply Chain Compliance Laws and Regulations’

The Compliance Playbook is Broken on LinkedIn

Innovation in Compliance was recently honored as the number 4 podcast in Risk Management by 1,000,000 Podcasts.

Word of the Week

Word of the Week with Kenneth O’Neal – Embracing Failure: A Pathway to Success

Each week, Kenneth O’Neal discusses a word that describes a principle or value of the Qualities of Success. We suggest you use the Word of the Week in your thoughts, deeds, and actions. You might already possess the quality and want to develop it further, or you might replace a bad habit with a good one. Write an action step and use it daily to improve your quality of life. In this episode, Kenneth discusses the word ‘failure.’

Kenneth O’Neal focuses on the positive aspects of failure and its role in personal and professional growth. Some of his key points include defining failure as feedback and a stepping stone to wisdom, emphasizing reflection and adaptation, and offering real-life examples of successful individuals such as Thomas Edison, Abraham Lincoln, Michael Jordan, and Walt Disney, who turned failures into great achievements. The conversation encourages seeing failure as an opportunity for learning and innovation, essential for developing resilience and perseverance, and emphasizes that past failures shape but do not define our future.

Highlights:

  • Word of the Week: Failure
  • Positive Aspects of Failure
  • Historical Examples of Failure Leading to Success
  • Encouragement and Final Thoughts 

Resources:

KRONEAL Consulting

Blog

Embedding Ethics in the AI Lifecycle

Embedding ethics into the AI lifecycle is not an abstract exercise. It is a practical, repeatable discipline that mirrors the work of corporate compliance. It requires a structured approach, clear accountability, and documented evidence of good governance. Most importantly, it requires compliance professionals to be at the table from the very beginning. Ethics in AI cannot be retrofitted at the end of a development cycle. It must be built in from step one.

Today, I want to examine the ethical checkpoints at each stage of the AI lifecycle and highlight where corporate compliance functions must lead. The goal is to help you build a stronger, more resilient program while demonstrating to regulators and stakeholders that your AI governance is real and operational.

Ethics in Data Sourcing

All ethical AI begins with ethical data. You cannot build a responsible model on a flawed or contaminated foundation. Data sourcing is the earliest point at which compliance becomes critical. First, ensure that the lawful basis, ownership, and rights of use are fully documented for every dataset. This is both an ethical issue and a regulatory one. Next, require a structured review for Personally Identifiable Information (PII) and Protected Health Information (PHI). If the dataset contains sensitive personal information, ensure minimization and purpose limitation principles are applied.

Ethical review also requires looking beyond legality. You must ask a deeper question: does this data reflect the populations on whom the model will act? If certain groups are underrepresented or misrepresented, there is a direct ethical and operational risk. This is where compliance can partner with data teams to conduct bias hotspot reviews and remediation before training begins.
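A bias hotspot review can start as something quite simple. The following is a minimal sketch, not from the original post: it flags groups whose share of a training dataset falls well below their share of the population the model will serve. The field name, benchmark shares, and tolerance are all illustrative assumptions.

```python
from collections import Counter

def bias_hotspots(records, field, benchmarks, tolerance=0.5):
    """Return groups represented at less than `tolerance` times
    their expected (benchmark) share of the dataset."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    hotspots = {}
    for group, expected_share in benchmarks.items():
        actual_share = counts.get(group, 0) / total
        if actual_share < tolerance * expected_share:
            hotspots[group] = {"expected": expected_share,
                               "actual": round(actual_share, 3)}
    return hotspots

# Toy data: group B is 40% of the served population but only 10% of the sample.
sample = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
print(bias_hotspots(sample, "group", {"A": 0.6, "B": 0.4}))
```

A check like this does not replace a proper fairness review, but it gives compliance and data teams a shared, documented starting point for remediation before training begins.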

Ethics in Model Training

Once data enters the training process, the focus shifts to how the model is built. Ethical model training emphasizes transparency, reproducibility, and clear accountability. For compliance professionals, this is a familiar structure. At the beginning of training, require a Model Card version zero. This document describes the model’s intended purpose, its users, and its limitations. Think of it as the model’s job description and risk profile. Without this baseline documentation, the organization has no ethical framework for evaluating the model later.
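To make the "Model Card version zero" concrete, here is a minimal sketch of what that baseline document might capture in code. The schema and field names are illustrative assumptions, not a standard; the point is that purpose, users, and limitations are written down before training starts.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCardV0:
    """Hypothetical Model Card v0: the model's job description and risk profile."""
    model_name: str
    version: str
    intended_purpose: str
    intended_users: list
    known_limitations: list = field(default_factory=list)
    out_of_scope_uses: list = field(default_factory=list)

card = ModelCardV0(
    model_name="invoice-risk-screener",          # illustrative model name
    version="0.1.0",
    intended_purpose="Rank vendor invoices for compliance review",
    intended_users=["compliance analysts"],
    known_limitations=["Trained on US-entity invoices only"],
    out_of_scope_uses=["Automated payment blocking without human review"],
)
print(asdict(card))
```

Because the card is structured data rather than free text, it can be versioned alongside the model and checked programmatically at later gates.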

Compliance should also ensure the organization maintains a training bill of materials. Regulators and external auditors will expect clarity on what data, tools, seeds, configurations, and models fed the system. Ethical governance means that if something goes wrong, the organization can retrace its steps and identify the source. Finally, ensure that risks identified during training are assigned owners in the risk register. Ethical accountability requires clear signatures, not vague acknowledgment.
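A training bill of materials can likewise be a simple structured record. This sketch, with illustrative keys and values not drawn from the original post, shows the idea: every dataset, seed, configuration, and base model that fed the system is recorded, and every risk in the register has a named owner.

```python
# Hypothetical training bill of materials for one training run.
training_bom = {
    "model": "invoice-risk-screener:0.1.0",
    "datasets": ["vendor_invoices_2024_q1", "sanctions_list_2024_03"],
    "base_model": "open-weights-llm-7b",
    "random_seed": 1234,
    "training_config": {"epochs": 3, "learning_rate": 2e-5},
    "risk_register": [
        {"risk": "PII retained in free-text fields",
         "owner": "privacy_officer",       # ethical accountability: a named owner
         "status": "mitigation in progress"},
    ],
}

# Governance check: no risk may sit in the register without an owner.
unowned = [r["risk"] for r in training_bom["risk_register"] if not r.get("owner")]
print(unowned)
```

If something goes wrong later, this record is what lets the organization retrace its steps and identify the source.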

Ethics in Validation and Testing

No model should progress to deployment without a rigorous validation and ethical safety review. At this stage, you are no longer asking whether the model works. You are asking whether the model works in a way that is fair, safe, compliant, and aligned with corporate values. Compliance professionals should insist on structured red teaming for safety, privacy leakage, and discriminatory outputs. Ethical governance requires testing for misuse and unintended consequences, not simply functional performance.

Equally important is the articulation of pass/fail thresholds aligned with the organization’s risk tolerance. If a model shows drift toward unethical outcomes during testing, the organization must be prepared to pause or rework it. Ethics without enforcement is merely a suggestion. Legal review is also essential at this stage. Intellectual property rights, export controls, sector regulations, and customer contract obligations must all be considered. The organization’s ethical responsibility extends to ensuring its models do not inadvertently violate the law or expose users to regulatory scrutiny.
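The pass/fail gate described above can be expressed as a small function. This is a sketch under assumed metric names and thresholds; the actual metrics and risk-tolerance limits would come from your own validation plan.

```python
def validation_gate(metrics, thresholds):
    """Return (passed, failures). Deployment is blocked if any
    measured metric breaches its risk-tolerance threshold."""
    failures = {
        name: {"measured": value, "limit": thresholds[name]}
        for name, value in metrics.items()
        if name in thresholds and value > thresholds[name]
    }
    return (len(failures) == 0, failures)

# Illustrative measurements from red teaming and fairness testing.
measured = {"demographic_parity_gap": 0.12, "pii_leak_rate": 0.0}
limits = {"demographic_parity_gap": 0.05, "pii_leak_rate": 0.001}
passed, failures = validation_gate(measured, limits)
print(passed, sorted(failures))
```

The value of encoding the gate is enforcement: a breached threshold produces a recorded failure that must be remediated or formally risk-accepted, rather than a suggestion that can be waved through.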

Ethics in Deployment

Deployment is the point at which AI moves from the laboratory to real-world use. Ethical deployment requires safeguards that prevent inappropriate access, misuse, confusion, or misinterpretation. Role-based and environment-based access controls are essential. No one should have access to modify or use a model unless there is a documented business justification. Ethical governance also requires that user disclosures clearly explain the model’s capabilities, limitations, and data use practices. Users should never be misled into believing a system can do something it cannot.
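The access-control rule above, that no one may use or modify a model without a documented business justification, can be enforced mechanically. This sketch uses an illustrative in-memory registry and role names; a real deployment would back this with your identity and ticketing systems.

```python
# Hypothetical registry mapping (model, role) to a documented justification.
ACCESS_REGISTRY = {
    ("invoice-risk-screener", "compliance_analyst"):
        "Ticket GRC-1042: invoice review workflow",  # illustrative ticket ID
}

def authorize(model, role, action):
    """Allow an action only when a documented justification is on file."""
    justification = ACCESS_REGISTRY.get((model, role))
    if justification is None:
        raise PermissionError(
            f"No documented justification for {role} to {action} {model}")
    return justification

print(authorize("invoice-risk-screener", "compliance_analyst", "invoke"))
```

The key design point is that the justification itself is returned and can be logged with every access, producing the evidence trail regulators expect.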

Canary rollouts and automated rollback mechanisms are additional ethical guardrails. They allow organizations to detect unintended consequences early and reverse course before harm spreads widely. Compliance should also ensure that third-party vendors and service providers follow equivalent ethical and governance controls. You are ultimately responsible for the ethical risks you outsource.
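A canary rollout with automated rollback can be sketched in a few lines. The routing fraction and error-rate guardrail here are illustrative assumptions, not recommendations from the original post.

```python
from zlib import crc32

def route(request_id, canary_fraction=0.05):
    """Deterministically send a small fraction of traffic to the canary model."""
    return "canary" if crc32(request_id.encode()) % 100 < canary_fraction * 100 else "stable"

def should_rollback(canary_errors, canary_requests, max_error_rate=0.02):
    """Trigger rollback once the canary's observed error rate breaches the guardrail."""
    if canary_requests == 0:
        return False
    return canary_errors / canary_requests > max_error_rate

# 5 errors in 100 canary requests breaches a 2% guardrail.
print(should_rollback(canary_errors=5, canary_requests=100))
```

Deterministic routing matters for governance: the same request always hits the same model version, so incidents can be reconstructed after the fact.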

Ethics in Monitoring

Ethical oversight does not end when a model goes live. Ongoing monitoring is essential. Models drift. Data shifts. User populations change. A model that was ethical yesterday can become problematic tomorrow. Ethical monitoring means tracking for bias, accuracy degradation, safety issues, and misuse. It also implies routing alerts not only to engineering, but directly to compliance and risk. Ethics is not solely a technical matter. It is a governance responsibility.
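The monitoring-and-routing idea above can be sketched with a deliberately crude drift signal. The metric, threshold, and team names are illustrative; a production system would use a proper statistical drift test, but the governance point, that alerts route to compliance and risk as well as engineering, is the same.

```python
def mean_shift(baseline, live):
    """Crude drift signal: shift of the live mean, relative to the baseline mean."""
    base_mean = sum(baseline) / len(baseline)
    live_mean = sum(live) / len(live)
    return abs(live_mean - base_mean) / abs(base_mean)

def drift_alerts(baseline, live, threshold=0.10):
    """Return the teams to notify; compliance and risk are always on the route."""
    if mean_shift(baseline, live) > threshold:
        return ["engineering", "compliance", "risk"]
    return []

# Illustrative score distributions: live scores have drifted upward.
baseline_scores = [0.50, 0.52, 0.48, 0.51]
live_scores = [0.70, 0.68, 0.72, 0.69]
print(drift_alerts(baseline_scores, live_scores))
```

Wiring compliance into the alert route, rather than relying on engineering to escalate, is what turns monitoring from a technical task into a governance control.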

Incident response is another ethical requirement. Organizations must maintain a defined, repeatable process for identifying, containing, and resolving AI-related harms. If something goes wrong, you must be prepared to act quickly and transparently.

Ethics in Governance

Finally, ethics must be embedded in the organization’s AI governance structure. Ethical AI cannot depend solely on goodwill or ad hoc decision-making. Clear role definitions, evidence documentation, and leadership engagement must support it. A formal Responsible, Accountable, Consulted, and Informed (RACI) structure for each lifecycle stage ensures accountability. Board-level reporting ensures visibility. Annual independent audits ensure credibility. Ethical AI requires not only doing the right thing but also demonstrating it.

As with all compliance disciplines, documentation is your first line of defense. Maintain Model Cards, testing evidence, monitoring logs, and decision memos. Ethical governance cannot be proven without records. The work is ongoing and iterative. Ethical AI is not a destination. It is a continuous commitment woven into every operational step. Compliance professionals are uniquely suited to lead this work because we understand systems, controls, and organizational behavior. Ethical AI is compliance by another name.

Five Key Takeaways for the Compliance Professional

1. Ethical AI begins with ethical data. Ethical governance always starts with the quality, origin, and integrity of the data used to train and inform an AI system. Inaccurate, incomplete, unlawfully sourced, or unrepresentative data introduces bias and distortion before a single line of code is written. Compliance professionals must ensure that lawful bases, consent, ownership, and use rights are fully documented, and that sensitive information is minimized and properly protected. Ethical data sourcing also requires evaluating demographic representation and identifying potential bias hotspots early. When data is handled ethically, the entire lifecycle is strengthened, reducing long-term operational, regulatory, and reputational risks.

2. Documentation is an ethical control. Good documentation is not busywork. It is the backbone of ethical AI and a critical indicator of organizational seriousness. Model Cards provide transparency regarding purpose, intended users, limitations, and performance boundaries. Risk registers assign ownership and ensure accountability throughout development, deployment, and monitoring. Audit trails create the evidentiary record that regulators and external stakeholders expect when evaluating whether decisions were responsible, compliant, and well-governed. Without documentation, an organization cannot show that it understood the risks of a model or acted responsibly in response to them. Ethical AI requires a traceable, repeatable set of records that tells a clear story of control and oversight.

3. Ethical validation requires testing. Validation is often treated as a technical gate, but ethical AI requires a far broader examination of how a model behaves under real-world stress. Compliance teams must ensure models are exposed to adversarial testing, red-team challenges, privacy leak assessments, and discrimination checks. A model that performs with high accuracy in ideal conditions may fail ethically when confronted with edge cases or bad actors. Ethical validation demands looking not only at what the model is designed to do, but at what it might inadvertently do. Only by testing for harm, misuse, and unanticipated outcomes can organizations prevent downstream risks and protect users.

4. Deployment must include safeguards. Ethical deployment is the bridge between controlled development environments and unpredictable real-world use. Safeguards such as role-based access controls, environment segregation, and capability restrictions ensure the model is used appropriately. User disclosures prevent misunderstanding by making limitations, risks, and data practices clear. Deployment controls must also account for third parties. If a vendor, integrator, or partner interacts with the model, they must uphold equivalent governance standards. Ethical responsibility does not end at the organizational boundary. Compliance oversees these safeguards to ensure that the model behaves as expected, users are not misled, and vulnerabilities are not introduced through poor operational controls.

5. Ethical monitoring is continuous. Ethics in AI is not solved at launch. Models evolve as data, user behavior, and external conditions shift. Continuous monitoring detects drift, reintroduction of bias, system degradation, and misuse patterns before harm spreads. Compliance plays a central role by ensuring real-time alerts flow to appropriate stakeholders, not solely to engineering teams. Incident response frameworks allow the organization to act quickly, document remedial action, and learn from failures. Regular reporting to senior leadership and the board reinforces accountability and aligns AI behavior with organizational values. Ethical monitoring is the mechanism that keeps AI trustworthy long after deployment.

If compliance does not lead ethical AI governance, someone else will. It is time for compliance to step forward.

If you would like a checklist for Embedding Ethics into the AI Lifecycle, leave us a Voice Mail.