Categories
Daily Compliance News

Daily Compliance News: August 19, 2025, The AI Discontent Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Panamanian intermediary pleads guilty to bribery and corruption. (Enmayuscula)
  • The winter of our AI discontent. (Bloomberg)
  • Understanding corruption. (Investopedia)
  • When good enough is good enough. (FT)

You can donate to flood relief for victims of the Kerr County flooding by going to the Hill Country Flood Relief here.

Compliance Tip of the Day

Compliance Tip of the Day – AI Assistant for Compliance

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, we aim to provide you with bite-sized, actionable tips to help you stay on top of your compliance game. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we continue our 5-part series on using AI in a best practices compliance program by considering how a compliance professional can use AI as an assistant.

For more on this topic, check out The Compliance Handbook: A Guide to Operationalizing Your Compliance Program, 6th edition, which LexisNexis recently released. It is available here.

AI Today in 5

AI Today in 5: August 19, 2025, The AI and Compliance Episode

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5. All from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

  • Texas AG goes after chatbots for kids’ mental health services. (KVUE)
  • China is turning to AI in information warfare. (NYT)
  • Does using AI put you on the wrong side of compliance? (UC Today)
  • Using AI for cross-border trade. (World Business Outlook)
  • Greenlight sues Compliance AI over trademark violation. (Bloomberg)

For more information on the use of AI in compliance programs, check out my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.

Word of the Week

Word of the Week with Kenneth O’Neal – Discussing Character: Importance, Decline, and Impact

Each week, Kenneth O’Neal discusses a word that describes a principle or value of the Qualities of Success. We suggest you use the Word of the Week in your thoughts, deeds, and actions. You may already possess these qualities and want to develop them to a higher level, or you may want to replace a bad habit with a good one. Write an action step and use it daily to produce the quality in your life. In this episode, Kenneth O’Neal and Rick Phipps discuss the word – Character.

Kenneth and Rick discuss the concept of character and its significance in personal and professional life, exploring the decline in character and moral standards over the past 25 years. They cite examples like Watergate and Enron to illustrate their points. The discussion includes the origins of the word ‘character,’ the positive effects of strong character, and the challenges faced in maintaining it in modern society. They emphasize the importance of integrity, consistent values, and effective communication, drawing on historical figures like George Washington to exemplify strong character. The episode concludes with a call to action for listeners to reflect on their character and strive to do the right thing.

Key highlights:

  • Weekly Word: Character
  • The Importance of Strong Character
  • Decline in Character and Cultural Values
  • Historical Examples of Character

Resources:

KRONEAL Consulting

Innovation in Compliance

Innovation in Compliance – Gaurav Kapoor on Risk Management and the Role of AI in GRC

Innovation comes in many areas, and compliance professionals need to be ready for it and embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, Tom Fox interviews Gaurav Kapoor, Vice Chairman, Co-Founder and Board Member of MetricStream, discussing his extensive professional background, from co-founding MetricStream to his current focus on customer intimacy amid AI market disruptions.

Kapoor delves into the evolving landscape of risk management, emphasizing the importance of midyear reviews and integration of various risk themes like operational risk, audit compliance, and cybersecurity. He elaborates on the role of AI in GRC, explaining how generative and agentic AI can streamline compliance processes and enhance risk management strategies. The conversation also touches on the increasing significance of cybersecurity, geopolitical instability, and climate impact on risk assessment. Kapoor highlights the shift from compliance to a more resilient and risk-aware culture within organizations.

Key highlights:

  • The Importance of July in Risk Management
  • AI’s Role in GRC
  • Emerging Risks and AI Applications
  • Counseling Boards on Risk Management
  • Top Concerns for the Second Half of 2025
  • Evolving Role of Compliance and Risk Officers

Resources:

MetricStream Website and on LinkedIn

Gaurav Kapoor on LinkedIn

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Blog

Building Your Own AI Assistant: Compliance Lessons in Customization

Ed. Note: This week, we present a week-long series on the use of GenAI in a best practices compliance program. Additionally, for each blog post, I have created a one-page checklist that you can use in presentations or for easier reference. Email my EA Jaja at jaja@compliancepodcastnetwork.net for a complimentary copy.

In the ever-changing world of compliance, resource constraints remain one of our biggest hurdles. Whether you’re drafting policies, conducting risk assessments, or preparing investigation summaries, the work is often repetitive, labor-intensive, and subject to tight deadlines. Enter the AI assistant, not as a futuristic dream, but as a practical, buildable tool available to compliance professionals right now.

Alexandra Samuel’s Harvard Business Review article, “How to Build Your Own AI Assistant,” makes one point crystal clear: if you can describe a project in plain English, you can build your own AI assistant. And for compliance professionals, this represents a transformative opportunity to reduce administrative burdens while increasing consistency, accuracy, and adaptability.

But building your compliance AI assistant isn’t about chasing efficiency alone—it’s about making intentional design choices that reinforce compliance objectives, protect corporate culture, and ensure regulatory defensibility. Today, we consider five key takeaways for compliance professionals, each showing how you can harness AI assistants to enhance, not replace, your compliance program.

1. Start with the Right Use Cases

Before building, compliance leaders must ask: What problems do we want AI to solve? Samuel notes that AI assistants excel in four domains: writing and communications, troubleshooting, project management, and strategic coaching. For compliance, this translates into use cases like:

  • Drafting first-pass policy updates aligned with global regulations.
  • Summarizing enforcement actions for Board reporting.
  • Automating responses to routine employee compliance questions (e.g., “Can I accept this client gift?”).
  • Tracking investigation timelines and automatically extracting action items from meeting transcripts.

Choosing the right use case ensures your AI assistant is a force multiplier rather than a shiny distraction. Importantly, you want to start with low-risk, high-volume tasks. Drafting an anti-corruption annual training memo? AI can handle the boilerplate. Deciding whether to disclose a potential FCPA violation to the DOJ? That still belongs squarely in the human domain.

The real lesson here: compliance officers should not let “AI hype” dictate priorities. Instead, define pain points within your compliance workflow and build assistants targeted at those specific, recurring problems. Start small, iterate, and scale responsibly.
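The triage logic above can be sketched in a few lines. This is an illustrative example, not a prescription from Samuel's article; the use-case names, risk labels, and volume cutoff are all hypothetical:

```python
# Illustrative sketch: triage candidate AI use cases by risk and volume,
# keeping high-risk judgment calls (e.g., FCPA disclosure decisions) with
# humans. The example tasks and the volume cutoff are hypothetical.

def triage(use_cases: list[dict]) -> list[str]:
    """Return only low-risk, high-volume tasks as AI-assistant candidates."""
    return [
        u["name"]
        for u in use_cases
        if u["risk"] == "low" and u["monthly_volume"] >= 20
    ]

candidates = triage([
    {"name": "training memo boilerplate", "risk": "low", "monthly_volume": 40},
    {"name": "routine gift questions", "risk": "low", "monthly_volume": 120},
    {"name": "FCPA disclosure decision", "risk": "high", "monthly_volume": 1},
])
# The FCPA disclosure decision is filtered out: it stays in the human domain.
```

Even a simple filter like this forces the team to write down, explicitly, which tasks are in scope for automation and which are not.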

2. Design Clear Instructions—Your Assistant Is Only as Good as Its Guidance

According to Samuel, the “heart” of a custom AI assistant is the set of instructions you provide. For compliance teams, this is where risk and opportunity intersect. If your assistant doesn’t know who it is, what standards to apply, and what tone to use, it will produce outputs that undermine your credibility.

Think of instructions as your assistant’s Code of Conduct. Instead of saying “you are a compliance assistant,” you can be more precise:

  • “You are a corporate compliance officer drafting policies for a multinational company. You must ensure all content aligns with DOJ guidance on effective compliance programs, uses a professional but approachable tone, and provides practical examples for employees.”

These custom instructions allow you to “bake in” compliance frameworks from day one. For example, you can require the assistant to reference the COSO Framework for Internal Controls, ISO 37001, or the DOJ’s Evaluation of Corporate Compliance Programs whenever relevant.

The key compliance insight: good AI assistants reflect great compliance design. Just as vague compliance policies create ambiguity, vague AI instructions create unreliable outputs. Invest time in precise persona-building for your assistant, and you’ll reap consistent, defensible results.
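As a sketch of how such instructions could be assembled in practice, the persona can be built programmatically so that required frameworks are never omitted. The persona wording, framework list, and helper functions below are illustrative assumptions, not part of Samuel's article:

```python
# Illustrative sketch: compose a compliance-assistant persona as a system
# prompt, baking in role, tone, and required frameworks. The persona text
# and the REQUIRED_FRAMEWORKS list are hypothetical examples.

REQUIRED_FRAMEWORKS = [
    "DOJ Evaluation of Corporate Compliance Programs",
    "ISO 37001",
    "COSO Framework for Internal Controls",
]

def build_instructions(role: str, tone: str, frameworks: list[str]) -> str:
    """Compose a system prompt that bakes in role, tone, and frameworks."""
    lines = [
        f"You are {role}.",
        f"Use a {tone} tone and provide practical examples for employees.",
        "Where relevant, align all content with:",
    ]
    lines += [f"- {f}" for f in frameworks]
    return "\n".join(lines)

def validate_instructions(instructions: str, frameworks: list[str]) -> bool:
    """Simple guard: reject instruction sets that omit a required framework."""
    return all(f in instructions for f in frameworks)

prompt = build_instructions(
    role="a corporate compliance officer drafting policies for a multinational company",
    tone="professional but approachable",
    frameworks=REQUIRED_FRAMEWORKS,
)
```

Treating the instruction set as versioned, validated text rather than an ad hoc chat message is what makes the resulting outputs auditable later.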

3. Feed It Knowledge—Without Losing Control of Sensitive Data

Samuel emphasizes that AI assistants become truly powerful when equipped with background documents, such as policies, reports, contracts, or training decks. For compliance, this is both a gold mine and a minefield.

On one hand, uploading prior investigation reports, risk assessments, or compliance training modules allows your assistant to generate outputs that reflect your company’s real history and regulatory environment. Imagine an assistant that can instantly pull together a cross-border risk assessment using your own prior filings and internal guidance.

On the other hand, compliance officers must stay vigilant about data protection, privilege, and confidentiality. Sensitive HR records, whistleblower reports, and privileged investigation materials should never be indiscriminately fed into a platform without proper safeguards.

Here lies the balancing act: compliance teams must create AI assistants that are well-informed but tightly governed. This may involve anonymizing data, working through secure enterprise-grade AI platforms, or restricting inputs to public and non-sensitive internal documents.

The compliance lesson is simple but non-negotiable: context matters, but confidentiality reigns supreme. Building a compliance AI assistant means establishing protocols, in advance, for what can and cannot be shared.
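A minimal sketch of that control point might look like the following. Real programs need far more robust PII handling (named-entity recognition, review queues, enterprise data-loss-prevention tooling); the patterns, labels, and function names here are hypothetical and only show where the gate sits:

```python
import re

# Illustrative sketch: redact obvious identifiers and gate uploads by data
# classification before a document reaches an assistant's context files.
# The regexes, labels, and function names are hypothetical examples.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def redact(text: str) -> str:
    """Mask emails and phone numbers before upload."""
    text = EMAIL.sub("[REDACTED EMAIL]", text)
    return PHONE.sub("[REDACTED PHONE]", text)

ALLOWED_LABELS = {"public", "internal-nonsensitive"}

def prepare_for_upload(doc: str, label: str) -> str:
    """Gate uploads: only approved classifications, and always redacted."""
    if label not in ALLOWED_LABELS:
        raise PermissionError(f"Document classified '{label}' may not be uploaded.")
    return redact(doc)
```

The point is that the classification check fails closed: privileged or whistleblower material never reaches the redaction step, let alone the platform.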

4. Iterate Constantly—Think Like a Compliance Monitor

Just as compliance programs require continuous improvement, so too do AI assistants. Samuel makes it clear that assistants won’t be perfect out of the box. They require ongoing feedback, refinement, and adjustment.

For compliance professionals, this is second nature. We already think in terms of monitoring, auditing, and revising. Apply the same discipline to your AI assistant:

  • Audit its outputs for accuracy, tone, and regulatory defensibility.
  • Track where it consistently underperforms (e.g., misinterpreting data privacy rules) and feed corrective instructions.
  • Periodically “refresh” its context files to reflect updated regulations, new enforcement actions, or changes in corporate policy.

Samuel suggests asking your assistant to write its own revised instructions based on your feedback. That’s a compliance monitoring exercise in itself—your assistant becomes both subject and participant in continuous improvement.

The compliance takeaway: treat your AI assistant as a dynamic system, not a static tool. Just as DOJ expects ongoing risk assessments and remediation, regulators will expect that AI tools in compliance are actively managed, not blindly trusted.
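The audit-and-refine loop described above can be made concrete with a small tracker. This is a hypothetical sketch under assumed names; the finding categories and the threshold are illustrative, not from the article:

```python
from collections import Counter

# Illustrative sketch: log audit findings on assistant outputs and flag
# categories that consistently underperform, so corrective instructions
# can be fed back. Category names and the threshold are hypothetical.

class OutputAudit:
    def __init__(self, threshold: int = 3):
        self.findings = Counter()
        self.threshold = threshold

    def record(self, category: str) -> None:
        """Log one audit finding, e.g., 'data-privacy misread'."""
        self.findings[category] += 1

    def needs_corrective_instructions(self) -> list[str]:
        """Categories whose repeat count has crossed the review threshold."""
        return [c for c, n in self.findings.items() if n >= self.threshold]
```

This mirrors how a compliance monitor works: individual misses are noted, but it is the recurring pattern that triggers a change to the underlying controls, here the assistant's instructions.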

5. Embed Ethical Guardrails and Accountability

The most important compliance lesson in building your own AI assistant is ensuring accountability. As Samuel warns, assistants can hallucinate or produce flawed outputs. In compliance, this is not simply an annoyance; it is a potential liability.

That means your assistant must operate under ethical guardrails:

  • Always include a human-in-the-loop review before any AI-generated compliance document is finalized.
  • Require disclosures when AI was used in drafting policies, reports, or training.
  • Train employees not to treat AI outputs as gospel but as drafts for critical evaluation.
  • Align your assistant’s objectives with compliance KPIs (accuracy, transparency, and defensibility) rather than raw speed.

This mirrors the DOJ’s emphasis on corporate accountability. An AI assistant may help draft your gifts and entertainment policy, but it cannot stand before prosecutors and defend your compliance program. That responsibility remains squarely with leadership.

The compliance lesson here is unmistakable: AI is a tool, not a scapegoat. Build it to augment compliance decision-making, not to absolve it.

From Experiment to Integration

Building your own AI assistant is not a technical challenge. It is a compliance design challenge. As Alexandra Samuel reminds us, if you can describe your project, you can build your assistant. For compliance officers, that means thinking intentionally about use cases, precision in instructions, safeguards for sensitive data, iteration, and ethical guardrails.

The opportunity is immense. With thoughtfully designed AI assistants, compliance professionals can shift their focus from repetitive drafting to higher-order strategy, from administrative overload to proactive risk management. But the responsibility is equally immense. An AI assistant reflects the design choices of its creators, choices that must always prioritize compliance culture, accountability, and trust.