Categories
2 Gurus Talk Compliance

2 Gurus Talk Compliance – Episode 60 – The Dispatches Edition

What happens when two top compliance commentators get together? They talk compliance, of course. Join Tom Fox and Kristy Grant-Hart in 2 Gurus Talk Compliance as they discuss the latest compliance issues in this week’s episode!

 Stories this week include:

  • A former Navy No. 2 was sentenced to 6 years for corruption. (NBC)
  • BCG employees to take Humanitarian Principles training. (FT)
  • DOJ is about to cut loose the Binance monitor. (Bloomberg)
  • Trump calls for the end of quarterly reporting for public companies. (NYT)
  • First AI CCO. (BBC)
  • Dispatches from the SCCE Conference. (Radical Compliance)
  • Trump and Europe are at odds over how to sanction Russia. (WSJ)
  • What compliance leaders need to know ahead of the crucial DOJ Data Security Program deadline. (Corporate Compliance Insights)
  • The rush to return to office is stalling. (WSJ)
  • Florida man clings to back of moving UPS truck to avoid deputies after Lowe’s shoplifting attempt. (FOX 35 Orlando)

Connect with the Hosts:

Prove Your Worth

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Blog

Cybersecurity Oversight at the Board

Cybersecurity risk is no longer a back-office IT issue. It is a board-level governance priority, a regulatory compliance challenge, and a reputational minefield. From ransomware attacks to regulatory enforcement actions, the stakes have never been higher. A recent article on the Harvard Law School Forum on Corporate Governance, “Risk Management and the Board of Directors,” reviewed the NACD’s 2025 survey, which found that over three-quarters of boards now discuss the material and financial implications of cyber incidents. While that is progress, awareness alone is not enough.

For compliance professionals, the message is unmistakable: cybersecurity oversight is now a central pillar of governance. In this post, I will explore the evolving regulatory landscape, lessons from enforcement actions, and practical steps compliance teams can take to help boards discharge their responsibilities effectively.

A National Priority with Global Reach

Cybersecurity has moved to the top of national agendas. The Biden Administration’s 2023 National Cybersecurity Strategy set the tone, and the Trump Administration’s 2025 Executive Order reinforced it, emphasizing protections against foreign cyber threats and secure technology practices. But this is not just a U.S. issue. The EU’s GDPR, California’s CCPA, Virginia’s CDPA, and Illinois’s biometric data laws all impose sweeping obligations with high-stakes enforcement. Settlements under Illinois’s biometric privacy law alone have reached into the hundreds of millions.

For compliance professionals, this expanding patchwork of regulation means that cyber oversight cannot be siloed by geography or business unit. Boards must ensure management understands and complies with both domestic and international requirements.

The SEC Steps into the Spotlight

If boards needed any reminder of their cyber responsibilities, the SEC has provided it. In 2023, the SEC finalized disclosure rules requiring companies to report material cyber incidents on Form 8-K within four business days (subject to limited delays approved by the Attorney General). Companies must also disclose in their 10-Ks their processes for identifying and managing cyber risks, the material impacts of prior incidents, and, critically, the board’s role in oversight.
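To make the timing concrete, the four-business-day clock can be sketched in a few lines of code. This is an illustration only, not a compliance tool: the function name is a hypothetical, and the sketch skips weekends but not federal holidays, which a real deadline calculation must also exclude.

```python
from datetime import date, timedelta

def form_8k_deadline(determination_date: date) -> date:
    """Return the Form 8-K filing deadline: four business days after the
    date the company determines a cyber incident is material.

    Simplification for illustration: skips weekends only; a real
    calculation must also exclude federal holidays."""
    deadline = determination_date
    remaining = 4
    while remaining > 0:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # Monday (0) through Friday (4)
            remaining -= 1
    return deadline

# Materiality determined on a Thursday: the clock runs Fri, Mon, Tue, Wed.
print(form_8k_deadline(date(2025, 9, 25)))  # -> 2025-10-01
```

The point of the exercise is that a weekend (or a holiday) in the window pushes the deadline out, so incident response plans should compute the date mechanically rather than assume "four days."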

The SEC has coupled disclosure mandates with enforcement actions. From Robinhood in 2025 (failure to implement identity theft protections) to SolarWinds in 2023 (alleged fraud and internal control failures), to Blackbaud’s ransomware misrepresentations and Morgan Stanley’s vendor monitoring failures, the Commission is signaling that cyber lapses are securities law violations. The key takeaway for compliance is that disclosures must be accurate, controls must be effective, and boards must demonstrate active oversight. Anything less may well invite regulatory scrutiny.

DOJ, FTC, and State Regulators Join In

The SEC is not alone. The DOJ has used the False Claims Act to address software vulnerabilities sold to government agencies. The FTC has pursued cases against GoDaddy and other providers for failing to implement adequate protections. The New York Department of Financial Services (NYDFS) has enforced its prescriptive cybersecurity rules since 2019, with actions as recent as August 2025. And globally, regulators like Ireland’s Data Protection Commission have issued blockbuster fines, such as the €530 million penalty against TikTok for unlawful data transfers.

The compliance implication is clear: multi-layered enforcement is now the norm. Cybersecurity and data privacy risks span agencies, jurisdictions, and statutes. Boards must assume that regulators will coordinate, cross-reference, and pursue failures aggressively.

Frameworks That Matter

With enforcement risk high, companies need a structured approach. The National Institute of Standards and Technology (NIST) Cybersecurity Framework has become the de facto benchmark; version 2.0 organizes programs around six core functions: govern, identify, protect, detect, respond, and recover. Both the SEC and FTC endorse it, and boards should expect management to benchmark their programs against it.

At the governance level, the NACD’s Director’s Handbook on Cyber-Risk Oversight and guidance from the Cybersecurity & Infrastructure Security Agency (CISA) provide clear expectations: boards should not manage cyber risk, but they must oversee management’s handling of it.

Lessons from Enforcement Actions

Every enforcement case tells a story, and compliance professionals should use these as teaching tools:

  • Vendor Oversight Matters – Morgan Stanley’s failure to monitor vendors exposed data from 15 million customers. Boards must ensure that vendor cyber risk is integrated into their oversight.
  • Accurate Disclosures Are Non-Negotiable – SolarWinds and Blackbaud faced allegations of misrepresentation around breaches. Boards must verify that management’s cyber disclosures are truthful and complete.
  • Controls Must Be Tested – Robinhood’s identity theft control failures remind us that having policies on paper is not enough. Boards should require evidence that controls work in practice.

Practical Steps for Compliance Professionals

So how can compliance officers help boards meet their obligations in this complex cyber landscape? Four steps stand out:

1. Educate and Engage the Board

Boards need ongoing, tailored education on cyber risks. Compliance should arrange regular briefings from CISOs, external experts, and regulators. This ensures directors can ask informed questions and challenge management effectively.

2. Strengthen Incident Response Preparedness

An incident response plan is only as strong as its execution. Compliance must test plans through tabletop exercises, ensure disclosure obligations are understood, and coordinate with law enforcement and advisors. Boards should be briefed on lessons learned after every drill or real incident.

3. Integrate Cyber Risk into Enterprise Risk Management

Cyber risk cannot be isolated from strategy, finance, and operations. Compliance should help boards see cyber threats as part of enterprise risk management, aligned with business goals and resilience planning.

4. Monitor Third-Party and Supply Chain Risk

Vendors, cloud providers, and contractors are often the weak link. Compliance should implement due diligence, ongoing monitoring, and contract requirements that address cyber obligations. Boards should receive visibility into these risks and the company’s mitigation strategies.

Why This Matters for Boards and Compliance

Cybersecurity is not just an IT challenge; it is a governance imperative. Regulators, courts, and investors expect boards to demonstrate active, documented oversight. For compliance professionals, the mandate is to help boards meet that expectation with clarity, structure, and evidence.

The reality is stark: a single breach can devastate a company’s reputation, stock price, and stakeholder trust. But boards that embrace active oversight, guided by compliance professionals, can transform cybersecurity from a vulnerability into a competitive advantage.

Final Thoughts

The cyber landscape is evolving faster than most organizations can keep pace. But boards do not have the luxury of waiting. As recent regulations and enforcement actions demonstrate, oversight failures will be punished, sometimes harshly.

For compliance professionals, this is both a challenge and an opportunity. By educating boards, strengthening incident response, integrating cyber into enterprise risk, and addressing third-party exposures, compliance can elevate its role from policy enforcer to strategic partner.

The bottom line: Cybersecurity oversight is no longer optional. It is the frontline of governance, and compliance professionals are the essential guides helping boards navigate it.

Categories
AI Today in 5

AI Today in 5: September 25, 2025, The Red Lines for AI Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI drawn from the business world, compliance, ethics, risk management, leadership, or general interest. So start your day, sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5.

Top AI stories include:

For more information on the use of AI in compliance programs, see my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.

Categories
Blog

Directors and AI: Do’s, Don’ts, and Compliance Lessons

Artificial intelligence (AI) has rapidly become embedded in the daily workflows of executives, employees, and, increasingly, board directors. From drafting strategy summaries to analyzing industry data, directors are turning to AI chatbots and transcription tools in the same way they once adopted email, spreadsheets, or virtual board portals. However, unlike those earlier technologies, AI presents new risks, and for directors, these risks intersect directly with fiduciary duties and corporate governance obligations.

A recent memorandum by Skadden, Arps, Slate, Meagher & Flom LLP, published through the Harvard Law School Forum on Corporate Governance, outlines practical dos and don’ts for directors using AI in their board roles. The message is clear: while AI offers great promise, directors must use it with caution. For compliance professionals, this guidance provides important lessons not only for boardrooms but also for the governance structures that surround them.

The Temptation of AI in the Boardroom

Boards are expected to absorb massive amounts of information, from financial results and strategy papers to compliance reports and cybersecurity dashboards, often under tight timelines. It is easy to see why a director might feed these materials into an AI tool to produce summaries or ask for red flags. Similarly, transcription services appear attractive for documenting complex board meetings and discussions. But here lies the trap: not all AI tools are created equal. Publicly available chatbots often train on user inputs, meaning that confidential board information could be incorporated into the system and potentially regurgitated to other users, including competitors.

Just as you would never allow directors to send board books through unsecured email, AI tools need guardrails.

Key Risks Identified in the Director’s Guide

The Skadden memorandum outlines several risks directors must consider when using AI in their corporate capacities:

  1. Confidentiality and Data Leakage – Uploading sensitive materials into public AI systems risks exposing trade secrets or personal data. Even if the information is deleted from a user’s history, the AI vendor may still retain and train on it.
  2. Discovery and Litigation Risks – AI chats are records. Like emails, they may be discoverable in litigation or regulatory reviews. Regulators could demand access to AI interactions if they involve matters under scrutiny, such as antitrust reviews of mergers and acquisitions (M&A) activity.
  3. Loss of Privilege – Using AI to transcribe board meetings or communications with counsel risks waiving attorney-client privilege. Once third parties have access, privilege may be lost forever.
  4. Accuracy and Hallucinations – AI outputs can be wrong, biased, or outdated. Treating AI results as authoritative without verification exposes directors to poor decision-making and potential breaches of fiduciary duties.
  5. Erosion of Human Judgment – Over-reliance on AI to make HR, strategy, or other critical decisions risks abdicating the duty of care and loyalty. Directors must remain firmly “in the loop”.

Compliance Lessons for Professionals

From these risks, we can distill key lessons for compliance officers advising boards and executives on AI governance.

1. Confidential Information Must Stay Inside the Perimeter

Compliance professionals should establish clear rules: no uploading of board materials, personal data, or trade secrets into public AI tools. Instead, direct the board to company-approved platforms that are vetted for security and configured to prevent training on sensitive inputs. This is not just a best practice; it may also be required to comply with contractual obligations, privacy laws, and internal data-protection policies.

2. Treat AI Chats as Discoverable Records

Boards should assume that anything shared with AI may one day be discoverable by others. Compliance professionals must include AI chats and transcripts in records-retention policies and advise directors to avoid discussing sensitive legal or competitive issues in public AI systems. This lesson mirrors earlier corporate missteps with text messages and messaging apps. AI is the new frontier for discoverability.

3. Preserve Privilege by Avoiding AI for Legal Matters

Directors must not use AI to record privileged discussions with counsel or board meetings, as doing so risks waiving attorney-client privilege. Compliance officers should make this an explicit policy. Approved transcription tools may be used for training sessions or customer service calls, but never for board-level deliberations. Losing privilege could cripple a company’s defense in litigation. Compliance officers should hammer this home during board training.

4. Verify Before You Trust

AI has a well-documented tendency to “hallucinate.” Directors must be reminded: AI is not a single source of truth. Compliance programs should emphasize verification. Encourage directors to cross-check AI outputs against trusted sources and ensure management reviews AI-generated analyses before relying on them for decision-making.

5. AI Is a Tool, Not a Decision-Maker

The most important compliance lesson: AI augments but does not replace human judgment. Directors remain bound by duties of care and loyalty. Compliance professionals must make clear that delegating decision-making to AI tools could not only harm the company but also expose directors to personal liability.

Building a Compliance Framework for Board Use of AI

The Skadden guide closes by urging boards to develop clear policies for AI use, including approved tools, acceptable uses, and required disclosures. For compliance officers, this is an opportunity to lead.

Here are key framework elements to consider:

  • Approved Tools List – Maintain a list of AI platforms validated by IT and legal for security and compliance.
  • Acceptable Use Policy – Define when and how directors may use AI (e.g., industry research, summarizing public filings) versus prohibited uses (e.g., uploading board decks, transcribing meetings).
  • Training and Awareness – Provide directors with training on AI risks, including confidentiality, discoverability, and hallucinations.
  • Monitoring and Audit – Periodically review the use of AI by directors to ensure compliance with relevant policies and regulations.
  • Disclosure Requirements – Require directors to disclose if AI tools were used to generate or summarize board-related materials.
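Parts of such a framework can be partially automated at the point of use. The sketch below is hypothetical throughout: the tool names, the prohibited-use labels, and the `review_ai_request` helper are illustrative assumptions, not a reference to any real platform or policy.

```python
# Illustrative policy-gate sketch; all names here are invented examples.
APPROVED_TOOLS = {"SecureSummarizer", "BoardPortalAI"}   # vetted by IT and legal
PROHIBITED_USES = {"upload_board_deck", "transcribe_meeting"}

def review_ai_request(tool: str, use_case: str) -> str:
    """Apply the acceptable-use policy to a proposed AI use.

    Order matters: an unapproved tool is blocked regardless of use case,
    and even approved tools are blocked for prohibited uses."""
    if tool not in APPROVED_TOOLS:
        return "blocked: tool not on approved list"
    if use_case in PROHIBITED_USES:
        return "blocked: prohibited use"
    return "allowed: log for periodic audit"

print(review_ai_request("SecureSummarizer", "summarize_public_filing"))
print(review_ai_request("BoardPortalAI", "transcribe_meeting"))
```

Even a simple gate like this generates the audit trail that the "Monitoring and Audit" element of the framework depends on.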

Final Thoughts

The “Do’s and Don’ts of Using AI” is a timely reminder: AI governance is not only about company-wide adoption. It also starts at the top, with the board itself. Directors tempted to use AI in their own roles face unique risks. These risks could compromise confidentiality, destroy privilege, or erode fiduciary oversight.

For compliance professionals, this presents an opportunity to serve as both educator and enforcer. Just as compliance led the charge on insider trading policies, conflicts of interest, and anti-bribery training, so too must we lead on AI governance.

The bottom line is that AI can be an extraordinary tool for directors. But without compliance guardrails, it can also be a governance trap. Our role is to ensure the boardroom and the company stay on the right side of that line.

Categories
AI Today in 5

AI Today in 5: September 24, 2025, The AI Literacy Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI drawn from the business world, compliance, ethics, risk management, leadership, or general interest. So start your day, sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5.

Top AI stories include:

For more information on the use of AI in compliance programs, see my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.

Categories
Blog

Building a Compliance Playbook for AI: Board-Level Lessons in Cybersecurity Oversight

Artificial intelligence (AI) has been heralded as one of the most transformative technologies of our time. It promises efficiency, productivity, and entirely new business models. Yet, as with any tool of such power, AI is both a friend and a foe. For corporate directors, compliance officers, and risk professionals, AI presents a dual challenge: leveraging its defensive strengths while preparing for its potential weaponization by malicious actors.

The National Association of Corporate Directors (NACD), in partnership with the Internet Security Alliance (ISA), has released a special supplement to its Directors’ Handbook on Cyber-Risk Oversight devoted entirely to AI in cybersecurity. It is a timely publication: 72% of companies were already using AI in 2024, and the risks are accelerating just as fast as adoption. For the compliance community, the report provides a roadmap for oversight, governance, and practical questions boards must ask management.

AI as Both Force Multiplier and Risk Multiplier

On one side of the ledger, AI enhances cybersecurity by automating threat detection, reducing false positives, identifying malware, and analyzing oceans of log data. Used wisely, AI allows companies to “get ahead of theft”. This includes identifying vulnerabilities before criminals exploit them. Generative AI and large language models (LLMs), in particular, can speed detection, enrich threat indicators, and even suggest remediation steps.

However, these same capabilities are available to cybercriminals. AI lowers the barrier of entry for less sophisticated hackers, turbocharges phishing and social engineering campaigns, and allows nation-states to refine cyberattacks at scale. This duality makes AI unique: it amplifies both opportunity and risk simultaneously.

Oversight Imperatives for Boards

The handbook identifies four key imperatives for boards responsible for overseeing AI and cybersecurity.

1. Director Education – Boards must commit to continuous learning about AI’s risks, benefits, and regulatory developments. Few leaders yet possess the technical grounding needed to appreciate AI’s implications.

2. Threat and Opportunity Awareness – Directors must understand not just the dangers but also the strategic benefits AI can bring.

3. Regulation and Disclosure – Boards must anticipate evolving rules and disclosure obligations. AI oversight will require the same level of rigor as financial and ESG reporting.

4. Board Readiness – Boards must ensure management builds governance structures, ethical use frameworks, and clear communication channels about AI’s role.

Compliance Lessons from the NACD AI in Cybersecurity Handbook

1. Third-Party and Supply Chain Risk Will Intensify

Boards are advised to scrutinize vendors’ AI tools and data sources. As the handbook emphasizes, AI models can be trained on data of questionable provenance, including intellectual property, personally identifiable information, or even classified information. Using such models can expose organizations to liability. For compliance professionals, this means conducting enhanced due diligence on third-party AI systems. Ask vendors how they source training data, what models they use, and whether they have human oversight mechanisms in place to ensure quality. AI risk is now a key component of supply chain risk.

2. Transparency Is a Non-Negotiable

AI systems often function as “black boxes.” Their lack of explainability poses reputational and legal risks when decisions cannot be justified. Boards are urged to push for transparency in AI deployment, both internally and in customer-facing applications. For compliance professionals, this means incorporating explainability into your AI governance framework. Require documentation of training data, decision-making logic, and model limitations. If regulators ask, you must be able to show your work.

3. Continuous Monitoring Is the New Standard

As highlighted in the AI Seven-Step Governance Program, AI oversight requires more than pre-deployment testing. Continuous monitoring, auditing, and retraining must occur throughout the lifecycle of AI tools to ensure their effective use. For the compliance professional, this means your program must move beyond “check-the-box” vendor certifications. Build ongoing monitoring and assurance processes. Think of AI oversight as dynamic, not static.

4. Regulation Will Come Fast and Furious

The NACD warns that while regulators often lag innovation by three to five years, the window for AI is already shortening. Boards relying on a “wait and see” approach will find themselves overwhelmed when rules arrive. Clearly, the compliance function must do more than wait for regulators; even if the US government were inclined to act, the political will for comprehensive AI rules may not materialize soon. This means you should align your approach today with emerging frameworks, such as the EU AI Act, the NIST AI Risk Management Framework, and OECD principles. Position your company to demonstrate proactive governance.

5. Disclosure Expectations Will Rise

AI adoption carries disclosure obligations across transparency, risk assessment, and incident reporting. Boards must assume that regulators and investors alike will demand clear, timely disclosure of AI-related incidents and governance practices. Compliance must lead the way in building AI into disclosure controls and procedures now. Ensure incidents involving AI failures are reported with the same rigor as material cybersecurity breaches.

6. The Board Must Get Educated—and Fast

The handbook emphasizes director education. Boards that lack AI fluency will struggle to provide proper oversight. Worse, they may overestimate management’s ability to mitigate AI risks. You should encourage board training through NACD, Carnegie Mellon’s CERT program, or trusted third-party advisors. Education is no longer optional; it may well become a fiduciary duty.

7. Governance Structures Must Evolve

Some companies are considering dedicated AI committees, while others integrate AI oversight into existing audit or risk committees. Either way, boards need clear lines of accountability. The questions boards should be asking management are listed extensively in the handbook, including:

  • How are competitors using AI?
  • Do we need a Chief AI Officer?
  • What is our exposure if adversaries use AI against us?
  • Have we segregated training data to know its provenance?
  • Are our policies aligned with the EU AI Act’s risk classifications?

Start these conversations today. Board agendas must include AI oversight as a recurring topic.

Building a Compliance Playbook for AI

The compliance professional can translate the NACD’s recommendations into a practical playbook for your program, incorporating the following key concepts.

  • Embed AI governance early – Don’t bolt compliance onto AI projects after the fact. Integrate governance into design and procurement stages.
  • Adopt a human-centered AI approach – Ensure AI is aligned with corporate values and ethical principles, not just efficiency goals.
  • Use risk quantification – Treat AI risk like any other enterprise risk: quantify, compare, and integrate into ERM frameworks.
  • Demand accountability – Require clear responsibility for AI oversight, whether it sits with the Chief Compliance Officer, CIO, or a new Chief AI Officer role.
  • Engage regulators early – Use disclosure and transparency as tools to build trust with regulators and stakeholders.

The Handbook makes clear that AI in cybersecurity is not just a technology issue. It is an enterprise risk, a boardroom issue, and a compliance mandate. For compliance professionals, this means you must step into the AI oversight conversation.

As with the FCPA decades ago, regulators and stakeholders will expect companies to transition from a reactive to a proactive approach. The time to build frameworks, train directors, and embed oversight is now. AI, like every disruptive technology before it, will reward the prepared and punish the complacent. Compliance professionals are uniquely positioned to bridge the technical and governance divide. By applying lessons from the NACD handbook, we can ensure that AI becomes not a weapon for criminals but a force multiplier for integrity, trust, and resilience in the digital age.

Categories
Great Women in Compliance

Great Women in Compliance – Compliance as a Product Differentiator with Susan Cooper

In today’s episode, Lisa Fine speaks with Susan Cooper, Vice President of Regulatory Compliance Programs and Global Data Protection Officer at Meta, discussing her approach to compliance in the technology sector.  Susan discusses the path that led her to her current role, which is unique as her team is embedded within Meta’s product organization.

Being part of the product development team allows compliance to work hand-in-hand with product teams through a centralized risk review process, which assesses privacy, security, content safety, and financial risks for over 1,400 products per month.

Susan also discusses how Meta utilizes “privacy-aware infrastructure,” embedding compliance requirements into standardized, reusable code components that can be used throughout the organization. She also provides some advice for compliance professionals, particularly those who are interested in technology companies, including:

  • Learn to speak “tech” if you want to work in tech compliance;
  • Get to know your stakeholders and their concerns;
  • Keep a growth mindset, staying willing to ask questions and learn constantly; and
  • Embrace AI and automation tools to scale your work, and keep learning about them.
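The idea of embedding compliance requirements into standardized, reusable code components can be sketched in miniature. Everything below is an illustrative assumption: the purpose registry, the field lists, and the `requires_purpose` decorator are invented for this example and are not Meta's actual infrastructure.

```python
import functools

# Hypothetical registry mapping a declared business purpose to the data
# fields that purpose is entitled to see.
ALLOWED_PURPOSES = {
    "ads_measurement": {"country", "age_band"},
    "security_review": {"country", "age_band", "email"},
}

def requires_purpose(purpose: str):
    """Reusable compliance component: a decorator that refuses unregistered
    purposes and strips unauthorized fields before the function runs."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(record: dict):
            allowed = ALLOWED_PURPOSES.get(purpose)
            if allowed is None:
                raise PermissionError(f"unregistered purpose: {purpose}")
            filtered = {k: v for k, v in record.items() if k in allowed}
            return fn(filtered)
        return inner
    return wrap

@requires_purpose("ads_measurement")
def measure(record: dict) -> list:
    return sorted(record)  # fields actually visible to this code path

# The email field is filtered out before the function body ever runs.
print(measure({"country": "US", "age_band": "25-34", "email": "x@y.z"}))
# -> ['age_band', 'country']
```

The design point is that the privacy check lives in one shared component rather than being reimplemented (and forgotten) in each product's code.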
Categories
Upping Your Game

Upping Your Game – Leveraging Behavioral Analytics in Compliance: A Proactive Approach

In February, the Trump Administration suspended FCPA investigations and enforcement actions. Many compliance professionals have since wondered what this will mean for corporate compliance programs going forward. Hui Chen challenged compliance professionals with the statement, “It’s time to up your game.”

This podcast series, sponsored by Ethico and co-hosted with Ethico co-CEO Nick Gallo, hopes to meet Hui Chen’s challenge. We will discuss how compliance professionals can ‘Up Their Game’ by utilizing currently existing Generative AI (GenAI) tools to enhance their compliance programs significantly. As compliance professionals, it is crucial to recognize that this moment is not merely about incremental improvements but about elevating our profession to an entirely new level of effectiveness, efficiency, and organizational value.

Tom Fox and Nick Gallo explore the role of behavioral analytics in transforming cultural assessments and compliance programs. They discuss how AI and data analytics can help compliance officers transition from a reactive to a proactive approach, thereby enhancing decision-making and promoting positive behavior within organizations. The conversation covers the importance of continuously assessing culture, the challenges of measuring it, and the necessity of thinking in bets—much like a skilled poker player. Tune in to learn how to make smarter, more agile decisions in the compliance realm, and stay ahead of potential issues before they escalate.

Key highlights:

  • Behavioral Analytics in Compliance
  • The Importance of Measuring Culture
  • Evolution of Data Analytics in Compliance
  • Strategies for Gathering Behavioral Data

Resources:

Upping Your Game-How Compliance and Risk Management Move to 2030 and Beyond on Amazon.com

Nick Gallo on LinkedIn

Ethico

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
AI Today in 5

AI Today in 5: September 23, 2025, The $100bn Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI drawn from the business world, compliance, ethics, risk management, leadership, or general interest. So start your day, sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5.

Top AI stories include:

  • Nvidia invests $100 billion in OpenAI. (NYT)
  • What is ‘human agency’? (FT)
  • AI investment as the new diplomacy. (Bloomberg)
  • UN wants Red Lines around AI. (NBC News)
  • Compliance in the age of AI. (Forbes)

For more information on the use of AI in compliance programs, see my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.

Categories
AI Today in 5

AI Today in 5: September 22, 2025, The Chaos of Consent Episode

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI drawn from the business world, compliance, ethics, risk management, leadership, or general interest. So start your day, sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5.

Top AI stories include:

  • JFrog advances investment compliance. (Simply Wall St)
  • Using AI to navigate consent. (MarTech)
  • Making risk management a competitive advantage. (KPMG)
  • Using AI for cybersecurity. (IBM)
  • The AI race is like the Space Race. (Bloomberg)

For more information on the use of AI in compliance programs, see my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.