Categories
AI Today in 5

AI Today in 5: January 13, 2026, The Ethical AI in Africa Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, from the Compliance Podcast Network.

Top AI stories include:

  1. Ethical AI in Africa. (TechinAfrica)
  2. Who should regulate Healthcare AI? (Harvard Gazette)
  3. Compliance AI matters more than Hype AI. (FinTech Global)
  4. GRC and GenAI. (FinTech Global)
  5. Key issues for compliance in Trump’s AI Order. (National Law Review)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Blog

Ethical AI Is Built in Procurement, Not Posters

In the ongoing conversation about AI, companies are increasingly highlighting their ethical principles. They publish responsible AI statements, share aspirational values, and post impressive slide decks. However, any experienced compliance professional knows that ethics does not live in posters. It lives in systems. It lives in contracts. It lives in the infrastructure choices that decide who holds power, who can be audited, and who is accountable when things go wrong.

When you pull back the curtain on most modern AI deployments, you find a hard truth. Ethical outcomes depend less on high-level values and more on the mundane details of compute access, data governance, vendor resilience, and transparency. Those details are not glamorous, but they are decisive. They are also exactly where the compliance function must lead. The companies that treat AI as a technical problem will struggle. The companies that understand AI as a governance problem will succeed. Compliance should be at the center of that governance effort.

The Infrastructure Beneath Ethical AI

The most important element of ethical AI is the part no one sees. The infrastructure decisions made today are the ethical outcomes of tomorrow. Consider four core factors that determine the integrity of an AI system long before it begins making predictions.

a. Compute Access

The amount of compute you grant, the regions in which it can be used, and the failover plan for outages are not merely IT decisions. They are decisions about fairness, safety, and continuity. If only certain business units have access to the most powerful models, you have created inequities inside your own walls. If you cannot maintain operations during a provider outage, you have created a resilience gap that regulators will notice.

b. Data Governance

AI systems amplify the strengths and the flaws of your data practices. Data lineage, retention schedules, classification levels, and access controls determine who can see what, when, and under what safeguards. If the data is flawed, every model output built on it is flawed. Compliance already governs data privacy, confidentiality, and use restrictions. AI raises the stakes.

c. Vendor Resilience

The more an organization invests in a single AI provider, the more dependent it becomes on that provider’s risk posture. Multi-cloud strategies, vendor exit rights, and enforceable SLAs are not operational niceties. They are governance tools to prevent concentration risk. Compliance has long experience managing third-party risk; AI vendors are simply the newest category.

d. Model Operations

Model versioning, approval workflows, rollback procedures, and audit trails determine how quickly an organization can detect harm and correct it. These operational controls map almost perfectly onto compliance best practices. They reflect the same principles that underpin any effective risk management program: evidence, traceability, and documented decision-making.
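As an illustration, these operational controls can be sketched in code. The following is a minimal, hypothetical in-house registry (the `ModelRegistry` name and its methods are assumptions for this sketch, not a real library) showing versioning, an approval gate before deployment, rollback, and an append-only audit trail:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRegistry:
    """Hypothetical minimal registry: approval, deployment, rollback, audit."""
    versions: list = field(default_factory=list)   # approved version ids, in order
    active: str = ""                               # currently deployed version
    audit_log: list = field(default_factory=list)  # append-only record of decisions

    def _record(self, action, version, actor):
        # Every governance action is logged with a timestamp and an owner.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "action": action, "version": version, "actor": actor,
        })

    def approve(self, version, actor):
        self.versions.append(version)
        self._record("approve", version, actor)

    def deploy(self, version, actor):
        # Approval workflow: only approved versions may be deployed.
        if version not in self.versions:
            raise ValueError(f"{version} has not been approved")
        self.active = version
        self._record("deploy", version, actor)

    def rollback(self, actor):
        # Revert to the most recently approved earlier version.
        idx = self.versions.index(self.active)
        if idx == 0:
            raise RuntimeError("no earlier version to roll back to")
        self.active = self.versions[idx - 1]
        self._record("rollback", self.active, actor)
```

The point of the sketch is the shape of the controls, not the implementation: every state change is gated and leaves evidence, which is exactly the traceability an auditor or regulator will ask for.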

Where Compliance Must Lead

Most organizations underestimate the extent to which AI governance requires the same discipline found in mature compliance programs. The compliance function knows how to operationalize policies, create audit trails, and embed accountability. These strengths translate directly into AI. Below are the areas where compliance should play the lead role.

1. Embedding Ethical Standards Into Procurement

Ethical AI begins with ethical procurement. RFPs should require model documentation, bias testing, data ownership guarantees, audit logs, content filtering, and evidence of secure development practices. A vendor that cannot demonstrate its internal controls will not protect your ethical commitments. Compliance is uniquely positioned to identify those red flags.

2. Contracting for Power, Not Promises

Every compliance professional knows that a vendor promise without contractual force is aspiration, not assurance. AI contracts must include termination for harm, financially meaningful remedies, data portability, and clear assignment of responsibilities. Regulators will expect companies to demonstrate that they negotiated governance into their agreements.

3. Designing for Resilience

AI systems break in unfamiliar and sometimes spectacular ways. Multi-region deployment, validated failover paths, and regular stress testing are mandatory. Resilience is an ethical value because it protects customers, employees, and stakeholders from foreseeable harm. Compliance should insist on documented resilience planning as part of deployment approval.
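A validated failover path can be sketched simply. This illustrative wrapper (the function name, provider list format, and log shape are all assumptions for the sketch) tries an ordered list of providers and records which one served each request, so the failover behavior itself can be audited and stress-tested:

```python
def call_with_failover(providers, prompt, log):
    """Try each (name, callable) provider in order; log the path taken.

    providers: ordered list of (name, callable) pairs, primary first.
    log: a list the caller supplies; each attempt appends an audit entry.
    """
    errors = []
    for name, call in providers:
        try:
            result = call(prompt)
            log.append({"provider": name, "status": "ok"})
            return result
        except Exception as exc:
            # Record the failure and fall through to the next provider.
            errors.append((name, repr(exc)))
            log.append({"provider": name, "status": "failed"})
    raise RuntimeError(f"all providers failed: {errors}")
```

In a stress test, the primary is deliberately made to fail and the log is checked to confirm the secondary actually carried the traffic, which turns "we have a failover plan" into documented evidence.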

4. Governing the Data Layer

Data minimization, differential access, immutable lineage, and standard retention schedules must be embedded across AI use cases. AI does not excuse a company from its privacy or data-governance obligations. It heightens them. Compliance should ensure that every AI initiative begins with a data governance review before a single line of code is written.

5. Operationalizing Oversight

AI oversight is not a once-a-year assessment. It is a living discipline. Compliance should push for model risk reviews, red-team exercises, change-control approvals, and clearly defined escalation pathways. When issues arise, there must be a time-boxed rollback plan in place. Clearly assigned control owners must be accountable for results.

6. Measuring What Matters

Without metrics, oversight is performance art. Companies should measure false positives and false negatives for each AI use case, especially across protected classes. They should track incident rates, drift detection outcomes, model approval times, and vendor SLA performance. These indicators form a dashboard that demonstrates whether AI governance is real or merely decorative.
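The per-group error-rate metric described above can be computed with a short helper. This is a sketch under assumptions (the `(group, y_true, y_pred)` record format with 0/1 labels is illustrative, not a prescribed schema):

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute false-positive and false-negative rates per group.

    records: iterable of (group, y_true, y_pred) tuples with 0/1 labels,
    where group might be a protected class or an AI use case.
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_true == 1:
            c["pos"] += 1
            if y_pred == 0:
                c["fn"] += 1          # missed a true positive
        else:
            c["neg"] += 1
            if y_pred == 1:
                c["fp"] += 1          # flagged a true negative
    return {
        g: {"fpr": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "fnr": c["fn"] / c["pos"] if c["pos"] else 0.0}
        for g, c in counts.items()
    }
```

Comparing these rates across groups is what surfaces disparate impact; a dashboard built on numbers like these is what separates real governance from decoration.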

7. Funding Ethics as an Operational Requirement

Ethical AI is not free. It requires a budget for monitoring, red teaming, data curation, and external verification. Compliance should push for these resources and make the case that ethics is a form of operational continuity. A company that cannot demonstrate that it has funded its governance model will struggle in any regulatory examination.

8. Building Exit Capability

Most companies underestimate how difficult it is to transition away from an AI vendor. Compliance should require that every material AI system have an exit plan that includes timelines, data-migration standards, and a documented process to ensure continuity. Only an exit tested under realistic conditions qualifies as a real control.

9. Clarifying Accountability

AI governance fails when accountability is diffuse. Every operational risk must have an owner. Compliance should map each AI risk to a responsible executive and require quarterly reviews. Regulators do not want to know who wrote the policy. They want to know who owns the risk.

10. Training the Front Line

AI governance is not the exclusive domain of data scientists. Product teams, procurement staff, and engineers must understand their responsibilities. Compliance should provide scenario-based training and reward early escalation. Culture determines how quickly issues surface, and AI issues must surface fast.

Closing Thoughts

Ethical AI is not an aspirational project. It is a systems problem, a contracting problem, a data problem, and an accountability problem. Compliance has the experience and discipline to lead the organization through these challenges. When procurement, contracts, and architecture embody the company’s values, ethical outcomes follow. When they do not, no principle statement on a website will save you.

Categories
Compliance Week Conference Podcast

Compliance Week 2024 Speaker Preview Podcasts – Nakis Urfi on Ethical AI in Compliance

In this episode of the Compliance Week 2024 Preview Podcasts series, Nakis Urfi discusses his workshop at Compliance Week 2024, “Responsible and Ethical AI.” Some of the issues he will discuss in this podcast and his presentation are:

  • Building an AI program at your organization;
  • What are the relevant regulations and movements globally around ethical AI; and
  • Why the Compliance Week conference stands out in the compliance arena.

I hope you can join me at Compliance Week 2024. This year’s event will be held April 2-4 at The Westin Washington, DC, Downtown. The line-up for this year’s event is first-rate, with some of the top ethics and compliance practitioners around.

Gain insights and make connections at the industry’s premier cross-industry national compliance event, offering knowledge-packed, accredited sessions and take-home advice from the most influential leaders in the compliance community. Back for its 19th year, Compliance Week brings together 500+ compliance, ethics, legal, and audit professionals who gather face-to-face to benchmark best practices and gain the latest tactics and strategies to enhance their compliance programs. At the event, you can:

  • Network with your peers, including C-suite executives, legal professionals, HR leaders, and ethics and compliance visionaries.
  • Hear from 80+ respected cross-industry practitioners, including CEOs, CCOs, regulators, and federal officials, to help inform and shape the strategic direction of your enterprise risk management program.
  • Hear directly from panels on leadership, fraud detection, confronting regulatory change, abiding by cross-border rules and regulations, and the always favorite fireside chats.
  • Bring actionable takeaways to your program from various session types, including cyber, AI, compliance, board obligations, data-driven compliance, and many others, for you to listen, learn, and share.
  • Gain the information, strategy, and tactics to transform your organization and career by connecting ethics to business performance through process augmentation and data visualization.

I hope you can join me at the event. For information on the event, click here. As an extra benefit to listeners of this podcast, Compliance Week is offering a $200 discount on the registration price: enter the discount code TFOX2024 at registration.

The Compliance Week 2024 Preview Podcast series is a production of the Compliance Podcast Network. Compliance Week is the sponsor of this series.

Categories
Compliance and AI

Mastering ChatGPT: Part 2 – ChatGPT and Ethical AI

Welcome to a special five-part podcast series on mastering ChatGPT. My special guest throughout this journey is Larry Roberts, an accomplished professional with over 25 years of multifaceted experience. Having begun his career in corporate training, he made a remarkable shift into IT, contributing greatly as a Business Intelligence Analyst. His proficiency in harnessing predictive analytics for inventory and sales projections led him into AI. In 2021, Larry pivoted to podcasting and content creation, and he has been fully engrossed with ChatGPT since its release in November 2022. His insights into data models and large language models, and his overall passion for AI, are certain to illuminate any forum.

In Episode 2, we look at the ethical considerations of AI models such as ChatGPT.

In the age of AI, the ethical consequences of this transformative technology present pressing concerns for developers and industry professionals alike. In this episode, Tom and Larry shed light on the myriad ethical issues surrounding AI, from securing data privacy and GDPR compliance to mitigating the misuse of AI tools and addressing job displacement. There is a wealth of information and best practices to guide your ethical approach to AI, ensuring transparency, user control, and adaptability in a rapidly evolving landscape. Embark on this journey with us to ensure that the power of AI is harnessed responsibly, respecting every stakeholder’s rights and privacy.

In this episode, you will be able to:

  • Discover the crucial ethical questions surrounding AI and ChatGPT.
  • Uncover hidden truths about data privacy concerns and your control options.
  • Explore the significant role of GDPR and the collective effort required for privacy.
  • Understand how to combat the misuse of AI instruments through user collaboration.
  • Learn about AI ethics and why transparency, bias evaluation, and human supervision are paramount.

Key Highlights:    

  • Data Privacy
  • AI and Disinformation
  • Human in the Loop

Resources:

Larry Roberts

Larry Roberts on LinkedIn

Red Hat Media

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn