Categories
AI Today in 5

AI Today in 5: January 23, 2026, The Greatest AI Challenge Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories about AI from the business world, compliance, ethics, risk management, leadership, or general interest.

Top AI stories include:

  • South Korea adds new AI regulations. (Reuters)
  • Vietnam updates IP & AI law. (Rouse)
  • AI’s greatest challenge is managerial, not technical. (Bloomberg)
  • With AI, compliance data is more valuable than ever. (FinTechGlobal)
  • AI assists retailers in stopping return fraud. (CBS News)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Innovation in Compliance

Innovation in Compliance: Transforming from Hierarchy to High Performance: Governance and AI in 2026

Innovation occurs across many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox welcomes guests Bill Sanders, Olivia Storelli, and Andrew Stevens to explore the theme ‘From Hierarchy to High Performance’ in the context of AI and corporate governance.

They take a deep dive into the critical role of AI governance, highlighting its importance for accountability and competitive advantage, and stress the need for decentralized, automated governance to ensure fair and unbiased outcomes. The discussion also covers the interplay between leadership, accountability, and culture in achieving AI success, and outlines the three primary functions of AI: customer relationships, operations, and business models. The episode emphasizes the need for execution over ambition for AI value creation and addresses how legal and compliance professionals can keep pace with the rapidly changing business environment through AI.

Key highlights:

  • The Importance of AI Governance
  • Distributed Governance and Compliance
  • AI’s Impact on Business Models and Operations
  • Decentralization and High Performance

Resources:

Download the AI Executive Whitepaper:

Text the word PLAYBOOK to 415.960.1161. 

or

Visit https://whitepaper.download/

  • Websites

https://roeblingstrauss.com/

https://www.sakurasky.com/

  • LinkedIn

LinkedIn: Bill Sanders

LinkedIn: Olivia Storelli

LinkedIn: Andrew Stevens

Books:

Innovation in Compliance was recently ranked 4th among Risk Management podcasts by 1,000,000 Podcasts.

Categories
Compliance and AI

Compliance and AI – Transforming Cloud Investments: The Role of AI Governance

What is the intersection of AI and compliance? What about Machine Learning? Are you using ChatGPT? These questions are just three of the many we will explore in this cutting-edge podcast series, Compliance and AI, hosted by Tom Fox, the award-winning Voice of Compliance. Today, Tom looks at AI and governance with three guests: Bill Sanders, Olivia Storelli, and Andrew Stevens.

Bill Sanders, Olivia Storelli, and Andrew Stevens are leading voices in the discourse on AI governance and guardrails, each bringing a unique perspective. Bill, a leader in brand management and consulting, views AI governance as essential for leveraging AI’s potential, emphasizing the need for decentralized decision-making and oversight to ensure safety and strategic foresight. Olivia, CEO of Sakura Sky, underscores the importance of aligning strategy with practical technology execution, advocating for governance as a means to achieve rapid value while maintaining safety and innovation. Andrew, an expert in cloud technology, highlights the need for governance to manage AI’s risks and liabilities, calling for executive leadership to define permissible data use and decision-making to foster a robust, accountable AI implementation. Together, they stress the importance of clear guidelines, organizational readiness, and leadership involvement in navigating the complexities of AI adoption and ensuring its safe and effective integration into business operations.

Key highlights:

  • AI governance is crucial for safe and efficient deployment of artificial intelligence systems in organizations.
  • Collaboration and a mindset shift towards compliance professionals as enablers are essential for safe AI adoption.
  • AI compliance impacts trust, fairness, and security within organizations.
  • Leadership, accountability, and culture are key to success in AI projects.
  • A phased approach with executive sponsorship is crucial for implementing the AI roadmap.

Resources:

Download the AI Executive Whitepaper:

Text the word PLAYBOOK to 415.960.1161. 

or

Visit https://whitepaper.download/

  • Websites

https://roeblingstrauss.com/

https://www.sakurasky.com/

  • LinkedIn

LinkedIn: Bill Sanders

LinkedIn: Olivia Storelli

LinkedIn: Andrew Stevens

Books

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Innovation in Compliance

Innovation in Compliance: 10+1 Commandments: A Moral Code for AI Ethics in Business

Innovation comes in many forms, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom welcomes Cristina DiGiacomo, founder of 10P1 Inc.

Cristina has an extensive background in communications, business, and practical philosophy. She introduces her ‘10+1 Commandments,’ a set of ethical guidelines for human interaction with artificial intelligence. They discuss the compelling need to integrate these principles into business compliance and governance frameworks. The commandments aim to provide a high-level, universal, and perpetual moral code that addresses the risks and ethical considerations of AI in the corporate world. Cristina emphasizes the importance of maintaining ethical AI practices amidst the evolving regulatory landscape.

Key highlights:

  • Philosophy in Everyday Life
  • Ancient Wisdom and Modern Application
  • The 10+1 Commandments Explained
  • Applying the Commandments in Business
  • Governance and Ethical AI

Resources:

Cristina DiGiacomo on LinkedIn

Website-10+1 

Categories
FCPA Compliance Report

FCPA Compliance Report – Nicole Di Schino on Harnessing AI for Compliance: Governance, Risks, and Best Practices

Welcome to the award-winning FCPA Compliance Report, the longest-running podcast in compliance. In this episode, Tom welcomes Nicole Di Schino, Principal Compliance Services Consultant at Diligent’s Spark Compliance Group, to discuss how best to harness AI for your compliance regime through 2026 and beyond.

Nicole and Tom discuss the critical importance of AI governance, compliance, and modern GRC. They cover practical steps for developing comprehensive compliance programs, emphasizing the necessity for AI risk assessments, the establishment of AI governance committees, and the implementation of human oversight in AI processes. Nicole highlights the intrinsic risks of AI, including privacy concerns and AI bias, and shares her personal experiences with AI’s impact in educational settings. Tom underscores the role of compliance education, advocating for the broader view of compliance as an ambassadorial and academic function. This session also explores the integration of AI into compliance workflows and the essential role of board and committee oversight.

 

Resources:

Nicole Di Schino on LinkedIn

Diligent Website

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
AI Today in 5

AI Today in 5: December 4, 2025, The Microsoft Blips Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories about AI from the business world, compliance, ethics, risk management, leadership, or general interest.

Top AI stories include:

  1. Does AI portend the end of the law/consulting firm pyramid? (FT)
  2. Strengthening AI strategies with proactive compliance. (WSJ)
  3. Microsoft stock dips on the news. (CNBC)
  4. Salesforce touts AI adoption. (Bloomberg)
  5. Strong AI governance can foster innovation. (Bloomberg)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Blog

Embedding Ethics in the AI Lifecycle

Embedding ethics into the AI lifecycle is not an abstract exercise. It is a practical, repeatable discipline that mirrors the work of corporate compliance. It requires a structured approach, clear accountability, and documented evidence of good governance. Most importantly, it requires compliance professionals to be at the table from the very beginning. Ethics in AI cannot be retrofitted at the end of a development cycle. It must be built in from step one.

Today, I want to examine the ethical checkpoints at each stage of the AI lifecycle and highlight where corporate compliance functions must lead. The goal is to help you build a stronger, more resilient program while demonstrating to regulators and stakeholders that your AI governance is real and operational.

Ethics in Data Sourcing

All ethical AI begins with ethical data. You cannot build a responsible model on a flawed or contaminated foundation. Data sourcing is the earliest point at which compliance becomes critical. First, ensure that the lawful basis, ownership, and rights of use are fully documented for every dataset. This is both an ethical issue and a regulatory one. Next, require a structured review for Personally Identifiable Information (PII) and Protected Health Information (PHI). If the dataset contains sensitive personal information, ensure minimization and purpose limitation principles are applied.

Ethical review also requires looking beyond legality. You must ask a deeper question: does this data reflect the populations on whom the model will act? If certain groups are underrepresented or misrepresented, there is a direct ethical and operational risk. This is where compliance can partner with data teams to conduct bias hotspot reviews and remediation before training begins.
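
To make these checkpoints concrete, here is a minimal Python sketch of the kind of pre-training review a compliance team might run alongside the data team. The regex patterns, field names, and the representation floor are illustrative assumptions; a production program would rely on a vetted PII/PHI scanning tool and a documented fairness methodology.

```python
import re
from collections import Counter

# Illustrative patterns only; real programs should use a vetted PII/PHI detection tool.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def pii_scan(records: list[dict]) -> dict[str, int]:
    """Count records whose free-text fields match simple PII patterns."""
    hits = Counter()
    for record in records:
        for value in record.values():
            if isinstance(value, str):
                for label, pattern in PII_PATTERNS.items():
                    if pattern.search(value):
                        hits[label] += 1
    return dict(hits)

def representation_gaps(records: list[dict], field: str, floor: float = 0.05) -> list[str]:
    """Flag groups falling below a minimum share of the dataset (a bias hotspot signal)."""
    counts = Counter(r.get(field, "unknown") for r in records)
    total = sum(counts.values())
    return [group for group, n in counts.items() if n / total < floor]

if __name__ == "__main__":
    sample = [
        {"note": "Customer asked about pricing", "region": "EMEA"},
        {"note": "Contact me at jane.doe@example.com", "region": "APAC"},
        {"note": "Renewal confirmed", "region": "EMEA"},
    ]
    print("PII hits:", pii_scan(sample))
    print("Underrepresented groups:", representation_gaps(sample, "region", floor=0.4))
```

The point is not these specific checks but that some documented check runs, and is reviewed, before training begins.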

Ethics in Model Training

Once data enters the training process, the focus shifts to how the model is built. Ethical model training emphasizes transparency, reproducibility, and clear accountability. For compliance professionals, this is a familiar structure. At the beginning of training, require a Model Card version zero. This document describes the model’s intended purpose, its users, and its limitations. Think of it as the model’s job description and risk profile. Without this baseline documentation, the organization has no ethical framework for evaluating the model later.
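
As a sketch of what that baseline document can look like in practice, the following Python snippet records a Model Card version zero as structured data that can be stored alongside the model. The field names and example values are assumptions for illustration; align the template with whatever your organization or regulator expects.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Model Card version zero: the model's job description and risk profile."""
    name: str
    version: str
    intended_purpose: str
    intended_users: list[str]
    out_of_scope_uses: list[str]
    known_limitations: list[str]
    risk_owner: str
    approvals: list[str] = field(default_factory=list)

# Hypothetical example values for illustration only.
card_v0 = ModelCard(
    name="claims-triage-assistant",
    version="0.1.0",
    intended_purpose="Prioritize incoming insurance claims for human review.",
    intended_users=["claims adjusters"],
    out_of_scope_uses=["final claim denial without human review"],
    known_limitations=["trained only on English-language claims"],
    risk_owner="VP, Claims Operations",
)

# Persist the card alongside the model artifacts so later reviews have a baseline.
print(json.dumps(asdict(card_v0), indent=2))
```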

Compliance should also ensure the organization maintains a training bill of materials. Regulators and external auditors will expect clarity on what data, tools, seeds, configurations, and models fed the system. Ethical governance means that if something goes wrong, the organization can retrace its steps and identify the source. Finally, ensure that risks identified during training are assigned owners in the risk register. Ethical accountability requires clear signatures, not vague acknowledgment.
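
A training bill of materials can be as simple as a structured manifest committed with the model. The sketch below, with assumed names and values, shows the shape of such a record, including risk register entries with named owners.

```python
import json
import pathlib

# Illustrative manifest; every entry, name, and value here is an assumption.
training_bom = {
    "model": "claims-triage-assistant:0.1.0",
    "base_model": "example-open-weights-model",
    "datasets": [
        {"name": "claims_2023_curated", "license": "internal",
         "sha256": "<hash of the dataset file>"},
    ],
    "training_config": {"seed": 42, "epochs": 3, "learning_rate": 2e-5},
    "tools": ["pytorch 2.x", "internal-feature-store"],
    "risk_register": [
        {"id": "AI-001", "risk": "underrepresentation of non-English claims",
         "owner": "Head of Data Science", "status": "open"},
    ],
}

# Commit this file with the training run so the organization can retrace its steps.
pathlib.Path("training_bom.json").write_text(json.dumps(training_bom, indent=2))
```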

Ethics in Validation and Testing

No model should progress to deployment without a rigorous validation and ethical safety review. At this stage, you are no longer asking whether the model works. You are asking whether the model works in a way that is fair, safe, compliant, and aligned with corporate values. Compliance professionals should insist on structured red teaming for safety, privacy leakage, and discriminatory outputs. Ethical governance requires testing for misuse and unintended consequences, not simply functional performance.

Equally important is the articulation of pass/fail thresholds aligned with the organization’s risk tolerance. If a model shows drift toward unethical outcomes during testing, the organization must be prepared to pause or rework it. Ethics without enforcement is merely a suggestion. Legal review is also essential at this stage. Intellectual property rights, export controls, sector regulations, and customer contract obligations must all be considered. The organization’s ethical responsibility extends to ensuring its models do not inadvertently violate the law or expose users to regulatory scrutiny.
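
One way to make the pass/fail thresholds just described enforceable rather than aspirational is to encode them as a release gate that must pass before deployment. This is a minimal sketch; the metric names and threshold values are assumptions and should come from the organization's documented risk tolerance.

```python
# Illustrative thresholds; derive them from your documented risk tolerance.
RELEASE_THRESHOLDS = {
    "accuracy": 0.90,          # minimum acceptable accuracy
    "max_group_gap": 0.05,     # largest allowed performance gap between groups
    "privacy_leak_rate": 0.0,  # share of red-team prompts that extracted training data
}

def release_gate(metrics: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (passed, reasons). A failing gate should pause deployment, not just log."""
    failures = []
    if metrics["accuracy"] < RELEASE_THRESHOLDS["accuracy"]:
        failures.append(f"accuracy {metrics['accuracy']:.2f} is below the floor")
    if metrics["max_group_gap"] > RELEASE_THRESHOLDS["max_group_gap"]:
        failures.append(f"group performance gap {metrics['max_group_gap']:.2f} is too large")
    if metrics["privacy_leak_rate"] > RELEASE_THRESHOLDS["privacy_leak_rate"]:
        failures.append("privacy leakage observed in red-team testing")
    return (not failures, failures)

passed, reasons = release_gate(
    {"accuracy": 0.93, "max_group_gap": 0.08, "privacy_leak_rate": 0.0}
)
print("PASS" if passed else f"FAIL: {reasons}")
```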

Ethics in Deployment

Deployment is the point at which AI moves from the laboratory to real-world use. Ethical deployment requires safeguards that prevent inappropriate access, misuse, confusion, or misinterpretation. Role-based and environment-based access controls are essential. No one should have access to modify or use a model unless there is a documented business justification. Ethical governance also requires that user disclosures clearly explain the model’s capabilities, limitations, and data use practices. Users should never be misled into believing a system can do something it cannot.

Canary rollouts and automated rollback mechanisms are additional ethical guardrails. They allow organizations to detect unintended consequences early and reverse course before harm spreads widely. Compliance should also ensure that third-party vendors and service providers follow equivalent ethical and governance controls. You are ultimately responsible for the ethical risks you outsource.
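
The sketch below shows the basic logic of a canary rollout with an automated rollback trigger. The traffic share, error threshold, and model names are assumptions; the compliance-relevant point is that the rollback rule is explicit and documented, not discretionary.

```python
# Illustrative canary policy; percentages, thresholds, and model names are assumptions.
CANARY_SHARE = 0.05          # fraction of traffic sent to the new model
ROLLBACK_ERROR_RATE = 0.02   # error rate that triggers an automatic rollback

def route_request(user_id: int) -> str:
    """Deterministically send a small slice of users to the canary model."""
    return "model_v2_canary" if (user_id % 100) < CANARY_SHARE * 100 else "model_v1_stable"

def should_roll_back(canary_errors: int, canary_requests: int) -> bool:
    """Roll back as soon as the canary's observed error rate exceeds the threshold."""
    if canary_requests == 0:
        return False
    return canary_errors / canary_requests > ROLLBACK_ERROR_RATE

print(route_request(3), route_request(42))                      # model_v2_canary model_v1_stable
print(should_roll_back(canary_errors=3, canary_requests=120))   # True: 2.5% > 2%
```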

Ethics in Monitoring

Ethical oversight does not end when a model goes live. Ongoing monitoring is essential. Models drift. Data shifts. User populations change. A model that was ethical yesterday can become problematic tomorrow. Ethical monitoring means tracking for bias, accuracy degradation, safety issues, and misuse. It also implies routing alerts not only to engineering, but directly to compliance and risk. Ethics is not solely a technical matter. It is a governance responsibility.
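
As a simple illustration of routing alerts beyond engineering, the following sketch compares current outcome rates against a recorded baseline and notifies both engineering and compliance when they drift apart. The group names, baseline figures, drift threshold, and alert channels are all assumptions.

```python
# Illustrative baseline and threshold; real values come from the validation record.
BASELINE_APPROVAL_RATE = {"group_a": 0.62, "group_b": 0.60}
DRIFT_THRESHOLD = 0.05

def notify(channel: str, message: str) -> None:
    print(f"[{channel}] {message}")  # stand-in for email, ticketing, or GRC integration

def check_drift(current_rates: dict[str, float]) -> None:
    for group, baseline in BASELINE_APPROVAL_RATE.items():
        drift = abs(current_rates.get(group, baseline) - baseline)
        if drift > DRIFT_THRESHOLD:
            message = f"Approval rate for {group} drifted by {drift:.2f} from baseline."
            notify("engineering", message)
            notify("compliance", message)  # governance sees the alert, not just IT

check_drift({"group_a": 0.52, "group_b": 0.61})
```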

Incident response is another ethical requirement. Organizations must maintain a defined, repeatable process for identifying, containing, and resolving AI-related harms. If something goes wrong, you must be prepared to act quickly and transparently.

Ethics in Governance

Finally, ethics must be embedded in the organization’s AI governance structure. Ethical AI cannot depend solely on goodwill or ad hoc decision-making. Clear role definitions, evidence documentation, and leadership engagement must support it. A formal Responsible, Accountable, Consulted, and Informed (RACI) structure for each lifecycle stage ensures accountability. Board-level reporting ensures visibility. Annual independent audits ensure credibility. Ethical AI requires not only doing the right thing but also demonstrating it.
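
A RACI structure does not need elaborate tooling to be real; it needs to be written down and checked. The sketch below records an illustrative RACI map for the lifecycle stages discussed above and verifies that every stage has an accountable owner. The roles and assignments are assumptions, not a recommended org design.

```python
# Illustrative RACI map; stages, roles, and names are assumptions.
RACI = {
    "data_sourcing":  {"R": ["Data Engineering"], "A": "Chief Data Officer",
                       "C": ["Compliance", "Privacy"], "I": ["Internal Audit"]},
    "model_training": {"R": ["Data Science"], "A": "Head of Data Science",
                       "C": ["Compliance"], "I": ["Risk"]},
    "validation":     {"R": ["Model Risk"], "A": "Chief Risk Officer",
                       "C": ["Legal", "Compliance"], "I": ["Board AI Committee"]},
    "deployment":     {"R": ["Platform Engineering"], "A": "CTO",
                       "C": ["Security", "Compliance"], "I": ["Business Owner"]},
    "monitoring":     {"R": ["ML Ops"], "A": "Chief Compliance Officer",
                       "C": ["Data Science"], "I": ["Board AI Committee"]},
}

def check_accountability(raci: dict) -> list[str]:
    """Every lifecycle stage needs a named accountable owner."""
    return [stage for stage, roles in raci.items() if not roles.get("A")]

missing = check_accountability(RACI)
print("Stages missing an accountable owner:", missing or "none")
```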

As with all compliance disciplines, documentation is your first line of defense. Maintain Model Cards, testing evidence, monitoring logs, and decision memos. Ethical governance cannot be proven without records. The work is ongoing and iterative. Ethical AI is not a destination. It is a continuous commitment woven into every operational step. Compliance professionals are uniquely suited to lead this work because we understand systems, controls, and organizational behavior. Ethical AI is compliance by another name.

Five Key Takeaways for the Compliance Professional

1. Ethical AI begins with ethical data. Ethical governance always starts with the quality, origin, and integrity of the data used to train and inform an AI system. Inaccurate, incomplete, unlawfully sourced, or unrepresentative data introduces bias and distortion before a single line of code is written. Compliance professionals must ensure that lawful bases, consent, ownership, and use rights are fully documented, and that sensitive information is minimized and properly protected. Ethical data sourcing also requires evaluating demographic representation and identifying potential bias hotspots early. When data is handled ethically, the entire lifecycle is strengthened, reducing long-term operational, regulatory, and reputational risks.

2. Documentation is an ethical control. Good documentation is not busywork. It is the backbone of ethical AI and a critical indicator of organizational seriousness. Model Cards provide transparency regarding purpose, intended users, limitations, and performance boundaries. Risk registers assign ownership and ensure accountability throughout development, deployment, and monitoring. Audit trails create the evidentiary record that regulators and external stakeholders expect when evaluating whether decisions were responsible, compliant, and well-governed. Without documentation, an organization cannot show that it understood the risks of a model or acted responsibly in response to them. Ethical AI requires a traceable, repeatable set of records that tells a clear story of control and oversight.

3. Ethical validation requires testing. Validation is often treated as a technical gate, but ethical AI requires a far broader examination of how a model behaves under real-world stress. Compliance teams must ensure models are exposed to adversarial testing, red-team challenges, privacy leak assessments, and discrimination checks. A model that performs with high accuracy in ideal conditions may fail ethically when confronted with edge cases or bad actors. Ethical validation demands looking not only at what the model is designed to do, but at what it might inadvertently do. Only by testing for harm, misuse, and unanticipated outcomes can organizations prevent downstream risks and protect users.

4. Deployment must include safeguards. Ethical deployment is the bridge between controlled development environments and unpredictable real-world use. Safeguards such as role-based access controls, environment segregation, and capability restrictions ensure the model is used appropriately. User disclosures prevent misunderstanding by making limitations, risks, and data practices clear. Deployment controls must also account for third parties. If a vendor, integrator, or partner interacts with the model, they must uphold equivalent governance standards. Ethical responsibility does not end at the organizational boundary. Compliance oversees these safeguards to ensure that the model behaves as expected, users are not misled, and vulnerabilities are not introduced through poor operational controls.

5. Ethical monitoring is continuous. Ethics in AI is not solved at launch. Models evolve as data, user behavior, and external conditions shift. Continuous monitoring detects drift, reintroduction of bias, system degradation, and misuse patterns before harm spreads. Compliance plays a central role by ensuring real-time alerts flow to appropriate stakeholders, not solely to engineering teams. Incident response frameworks allow the organization to act quickly, document remedial action, and learn from failures. Regular reporting to senior leadership and the board reinforces accountability and aligns AI behavior with organizational values. Ethical monitoring is the mechanism that keeps AI trustworthy long after deployment.

If compliance does not lead ethical AI governance, someone else will. It is time for compliance to step forward.

If you would like a checklist for Embedding Ethics into the AI Lifecycle, leave us a Voice Mail.

Categories
Blog

20 Questions Every Board Should Ask About AI

In boardrooms around the world, one theme now appears with more regularity than cyber risk, M&A uncertainty, or even financial performance. That topic is artificial intelligence. Not the lofty philosophical debate about whether machines will overtake human judgment, but the immediate, pragmatic question every director is trying to solve: How do we oversee AI in a way that protects the enterprise, unlocks value, and keeps regulators out of the boardroom?

For compliance professionals, this is a defining moment. AI risk has become the newest frontier where the board relies heavily on the compliance function to guide them. Sometimes with clarity, sometimes with guardrails, and occasionally with a well-timed reality check. This is the type of risk that exposes governance gaps quickly, and the questions the board asks, or fails to ask, will determine whether the company thrives in the age of AI or becomes the next cautionary tale.

Today, I outline 20 critical questions that every board should ask about AI. Think of them not simply as oversight prompts but as governance accelerators. Each one creates visibility, accountability, and structure. Those three elements are the foundation of every effective compliance program.

1. What are our highest-impact AI use cases, and who owns them?

Boards cannot oversee what they cannot see. The first and arguably most crucial step is obtaining a clear inventory of where AI is embedded in operations, not at a conceptual level, but with owners, systems, and risk ratings attached. When accountability is vague, risk grows quietly in the background.
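
To illustrate what such an inventory might look like, here is a minimal sketch of the kind of record and gap report a board could ask to see. The use cases, systems, owners, and ratings are assumptions for illustration.

```python
# Illustrative AI use-case inventory; every entry here is an assumption.
ai_inventory = [
    {"use_case": "claims triage", "system": "claims-triage-assistant",
     "owner": "VP Claims Operations", "risk_rating": "high"},
    {"use_case": "marketing copy drafts", "system": "gen-ai-copilot",
     "owner": None, "risk_rating": "low"},
    {"use_case": "credit pre-screening", "system": "score-model-v3",
     "owner": "Head of Lending", "risk_rating": None},
]

def governance_gaps(inventory: list[dict]) -> list[str]:
    """Flag entries a board report should escalate: missing owner or risk rating."""
    gaps = []
    for item in inventory:
        if not item["owner"]:
            gaps.append(f"{item['use_case']}: no accountable owner")
        if not item["risk_rating"]:
            gaps.append(f"{item['use_case']}: no risk rating")
    return gaps

for gap in governance_gaps(ai_inventory):
    print(gap)
```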

2. How does AI support our strategic objectives and create measurable value?

AI is not a magic wand. It must support strategy, not distract from it. Boards should ask whether AI materially improves revenue, reduces cost, enhances safety, increases accuracy, or strengthens customer outcomes. If the answer is ambiguous, the company may be deploying AI for the wrong reasons.

3. What data powers these systems, and do we have the legal and ethical rights to use it?

Data is the fuel for AI, but not all data is created or sourced equally. Boards should expect clarity on licensing rights, privacy implications, and any limitations on the use and reuse of data. If data lineage is unclear, the company’s regulatory exposure may be far greater than it realizes.

4. How are we assessing and mitigating bias in both data and outcomes?

Bias is not only a fairness issue. It poses operational, legal, and reputational risks. Boards should see a methodology, not simply an aspiration. That includes periodic testing, remediation procedures, and documentation that can withstand scrutiny from regulators, auditors, or litigators.

5. What guardrails prevent employees from entering sensitive information into generative AI tools?

Most AI failures begin with human error. Boards should understand which safeguards are currently in place, including policies, training programs, and technical restrictions, and how the company tests their effectiveness.

6. What is our model validation process before deployment?

Deploying unvalidated models, or worse, models validated exclusively by developers, invites significant risk. Boards should confirm that model validation includes accuracy testing, robustness checks, and cross-functional review involving compliance, legal, risk, and data science.

7. How do we monitor for model drift or degraded performance over time?

AI is not static. Models evolve, environments shift, and accuracy degrades. Ongoing monitoring is essential. Boards should request a drift detection plan that includes clear thresholds, well-defined triggers, designated responsible owners, and documented response actions.

8. What is our incident response plan for AI failures, hallucinations, or data leakage?

AI failures rarely resemble traditional IT outages. They can be subtle, gradual, or hidden until significant damage occurs. A strong incident response plan clarifies roles, timelines, escalation paths, and expectations for communication with customers and regulators. Boards should insist on a rehearsal, not merely a promise.

9. How are we documenting AI-related decisions?

When regulators come calling, documentation becomes destiny. Boards should ensure that decisions tied to high-impact AI models are recorded in a manner that demonstrates thoughtful oversight, rather than blind reliance on automation.

10. Which AI regulatory regimes apply to us across global markets?

The regulatory landscape is evolving rapidly. The EU AI Act, sector-specific guidance from the United States, China’s AI regulations, and new frameworks emerging in Australia, Brazil, Singapore, and the United Kingdom are just a few examples. Boards should expect a regulatory heat map that outlines exposure, obligations, and enforcement priorities.

11. How do we manage the risk associated with third-party AI vendors and model providers?

Vendors introduce significant risk, particularly when foundation models or APIs change without notice. Contracts must include audit rights, IP protections, confidentiality provisions, and mechanisms for monitoring downstream risk. Boards should look for a vendor governance framework, not a spreadsheet with logos.

12. What training have employees received on the responsible use of AI?

Employees cannot follow principles they do not understand. Boards should expect role-based training with regular refreshers, testing, and usage monitoring, rather than one-time videos or superficial check-the-box modules.

13. How do we ensure human oversight for high-impact or high-risk decisions?

This is where compliance delivers real value. “Human in the loop” cannot simply mean that a person glanced at a dashboard. It means the right individuals reviewed the right decisions with clarity on when they are obligated to intervene.

14. What KPIs tell us whether our AI systems are performing safely and as intended?

Boards should expect dashboards containing more than accuracy scores. KPIs should include incident counts, time-to-remediation, drift flags, bias findings, and operational impacts. What the company measures reveals what the company values.
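
A simple roll-up of the incident log can produce exactly this kind of board-level view. In the sketch below, the incident types, counts, and remediation times are assumptions used only to show the shape of the KPI summary.

```python
from statistics import mean

# Illustrative incident log; types and figures are assumptions.
incidents = [
    {"type": "drift_flag", "days_to_remediate": 4},
    {"type": "bias_finding", "days_to_remediate": 12},
    {"type": "hallucination_report", "days_to_remediate": 2},
]

def board_kpis(incident_log: list[dict]) -> dict:
    """Roll incidents up into the kind of KPI summary a board pack might include."""
    return {
        "incidents_this_quarter": len(incident_log),
        "avg_days_to_remediation": round(mean(i["days_to_remediate"] for i in incident_log), 1),
        "bias_findings": sum(1 for i in incident_log if i["type"] == "bias_finding"),
        "drift_flags": sum(1 for i in incident_log if i["type"] == "drift_flag"),
    }

print(board_kpis(incidents))
```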

15. What controls protect AI models and proprietary data from cyber threats?

AI significantly expands the attack surface. Models can be stolen, manipulated, or poisoned. Boards should see evidence of hardened access controls, encryption, logging, and monitoring, along with procedures for handling prompt-injection attacks and adversarial inputs.

16. How do we ensure transparency with customers, employees, and regulators when AI is used?

Transparency is becoming a regulatory expectation in many jurisdictions. Boards should verify whether AI disclosures are clear, accurate, and accessible to users, rather than being hidden in dense terms of service.

17. Are we over-relying on AI in any mission-critical processes?

AI concentration risk is real. When too many decisions or functions depend on a single model or vendor, the entire enterprise becomes fragile. Boards should evaluate whether redundancies exist and whether a single point of AI failure could create systemic risk.

18. What ethical principles guide our AI development and deployment?

Ethical frameworks only matter when they are embedded in daily processes and decision-making. Boards should expect evidence that ethical considerations influenced model selection, data sourcing, vendor evaluation, and deployment controls.

19. How is Internal Audit providing independent assurance over AI?

Internal Audit must play a role. AI risk touches processes, data, controls, vendors, and governance. These are areas Internal Audit already understands well. Boards should expect AI to be included in the annual audit plan, supported by a structured methodology.

20. What investments are required to manage AI risk in the next 12 months?

Boards appreciate transparency, not surprises. AI governance necessitates ongoing investment in personnel, skills, monitoring tools, testing environments, and data management capabilities. If AI grows without proportional governance funding, the company creates risk rather than value.

Why These Questions Matter Now

We are entering an era in which regulators expect boards to demonstrate active oversight of AI, just as they do for cybersecurity, financial controls, and data privacy. Gone are the days when AI could be treated as an IT experiment or a futuristic curiosity. Today, it sits squarely in the center of corporate governance. This means compliance oversight is required. For compliance professionals, this is an opportunity to step forward and provide structure. We can shape the conversation, establish frameworks, and guide leadership toward responsible adoption and implementation. These 20 questions give boards the clarity they need and give compliance the influence it deserves.

AI presents extraordinary potential, but potential without oversight becomes risk. Compliance professionals can ensure that the board asks the right questions, receives the necessary information, and establishes the appropriate controls for effective oversight. In the age of AI, strong governance is not simply a competitive advantage. It is a survival strategy.

If you would like the complete list of 20 questions, please leave us a Voicemail.

Categories
Compliance Tip of the Day

Compliance Tip of the Day – The Board and an AI Framework for Governance

Welcome to “Compliance Tip of the Day,” the podcast that brings you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, our goal is to provide you with bite-sized, actionable tips to help you stay ahead in your compliance efforts. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

This week, we continue our look at Board issues, considering how Boards of Directors need to think through AI governance. Today, we will consider a framework for AI governance.

For more on this topic, check out The Compliance Handbook: A Guide to Operationalizing Your Compliance Program, 6th edition, which was recently released by LexisNexis. It is available here.

Categories
AI Today in 5

AI Today in 5: August 11, 2025, The ACHILLES Project Episode

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories about AI from the business world, compliance, ethics, risk management, leadership, or general interest.

  • Will the ACHILLES Project simplify AI regs in the EU? (InnovationNewsNetwork)
  • AI – data privacy and governance in pharma. (EPR)
  • Compliance risks with AI integration. (InsuranceBusinessMag)
  • GenAI for tax and customs compliance. (IMF)
  • Will GenAI end ‘check the box’ compliance? (CCI)

For more information on the use of AI in compliance programs, see Tom Fox’s new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.