Categories
Hill Country Authors

Hill Country Authors Podcast: Dark Texas: A Worst-Case Look at the Texas Power Grid – Through Fiction

Welcome to a new season of the award-winning Hill Country Authors Podcast, sponsored by Stoney Creek Publishing. In this podcast, Hill Country resident Tom Fox visits with authors who live in and write about the Texas Hill Country. In this episode, host Tom Fox interviews fellow UT grad Charles J. Petrie about his novel Dark Texas, inspired by his frustration with articles claiming the Texas power grid failure during the Snowpocalypse “could have been worse.”

Petrie, who holds a PhD in computer science and has a research background, explains that he dug into grid resilience and found deeper risks: reliance on gas-fired generation even though gas pipeline pressure depends on electricity-driven compressors, and the fragility of black start capability. He says 82% of Texas black start generators were inoperable during the event; some could not run without external electricity or stored fuel oil, and others had not been maintained in a competitive market. Petrie chose fiction because a technical treatment grew too complex and a novel could make people care. He describes characters taking over the writing, cites influences and craft lessons from various authors, shares that he has drafted a sequel prompted by a dark epilogue, and recounts publishing with Stoney Creek Publishing after 50 agent rejections.

Key highlights:

  • Why Write Dark Texas
  • Texas Grid Risks Explained
  • Black Start Breakdown
  • Turning Research Into Fiction
  • Characters Take Over
  • Authors and Writing Lessons
  • Finding a Publisher

Resources:

Dark Texas on Stoney Creek Publishing

Connect with Charles on Facebook

Learn more about Stoney Creek Publishing

Podcast Cover Art

Nancy Huffman Fine Art

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
All Things Investigations

ATI In-House Insights: Navigating Internal Investigations: A Conversation with Mike Gill

Welcome to the Hughes Hubbard Anti-Corruption & Internal Investigations Practice Group’s podcast, All Things Investigations. This is a special series featuring insights from in-house practitioners, hosted by Mike DeBernardis. In this episode, Mike DeBernardis visits with Mike Gill, Assistant General Counsel and Director of Investigations at HII, about conducting internal investigations from an in-house perspective in a defense shipbuilding environment.

Gill says the first concern when allegations arise is immediate safety risk to employees and the integrity of work affecting Navy and other military customers, followed by designing an investigation that will be viewed as timely, accurate, and credible. He emphasizes scoping, planning, selecting the right team (including technical experts and, sometimes, outside counsel), and establishing disciplined communication and reporting lines to management and customers while protecting privilege. Gill highlights building employee trust through fair processes, enforcement of anti-retaliation policies, and appropriate follow-up, and notes common mistakes: jumping to conclusions, failing to bound scope, and inadequate planning.

Key highlights:

  • Safety First Priorities
  • Architecting the Investigation
  • Scope Planning and Team
  • Protecting Privilege
  • Culture and Fairness
  • Anti-Retaliation Trust
  • Top Mistakes to Avoid

Resources:

Hughes Hubbard & Reed website

Mike DeBernardis

Mike Gill on LinkedIn

Categories
Daily Compliance News

Daily Compliance News: April 2, 2026, The Hung Jury Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world: compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Was Iran behind the thwarted BoA attack? (FT)
  • ABA can sue Trump over the illegal banning of law firms. (Reuters)
  • FirstEnergy case ends with a hung jury. (Ohio Capital Journal)
  • Indonesia detains coal tycoon over corruption. (Bloomberg)
Categories
GSK in China: 13 Years Later

GSK In China: 13 Years Later – Where Was the Board? Director Oversight and Doing Business in China

Thirteen years after the GSK China scandal exploded onto the global stage, its lessons remain as urgent as ever for compliance professionals and business leaders. In this podcast series, we revisit the case not simply as corporate history, but as a living cautionary tale about culture, incentives, third parties, investigations, and governance. Each episode explores what went wrong, why it went wrong, and how those failures still echo in today’s compliance and ethics landscape. Join me as we unpack the scandal and draw practical lessons for building stronger, more resilient organizations. This episode examines why major bribery scandals occur “under the board’s nose,” using GSK as a launching point to explain directors’ legal and practical compliance responsibilities.

It traces oversight duties under Delaware law, highlighting Caremark’s good-faith duty to ensure information and reporting systems, Stone v. Ritter’s standard of liability for sustained or systematic oversight failure, and the business judgment rule. It contrasts “check-the-box” programs with risk-based oversight via the Piat case, where formal compliance masked illegal conduct embedded in business plans. The discussion ties board expectations to the FCPA guidance hallmarks, emphasizing tone at the top, empowered compliance functions with direct board access, DOJ/SEC scrutiny, SEC Regulation S-K Item 407 risk-oversight disclosures, and potential disgorgement. It then turns to China as a high-risk environment, third-party intermediary exposure, and M&A “deal-breaker” dilemmas requiring rigorous pre- and post-acquisition diligence, concluding with the paradox that boards may be incentivized toward plausible deniability. Our hosts are Timothy and Fiona.

Key highlights:

  • Compliance Starts at the Top
  • Caremark Duty Explained
  • FCPA Hallmarks for Boards
  • Passive Board Era Ends
  • Plausible Deniability Paradox

Resources:

GSK in China: A Game Changer for Compliance on Amazon.com

GSK in China: Anti-Bribery Enforcement Goes Global on Amazon.com

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Ed. Note: NotebookLM created the voices of the hosts, Timothy and Fiona, based on text written by Tom Fox.

Categories
Kerr250 Podcast

The Kerr250 Podcast: Kerr250’s Evolution Car Show: Celebrating America’s 250th Birthday and Community Resilience

Kerr250 is a community-focused podcast dedicated to celebrating America’s 250th birthday through the people, businesses, traditions, and events of Kerr County. As our nation marks this historic anniversary on July 4, 2026, Kerr250 will highlight local celebrations and community efforts that bring this milestone to life. Each episode will feature conversations with local leaders, business owners, organizers, volunteers, and proud citizens who are helping make Kerr County a vibrant part of this national moment. The podcast will explore how history, patriotism, service, and community pride come together in one county that believes America’s strength has always come from its people. Kerr250 is where Kerr County honors the past, celebrates the present, and helps inspire the future. In our inaugural episode, Tom Fox visits with Doug Hetzler, Kerr250 Committee Member and head of the Evolution Car Show.

The “Evolution Car Show” will display cars in chronological order to show the evolution of American auto manufacturing. The event will be a free show at Louise Hays Park, featuring food trucks and vendors, aimed at rebuilding positive community memories after the flood. Doug describes other Kerr250 initiatives, including DAR-supported flag displays on Sidney Baker Street, commemorative tree sales with medallions, and a Fourth on the River program split into a memorial and a celebration. He emphasizes broad community participation, provides the website (kerr250.com) and contact (info@kerr250.com), and encourages inclusion of everyday vehicles, not just high-end classics.

Highlights include:

  • Vision for Evolution Car Show
  • What Is Kerr250
  • How to Join and Website
  • Why 250 Years Matter
  • Car Show Open to All Cars

Resources:

Kerr250 website

Evolution Car Show

Categories
AI Today in 5

AI Today in 5: April 2, 2026, The Just Say No Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world: compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Responsible AI in the regulatory framework. (Wealth Management)
  2. HHS moves AI in healthcare oversight. (GovInfo Security)
  3. Creating an AI Incident and Response Plan. (National Review)
  4. Where is AI in healthcare headed? (Futurism)
  5. Saying No in GenAI projects. (FinTech Global)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.


Categories
Blog

AI Risk Appetite: The Conversation Boards Are Not Having

There is a quiet but serious problem developing in boardrooms around AI. Directors are hearing about innovation. They are hearing about productivity gains. They are hearing about competitive pressure, transformation, and speed. What they are not hearing enough about is risk appetite. That is the missing conversation.

Most companies are already using AI in one form or another. Some are deploying enterprise tools. Some are approving vendor solutions with embedded AI. Some are allowing business units to experiment in a controlled fashion. Some, of course, are doing all of the above and pretending it is a strategy. Yet for all the discussion about adoption, there has been far less focus on a basic governance question: what level of AI-driven decision risk is acceptable for this company? That is not a technical question. It is a board question.

The Risk Appetite Gap in AI Governance

AI is not simply another software purchase. It can influence recommendations, rankings, forecasts, summaries, classifications, and decisions. It can operate upstream from business judgments or directly within them. It can affect customer communications, hiring decisions, compliance monitoring, internal investigations, financial analysis, and reporting workflows. So the central governance challenge is not whether AI exists in the enterprise. It is how much authority the company is willing to give it, in what contexts, with what controls, and with what margin for error. If you do not define that, you do not have AI governance. You have AI optimism.

What Is AI Risk Appetite?

At its core, AI risk appetite is the level and type of AI-related risk an organization is willing to accept in pursuit of business value. That includes a series of questions boards ought to be asking. How much error is acceptable in AI-generated output before a human must intervene? Which uses are low-risk productivity enhancements, and which are sensitive, consequential, or reputation-threatening? In what contexts can AI make recommendations only, and in what contexts can it influence or automate action? How much dependence on opaque third-party models is acceptable? What degree of explainability does the company require for different use cases? When does speed stop being a benefit and start becoming exposure?

Many boards are currently discussing AI deployment without ever discussing AI tolerance. That is like approving a global third-party strategy without deciding what level of distributor risk, sanctions exposure, or bribery risk the company is prepared to accept. No compliance professional would recommend that. Yet in AI, organizations do versions of it every day.

Why Boards Avoid the Conversation

There are several reasons boards have been slow to engage on AI risk appetite.

First, the technology moves fast, and the terminology can become a fog machine. Directors do not want to look uninformed, so discussions often stay broad and strategic. Second, management may not yet have the internal inventory or classification framework needed to make a risk-appetite conversation concrete. Third, many companies are still in an experimentation phase, which creates the illusion that formal governance can come later. Fourth, there is a natural tendency to believe AI risk belongs to IT, legal, or security, rather than to enterprise oversight.

AI risk appetite cannot be delegated away because it intersects with business judgment, ethics, records, privacy, data governance, resilience, and culture. It cuts across functions. It also cuts across reputational boundaries. If a company uses AI in a way that produces unfair results, faulty decisions, poor disclosures, or customer harm, nobody is going to say, “Well, that was a technical issue, so the board need not have been involved.” Boards do not get a hall pass when the governance system is missing.

The Conversations Boards Need to Be Having

Risk Map. The first conversation is about where AI sits on the company’s risk map. Is AI a productivity tool, a strategic platform, a decision-support capability, or some combination of all three? The answer matters because it affects the level of oversight. A company using AI for internal drafting support faces one type of exposure. A company using AI in customer-facing interactions, underwriting, hiring, fraud detection, or compliance monitoring faces quite another.

Decision Significance. Boards need to ask where AI is being used in decisions that affect legal rights, financial outcomes, customer treatment, employment status, compliance judgments, or public disclosures. Not all uses are equal. A board that treats AI use in marketing copy the same as AI use in employee discipline is not governing. It is lumping.

Acceptable Error and Human Review. Boards should ask: what level of inaccuracy can the company tolerate in a given use case, and who is accountable for checking the output before action is taken? Human oversight has become one of those phrases everybody likes and few define. Directors need something more disciplined. When is review mandatory? What does a meaningful review look like? What evidence shows that the reviewer is not simply rubber-stamping machine output?

Data and Model Dependency. What data is being used? Who owns it? Who has the right to it? How current is it? Are third-party vendors changing capabilities under existing contracts? Is the company becoming dependent on systems it does not fully understand or cannot easily audit? Boards should not need to know how the engine works, but they absolutely need to know whether the company is driving a car with uncertain brakes.

Incident Tolerance and Escalation. What types of AI failures must be reported to senior leadership or the board? A hallucinated internal memo may be embarrassing. A flawed AI-assisted hiring screen or customer communication may be far more serious. The board should ensure management has defined materiality thresholds before an incident occurs, not after the headlines begin.

The CCO’s Role in Shaping the Conversation

This is where compliance officers can be enormously helpful.

The CCO is often the person in the enterprise most experienced at turning abstract risk into operating discipline. Compliance knows how to frame risk-based governance. It knows how to create escalation structures, policy frameworks, investigations protocols, and oversight dashboards. It knows that culture and control design matter just as much as rules. Here are four ways to do so.

  1. A CCO can help management develop a tiered inventory of AI use cases. This is essential. Boards cannot discuss appetite in the abstract. They need to see the map. Which uses are low risk? Which are medium? Which are high? Which are prohibited absent specific approval?
  2. Compliance can help translate legal, ethical, and operational concerns into board-level language. Directors do not need a seminar on neural networks. They need clear framing around consequences, control points, accountabilities, and thresholds.
  3. A CCO can help build governance around human review, documentation, and escalation. If the company says a human is responsible, compliance can help test whether that responsibility is real, documented, and operational.
  4. Compliance can keep the conversation grounded in how people actually behave. Employees will choose convenience. Business teams will move quickly. Vendors will market aggressively. Managers may trust the generated output more than they should. A good compliance officer knows that policy must be built for actual human behavior, not ideal behavior.
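
The tiered inventory described in point 1 can be made concrete. The sketch below is purely illustrative: the tier names, the use-case attributes (legal-rights impact, customer exposure, automation), and the classification rules are all assumptions standing in for whatever a company's own risk-appetite statement would actually specify.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"                 # productivity use; spot-check only
    MEDIUM = "medium"           # human review required before action
    HIGH = "high"               # review plus documented sign-off
    PROHIBITED = "prohibited"   # barred absent specific approval

@dataclass(frozen=True)
class AIUseCase:
    name: str
    affects_legal_rights: bool  # hiring, discipline, credit, disclosures
    customer_facing: bool       # output reaches customers directly
    automates_action: bool      # acts without a human in the loop

def classify(uc: AIUseCase) -> RiskTier:
    """Map a use case to a tier under illustrative, assumed rules."""
    if uc.affects_legal_rights and uc.automates_action:
        return RiskTier.PROHIBITED
    if uc.affects_legal_rights or (uc.customer_facing and uc.automates_action):
        return RiskTier.HIGH
    if uc.customer_facing or uc.automates_action:
        return RiskTier.MEDIUM
    return RiskTier.LOW

# Tiers at or above which human review is mandatory (an assumption).
REVIEW_REQUIRED = {RiskTier.MEDIUM, RiskTier.HIGH}

drafting = AIUseCase("internal drafting support", False, False, False)
screening = AIUseCase("automated resume screening", True, False, True)
print(classify(drafting).value)   # low
print(classify(screening).value)  # prohibited
```

The point of even a toy version like this is that it forces the questions boards skip: every use case must be named, every attribute must be answered, and every tier must carry defined consequences, rather than leaving "human oversight" as an undefined phrase.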

Compliance as Risk Mitigation and Business Enablement

One of the enduring frustrations in compliance is that governance is often viewed as a speed bump until something goes wrong. AI gives us another chance to make the larger point. Governance does not slow innovation. Bad governance slows innovation by causing rework, distrust, remediation, and public embarrassment.

A well-defined AI risk appetite does the opposite. It gives the business clarity. It tells innovation teams where they can move quickly and where they must slow down. It helps procurement negotiate the right terms. It helps managers know when to escalate. It helps employees understand when they may rely on AI and when they must verify it. Most importantly, it gives the board a strategic rather than reactive basis for oversight.

That is compliance at its best. Not Dr. No from the Land of No, but the function that makes responsible growth possible.

Final Thoughts

Boards need not fear AI. But they do need to govern it. And governance begins with clarity about appetite. If your board has discussed an AI opportunity but not AI tolerance, it has only had half the conversation. If your company has adopted tools but has not defined acceptable levels of error, autonomy, dependency, and oversight, it is operating on hope. Hope, as every compliance professional knows, is not a strategy and certainly not a control.

Here are the questions I would leave you with. Has your board defined what level of AI-driven decision risk it is willing to accept? Can management explain how that appetite changes across low-risk and high-risk use cases? And can your compliance function show, with evidence, whether the company is operating inside those lines? If the answer is no, then the conversation boards are not having may be the most important AI conversation of all.