Artificial intelligence (AI) has been heralded as one of the most transformative technologies of our time. It promises efficiency, productivity, and entirely new business models. Yet, as with any tool of such power, AI is both a friend and a foe. For corporate directors, compliance officers, and risk professionals, AI presents a dual challenge: leveraging its defensive strengths while preparing for its potential weaponization by malicious actors.
The National Association of Corporate Directors (NACD), in partnership with the Internet Security Alliance (ISA), has released a special supplement to its Directors’ Handbook on Cyber-Risk Oversight devoted entirely to AI in cybersecurity. It is a timely publication: adoption is soaring, with 72% of companies already using AI in 2024, and the risks are accelerating just as fast. For the compliance community, the report provides a roadmap for oversight, governance, and the practical questions boards must ask management.
AI as Both Force Multiplier and Risk Multiplier
On one side of the ledger, AI enhances cybersecurity by automating threat detection, reducing false positives, identifying malware, and analyzing oceans of log data. Used wisely, AI allows companies to “get ahead of theft,” identifying vulnerabilities before criminals exploit them. Generative AI and large language models (LLMs), in particular, can speed detection, enrich threat indicators, and even suggest remediation steps.
However, these same capabilities are available to cybercriminals. AI lowers the barrier to entry for less sophisticated hackers, turbocharges phishing and social engineering campaigns, and allows nation-states to refine cyberattacks at scale. This duality makes AI unique: it amplifies both opportunity and risk simultaneously.
Oversight Imperatives for Boards
The handbook identifies four key imperatives for boards responsible for overseeing AI and cybersecurity.
1. Director Education – Boards must commit to continuous learning about AI’s risks, benefits, and regulatory developments. Few leaders yet possess the technical grounding needed to appreciate AI’s implications.
2. Threat and Opportunity Awareness – Directors must understand not just the dangers but also the strategic benefits AI can bring.
3. Regulation and Disclosure – Boards must anticipate evolving rules and disclosure obligations. AI oversight will require the same level of rigor as financial and ESG reporting.
4. Board Readiness – Boards must ensure management builds governance structures, ethical use frameworks, and clear communication channels about AI’s role.
Compliance Lessons from the NACD AI in Cybersecurity Handbook
1. Third-Party and Supply Chain Risk Will Intensify
Boards are advised to scrutinize vendors’ AI tools and data sources. As the handbook emphasizes, AI models can be trained on data of questionable provenance, including protected intellectual property, personally identifiable information, or even classified information. Using such models can expose organizations to liability. For compliance professionals, this means conducting enhanced due diligence on third-party AI systems. Ask vendors how they source training data, what models they use, and whether they have human oversight mechanisms in place to ensure quality. AI risk is now a key component of supply chain risk.
2. Transparency Is a Non-Negotiable
AI systems often function as “black boxes.” Their lack of explainability poses reputational and legal risks when decisions cannot be justified. Boards are urged to push for transparency in AI deployment, both internally and in customer-facing applications. For compliance professionals, this means incorporating explainability into your AI governance framework. Require documentation of training data, decision-making logic, and model limitations. If regulators ask, you must be able to show your work.
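One lightweight way to operationalize this documentation requirement is a standard record kept for every deployed AI system. The sketch below is hypothetical: the field names are illustrative assumptions, not drawn from the NACD handbook or any formal model-card standard.

```python
from dataclasses import dataclass

# Hypothetical sketch of a per-system documentation record a compliance
# team might require. Field names are illustrative, not a formal standard.
@dataclass
class ModelCard:
    system_name: str
    business_purpose: str
    training_data_sources: list   # provenance of the training data
    decision_logic_summary: str   # plain-language explanation of how it decides
    known_limitations: list       # documented failure modes and blind spots
    human_oversight: str          # who reviews outputs, and when
    last_reviewed: str            # date of last governance review

    def gaps(self):
        """Return the documentation fields still missing, for audit follow-up."""
        missing = []
        if not self.training_data_sources:
            missing.append("training_data_sources")
        if not self.known_limitations:
            missing.append("known_limitations")
        if not self.decision_logic_summary:
            missing.append("decision_logic_summary")
        return missing
```

A periodic review can then call `gaps()` on each record and escalate any system whose documentation is incomplete before regulators ask the same questions.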
3. Continuous Monitoring Is the New Standard
As highlighted in the AI Seven-Step Governance Program, AI oversight requires more than pre-deployment testing. Continuous monitoring, auditing, and retraining must occur throughout the lifecycle of AI tools to ensure their effective use. For the compliance professional, this means your program must move beyond “check-the-box” vendor certifications. Build ongoing monitoring and assurance processes. Think of AI oversight as dynamic, not static.
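The lifecycle monitoring described above can be sketched in a few lines. This is an illustrative drift check only, not anything the handbook prescribes; the metric, baseline, and tolerance are assumptions a compliance team would tune to its own program.

```python
# Illustrative sketch of continuous monitoring: compare a model's live
# performance metric against its validated baseline and flag drift for
# human review. The 5% tolerance is an assumed threshold, not a standard.
def check_drift(baseline_rate, observed_rate, tolerance=0.05):
    """Return a monitoring finding; escalate when drift exceeds tolerance."""
    drift = abs(observed_rate - baseline_rate)
    status = "ESCALATE" if drift > tolerance else "OK"
    return {"status": status, "drift": round(drift, 4)}
```

Run on a schedule, a check like this turns "check-the-box" certification into an ongoing assurance process: every escalation becomes a documented review, retraining decision, or vendor inquiry.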
4. Regulation Will Come Fast and Furious
The NACD warns that while regulators often lag innovation by three to five years, the window for AI is already shortening. Boards relying on a “wait and see” approach will find themselves overwhelmed when rules arrive. Clearly, the compliance function must do more than wait for the regulators; even if the US government were inclined to move quickly, the political will for a comprehensive federal AI framework does not yet exist. This means you should align your approach today with emerging frameworks, such as the EU AI Act, the NIST AI Risk Management Framework, and the OECD AI Principles. Position your company to demonstrate proactive governance.
5. Disclosure Expectations Will Rise
AI adoption carries disclosure obligations across transparency, risk assessment, and incident reporting. Boards must assume that regulators and investors alike will demand clear, timely disclosure of AI-related incidents and governance practices. Compliance must lead the way: build AI into your company’s disclosure controls and procedures now. Ensure incidents involving AI failures are reported with the same rigor as material cybersecurity breaches.
6. The Board Must Get Educated—and Fast
The handbook emphasizes director education. Boards that lack AI fluency will struggle to provide proper oversight. Worse, they may overestimate management’s ability to mitigate AI risks. You should encourage board training through NACD, Carnegie Mellon’s CERT program, or trusted third-party advisors. Education is no longer optional; it may well become a fiduciary duty.
7. Governance Structures Must Evolve
Some companies are considering dedicated AI committees, while others integrate AI oversight into existing audit or risk committees. Either way, boards need clear lines of accountability. The handbook lists an extensive set of questions boards should ask management, including:
- How are competitors using AI?
- Do we need a Chief AI Officer?
- What is our exposure if adversaries use AI against us?
- Have we segregated training data to know its provenance?
- Are our policies aligned with the EU AI Act’s risk classifications?
Start these conversations today. Board agendas must include AI oversight as a recurring topic.
Building a Compliance Playbook for AI
Compliance professionals can translate the NACD’s recommendations into a practical playbook for their programs, incorporating the following key concepts.
- Embed AI governance early – Don’t bolt compliance onto AI projects after the fact. Integrate governance into design and procurement stages.
- Adopt a human-centered AI approach – Ensure AI is aligned with corporate values and ethical principles, not just efficiency goals.
- Use risk quantification – Treat AI risk like any other enterprise risk: quantify, compare, and integrate into ERM frameworks.
- Demand accountability – Require clear responsibility for AI oversight, whether it sits with the Chief Compliance Officer, CIO, or a new Chief AI Officer role.
- Engage regulators early – Use disclosure and transparency as tools to build trust with regulators and stakeholders.
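The risk-quantification concept above can be made concrete with a simple likelihood-times-impact score, a common ERM convention rather than anything the handbook prescribes. The risk names, scales, and tier cutoffs below are illustrative assumptions.

```python
# Minimal sketch of AI risk quantification for an ERM register, assuming
# a 1-5 likelihood and 1-5 impact scale. Tier cutoffs are illustrative.
def score_risk(likelihood, impact):
    """Score a risk and assign a tier so AI risks rank alongside other enterprise risks."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on a 1-5 scale")
    score = likelihood * impact
    tier = "high" if score >= 15 else "medium" if score >= 8 else "low"
    return {"score": score, "tier": tier}

# Hypothetical AI risks entered into the same register as any other risk.
register = {
    "third-party AI model provenance": score_risk(4, 4),
    "AI-enhanced phishing": score_risk(5, 3),
    "model explainability gap": score_risk(3, 4),
}
ranked = sorted(register.items(), key=lambda kv: kv[1]["score"], reverse=True)
```

Quantified this way, AI risks can be compared, prioritized, and reported through the same ERM channels the board already reviews, rather than treated as a separate technology concern.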
The Handbook makes clear that AI in cybersecurity is not just a technology issue. It is an enterprise risk, a boardroom issue, and a compliance mandate. For compliance professionals, this means you must step into the AI oversight conversation.
As with the FCPA decades ago, regulators and stakeholders will expect companies to transition from a reactive to a proactive approach. The time to build frameworks, train directors, and embed oversight is now. AI, like every disruptive technology before it, will reward the prepared and punish the complacent. Compliance professionals are uniquely positioned to bridge the technical and governance divide. By applying lessons from the NACD handbook, we can ensure that AI becomes not just a tool for criminals but a force multiplier for integrity, trust, and resilience in the digital age.