Artificial intelligence may be built on data, models, and code, but governance ultimately rests on people. For boards and Chief Compliance Officers, one of the most important questions is not only whether the organization has responsibly approved AI tools, but also whether employees are prepared to challenge them, report concerns, and apply human judgment when something does not look right. In many organizations, the earliest warning system for AI failure is not a dashboard. It is the workforce.
Over the course of this series, I have explored four critical governance challenges in AI: board oversight and accountability, strategy outrunning governance, data governance and privacy, and ongoing monitoring. This final blog post turns to the fifth and most underappreciated challenge of all: culture, speak-up, and human judgment.
Underappreciated because organizations often begin AI governance with structure in mind. They build committees, draft policies, classify risks, and establish approval gates. All of that is necessary. But structure alone is not sufficient. If the human beings closest to the work do not understand their role in AI governance, do not feel empowered to raise concerns, or begin to defer too readily to machine-generated outputs, the governance framework will be weaker than it appears on paper.
This is the point many companies miss. AI governance is not only about the technology. It is about whether the organization’s culture supports the responsible use of technology.
Employees Will See AI Failures First
In many companies, the first person to notice an AI problem will not be a board member, a Chief Executive Officer, or even a member of the governance committee. It will be an employee interacting with the tool in daily operations. It may be the customer service representative who sees the system generating inaccurate responses. It may be the HR professional who notices troubling patterns from an AI-supported screening tool. It may be the sales employee who sees a generative tool overstating product claims. It may be the finance professional who questions an automated summary that does not match underlying records. It may be the compliance analyst who sees a tool being used for an unapproved purpose.
That matters because early visibility is one of the most valuable protections a company can have. But visibility only becomes a control if employees know what to do with what they see. That is why culture is a governance issue. A workforce may spot the problem, but if employees do not understand that AI-related concerns are reportable, are unsure where to raise them, or believe management will ignore them, the warning system fails.
For boards and CCOs, that means AI governance cannot stop at policy creation. It must extend into behavior, reporting norms, and organizational trust.
Speak-Up Culture Is an AI Governance Control
Compliance professionals have long known that a speak-up culture is a control. It is often the first way a company learns of misconduct, process breakdowns, weak supervision, retaliation, harassment, fraud, or control evasion. The same principle now applies with equal force to AI.
Employees may observe biased outputs, inaccurate recommendations, privacy concerns, unexplained model behavior, misuse of tools, inappropriate reliance on machine-generated content, or efforts to bypass required human review. If they do not report those concerns, management may have no timely way to know what is happening.
This is where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) remains highly instructive. The ECCP places substantial emphasis on whether employees are comfortable raising concerns, whether the company investigates them appropriately, and whether retaliation is prohibited in practice. Those same questions should now be asked in the context of AI. Does the company’s reporting framework explicitly include AI-related concerns? Are managers trained to recognize and escalate those concerns? Are reports investigated with the same seriousness as other compliance issues? Are employees protected if they raise uncomfortable questions about a tool the business wants to use?
If the answer to any of these questions is no, the company may have AI procedures, but it has not yet embedded AI governance in its culture.
Human Judgment Cannot Be Optional
One of the most significant risks in AI governance is not simply that a model will be wrong. It is that people will stop questioning it. AI systems can produce outputs quickly, fluently, and with apparent confidence. That creates a powerful temptation for users to over-trust the tool. When a system sounds polished, appears efficient, and reduces workload, people may assume that its conclusions deserve deference. This is precisely where governance needs the corrective force of human judgment.
Human judgment cannot be treated as a ceremonial step or a paper requirement. It must be meaningful. That means the people reviewing AI outputs must have the authority, time, training, and confidence to challenge those outputs when needed. A human review requirement that exists only on paper is not much of a safeguard. If reviewers are overloaded, insufficiently trained, or culturally discouraged from slowing the process, the control may be largely illusory.
Boards should care about this because one of the easiest mistakes management can make is to describe human oversight in governance documents without testing whether it is functioning in practice. CCOs should care because this is a classic compliance problem. A control may be designed elegantly but fail in daily operations because the supporting culture is too weak to sustain it.
Training Must Change with AI
A company cannot expect good judgment around AI if it has not trained people on what good judgment looks like. That means AI training should go beyond technical usage instructions. Employees need to understand what risks may arise, what concerns are reportable, what approved use looks like, what prohibited use looks like, and why human challenge matters. Managers need additional training because they are often the first informal escalation point when an employee raises a concern. If managers dismiss AI concerns as overreactions, inconveniences, or resistance to innovation, the speak-up system will quickly lose credibility.
Training should also be role-based. The risks faced by a customer-facing team may differ from those faced by teams in HR, legal, procurement, marketing, finance, or internal audit. A generic AI training module may create awareness, but it will not create the operational judgment needed in high-risk areas.
This is where the NIST AI Risk Management Framework provides practical value. NIST’s emphasis on governance is not limited to formal structures. It contemplates culture, accountability, and the need for organizations to support informed decision-making across the enterprise. ISO/IEC 42001 similarly reinforces the importance of organizational competence, awareness, and defined responsibilities. Both frameworks point to a critical truth: responsible AI use depends not only on controls over the technology, but also on the capabilities of the people who use and oversee it.
Managers Matter More Than Companies Often Realize
If culture is the operating environment of governance, managers are often its most important local translators. An employee may not begin by filing a formal report. More often, an employee may raise a concern informally with a supervisor or colleague. “This output does not seem right.” “I do not think we should be using it this way.” “This seems to be pulling in sensitive information.” “This recommendation may be biased.” “The human review is not really happening anymore.”
The manager’s response in that moment matters enormously. Does the manager take the concern seriously? Does the manager know it should be escalated? Does the manager see it as a governance issue or as resistance to efficiency? Does the manager understand the difference between a minor usability complaint and a potentially significant compliance concern?
This is why boards and CCOs should not think about speak-up solely in hotline terms. AI governance depends on the broader management culture. If supervisors are not equipped to receive and escalate AI concerns appropriately, many issues will die in the middle of the organization before they ever reach a formal channel.
Anti-Retaliation Must Be Real in the AI Context
There is another dimension that cannot be overlooked: the risk of retaliation. In some organizations, employees may hesitate to raise AI concerns because they fear being labeled anti-innovation, obstructionist, or not commercially minded. That creates a subtle but serious governance risk. If the corporate atmosphere celebrates rapid AI adoption without equally celebrating responsible challenge, then employees may conclude that silence is safer than candor.
This is why anti-retaliation messaging must be explicit in the AI context. The company should make clear that raising concerns about inaccurate outputs, misuse, privacy risks, unfairness, or control breakdowns is part of responsible business conduct. It is not a failure to embrace innovation. It is a contribution to the effective governance of innovation.
The CCO should ensure that AI-related concerns are incorporated into existing anti-retaliation frameworks, investigation protocols, and communications. Boards should ask whether employee sentiment data, hotline trends, and internal investigations provide any signal that people are reluctant to question AI initiatives. If the organization is moving aggressively on AI, it should be equally serious about protecting those who raise governance concerns about it.
Documentation and Escalation Still Matter
As with every other aspect of AI governance, culture and judgment must be integrated into the process. A company should document how AI-related concerns can be reported, how they are triaged, who reviews them, what escalation triggers apply, and how resolutions are tracked. Concerns about AI should not be dismissed as vague general complaints. They should be reviewable and analyzable over time.
This is essential not only for accountability but for learning. Patterns in employee concerns may reveal weaknesses in training, design, vendor management, access controls, or oversight. A single report may be an isolated event. Repeated concerns within a single function may point to a systemic governance problem. That is why speak-up is not just about receiving reports. It is about turning those reports into organizational intelligence.
The ECCP again offers a useful framework. It asks whether investigations are timely, whether root causes are examined, and whether lessons learned are fed back into the compliance program. AI governance should work the same way. A reported concern should not end with a narrow answer to the immediate complaint. It should prompt management to ask what the issue reveals about the broader governance environment.
Boards Must Model the Right Tone
This final point may be the most important. Culture is shaped by what leadership rewards, tolerates, and asks about. If the board only asks about AI efficiency, adoption, and speed, management will take the signal. If the board asks whether employees are raising concerns, whether human oversight is meaningful, whether managers are trained, and whether retaliation protections are working, management will take that signal as well.
For CCOs, this is a vital opportunity. The compliance function can help boards understand that governance is not only about structure and controls, but also about whether the organization has preserved the human capacity to question, escalate, and correct. In the AI context, that may be the most important governance capability of all.
Because in the end, even the most advanced system will not govern itself. An enterprise must govern it. That requires culture. It requires trust. It requires the courage to speak up. And it requires strong human judgment to look at an impressive output and still ask, “Is this right?”
The Human Side of Governance Is the Decisive Side
This final article brings the series back to a simple truth. AI governance is not only about what the company builds. It is about how the company behaves.
Boards may establish oversight. Management may create structures. Compliance may build controls. But if employees are not prepared to report concerns or exercise judgment, the organization will remain vulnerable. A strong AI governance program does not merely control the system. It empowers the people around the system to challenge it responsibly.
That is the human side of governance, and in many ways it is the decisive side.