Board Oversight and Monitoring of AI Risks

As companies rapidly adopt artificial intelligence (AI), robust governance frameworks become paramount. AI can deliver vast business benefits, but it also carries significant risks, such as disinformation, racial discrimination, and privacy invasion. In this episode of Corruption, Crime and Compliance, Michael Volkov dives deep into the urgent need for corporate boards to monitor and address AI risks, to incorporate AI into their compliance programs, and to understand the many facets this entails.

You’ll hear Michael talk about:

  • AI is spreading like wildfire across industries, bringing a whole new set of risks that many boards don’t fully understand. Boards must be educated about both the potential and the pitfalls of AI, and they must actively oversee the resulting risks. That includes understanding their obligations under Caremark, which requires directors to exercise diligent oversight and monitoring.
  • AI is a tantalizing prospect for businesses: faster, more accurate processes that can revolutionize operations. But with great power comes great responsibility: AI also carries risks such as disinformation, bias, privacy invasion, and even mass layoffs. It’s a delicate balancing act that businesses need to get right.
  • Companies can’t just use AI; they have to be ready for it. That means tailoring compliance policies and procedures to their specific AI risk profile, actively identifying and assessing those risks, and staying up to date on potential regulatory changes related to AI. As AI adoption grows, strong risk mitigation strategies before implementation become even more important.
  • Under the Caremark framework, corporate boards must ensure that their companies comply with applicable AI regulations. Recent cases, such as the Boeing safety oversight litigation, demonstrate the severity of the consequences when boards fail to fulfill their oversight responsibilities. Boards must therefore be proactive: ensure that board members have the necessary technical expertise, brief them on AI deployments, designate senior executives responsible for AI compliance, and maintain clear channels for individuals to report issues.


KEY QUOTES

“Board members usually ask the Chief Information Security Officer or whoever is responsible for technology [at board meetings], ‘Are we doing okay?’ They don’t want to hear or get into all of the details, and then they move on. That model has got to change.”


“In this uncertain environment, stakeholders are quickly discovering the real and significant risks generated by artificial intelligence, and companies have to develop risk mitigation strategies before implementing artificial intelligence tools and solutions.”


“Board members should be briefed on existing and planned artificial intelligence deployments to support the company’s business and/or support functions. In other words, they’ve got to be notified, brought along that this is going to be a new tool that we’re using, ‘Here are the risks, here are the mitigation techniques.’”


Resources:

Michael Volkov on LinkedIn | Twitter

The Volkov Law Group
