Categories
Corruption, Crime and Compliance

Board Oversight and Monitoring of AI Risks

As companies rapidly adopt artificial intelligence (AI), it becomes paramount to have robust governance frameworks in place. Not only can AI bring about vast business benefits, but it also carries significant risks—such as spreading disinformation, racial discrimination, and potential privacy invasions. In this episode of Corruption, Crime and Compliance, Michael Volkov dives deep into the urgent need for corporate boards to monitor, address, and incorporate AI into their compliance programs, and the many facets that this entails.

You’ll hear Michael talk about:

  • AI is spreading like wildfire across industries, and with it comes a whole new set of risks. Many boards don’t fully understand these risks. It’s important to make sure that boards are educated about the potential and pitfalls of AI, and that they actively oversee the risks. This includes understanding their obligations under Caremark, which requires them to exercise diligent oversight and monitoring.
  • AI is a tantalizing prospect for businesses: faster, more accurate processes that can revolutionize operations. But with great power comes great responsibility. AI also comes with risks, like disinformation, bias, privacy invasion, and even mass layoffs. It’s a delicate balancing act that businesses need to get right.
  • Companies can’t just use AI, they have to be ready for it. That means adjusting their compliance policies and procedures to their specific AI risk profile, actively identifying and assessing those risks, and staying up-to-date on potential regulatory changes related to AI. As AI grows, the need for strong risk mitigation strategies before implementation becomes even more important.
  • The Caremark framework requires corporate boards to ensure that their companies comply with AI regulations. Recent cases, such as the Boeing safety oversight, demonstrate the severity of the consequences when boards fail to fulfill their responsibilities. As a result, boards must be proactive: ensure that board members have the technical expertise necessary, brief them on AI deployments, designate senior executives to be responsible for AI compliance, and ensure that there are clear channels for individuals to report issues.

 

KEY QUOTES

“Board members usually ask the Chief Information Security Officer or whoever is responsible for technology [at board meetings], ‘Are we doing okay?’ They don’t want to hear or get into all of the details, and then they move on. That model has got to change.”

 

“In this uncertain environment, stakeholders are quickly discovering the real and significant risks generated by artificial intelligence, and companies have to develop risk mitigation strategies before implementing artificial intelligence tools and solutions.”

 

“Board members should be briefed on existing and planned artificial intelligence deployments to support the company’s business and/or support functions. In other words, they’ve got to be notified, brought along that this is going to be a new tool that we’re using, ‘Here are the risks, here are the mitigation techniques.’”

 

Resources:

Michael Volkov on LinkedIn | Twitter

The Volkov Law Group

Categories
Compliance and AI

Compliance and AI – Julie Myers Wood on Navigating the AI Compliance Landscape: Mitigating Risks

What is the role of Artificial Intelligence in compliance? What about Machine Learning? Are you using ChatGPT? These questions are but three of the many questions we will explore in this exciting new podcast series, Compliance and AI. Hosted by Tom Fox, the award-winning Voice of Compliance, this podcast will look at how AI will impact compliance programs into the next decade and beyond. If you want to find out why the future is now, join Tom Fox on this journey to the frontiers of AI.

Welcome back to another exciting episode of our podcast, where we delve into the fascinating world of compliance and artificial intelligence (AI). Today, we have the pleasure of hosting Julie Myers Wood, CEO of Guidepost Solutions. With her extensive background in law and government positions, Julie brings a wealth of knowledge and insights to our discussion on the challenges and considerations of incorporating AI into compliance programs.

As compliance professionals, we play a vital role in ensuring the safety and security of our businesses. The integration of AI into compliance programs presents both challenges and opportunities. By understanding the tools, risks, and solutions associated with AI, we can adapt to the changing landscape and make informed decisions.

Let’s embrace this exciting era of AI while staying vigilant and proactive. The world is changing, and compliance professionals need to stay up to date to ensure the safety and security of our businesses. Thank you, Julie Myers Wood, for sharing your valuable insights, and we look forward to more enlightening discussions in the future!

Remember, compliance professionals are the co-pilots of our businesses, guiding us through the complexities of the AI revolution. Let’s not wait too long between podcasts and continue this journey together!

Key Highlights

  • Key Considerations for Compliance and AI
  • Importance of Inventorying Tools and Managing Risks
  • AI and Intellectual Property Protection
  • Challenges of Implementing AI
  • AI and Compliance

 Resources

Julie Myers Wood on LinkedIn

Guidepost Solutions

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Blog

Julie Myers Wood on Navigating the AI Compliance Landscape: Mitigating Risks

I recently had the opportunity to visit with Julie Myers Wood, CEO at Guidepost Solutions. With her extensive background in law and government positions, Julie brings a wealth of knowledge and insights to our discussion on the challenges and considerations of incorporating AI into compliance programs. We took a deep dive into the intersection of compliance and artificial intelligence (AI).

With generative AI coming at us at light speed, there are many things for a compliance professional to think about. Julie began with the first key step: take a high-level perspective, step back, and reflect on all the ways that AI can affect your company. You should ask several questions, including the following. What AI tools is the company using internally to support its operations, and does the company actually know about those tools? What is your company selling? Is your company selling tools that incorporate deep learning, generative AI, or other sorts of machine learning?

Equally important, what compliance work is each of your teams performing, and what compliance tools are being used? Do you have individuals freelancing at your company, trying to reduce their workload using GPT or something else without telling you, and perhaps exposing some of your code? And finally, how are criminals using generative AI to get into your work? It all comes back to understanding, from a high-level perspective, the various ways that AI can affect you.

Next, it is important to ask: do you know what all these tools are that the company is using? You need to obtain an inventory of the tools your employees are using. Compliance professionals need a comprehensive inventory of the tools being used within the company and must fully comprehend their capabilities and limitations. This may not be easy, particularly if your organization is using a mix of homegrown tools and tools available for sale on the open market. Your compliance team must understand what tools each part of the company is using, because only then can you fully understand the privacy or other regulatory risks that may be involved.

In this inventory, you also need to understand who owns the software tools. When do the licenses expire? How many seats do you have for your organization? Who owns the license keys, and when does the software become legacy? This understanding is crucial for effectively managing compliance and mitigating potential risks. It is also a very good business practice.

Generative AI is rapidly advancing, and compliance professionals must stay informed and proactive in addressing its implications. Julie highlights the need to be aware of the risks related to generative AI, export compliance, and other potential problems. By staying updated on the latest developments, compliance professionals can adapt to the changing landscape and make informed decisions.

There are potential dangers in integrating AI into businesses, and Julie offered solutions to mitigate them. One key solution involves retraining or supplementing the training of employees. Companies need to educate their workforce on the rules of the road and provide a safe environment for exploring and experimenting with generative AI. Julie pointed to PwC’s billion-dollar investment in AI, including retraining and proprietary platforms, as showcasing the importance of investing in employee development. However, smaller companies may face challenges in investing in generative AI and effectively implementing it.

AI is revolutionizing compliance by enabling effective analysis and interpretation of large amounts of data. Compliance professionals are excited about the potential of AI for predictive analytics and identifying trends and patterns. However, choosing the right tools for compliance is crucial, as market winners and losers can impact success. A key to success for the compliance team is collaboration between the operations and compliance functions when considering the use of AI.

Clear policies defining what can and cannot be done with AI are essential to protect intellectual property and ensure compliance. But it is not simply policies and procedures; it is targeted and effective training, coupled with ongoing communications, all aimed at educating employees about the risks and consequences of using AI improperly. Compliance professionals should encourage caution when downloading AI tools from the web and carefully review terms and conditions to avoid unintended consequences.


For the full podcast with Julie Myers Wood, check out Compliance and AI here.

Categories
Daily Compliance News

Daily Compliance News: August 18, 2023 – The Fake Uber Account Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance brings to you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

  • Ukraine ABC lessons from Afghanistan. (NPR)
  • Paxton allegedly created fake Uber account to engage in corruption. (Texas Tribune)
  • AI as big as the Internet? (Bloomberg)
  • Judge pauses mandatory religious training. (Reuters)

Categories
Daily Compliance News

Daily Compliance News: August 9, 2023 – The $555MM Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance brings to you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

  • Federal judge says we need world ABC court. (WaPo)
  • Zoom and AI training. (BBC)
  • Judge orders SW Airlines lawyers to take religious training. (Reuters)
  • More messaging app non-compliance fines. (WSJ)
Categories
Blog

E-com Surveillance: A Proactive Approach

In today’s rapidly expanding digital realm, keeping up with regulatory requirements in E-com surveillance is more than just a necessity—it’s a game-changer. As the world grapples with the challenges brought by the COVID-19 pandemic, efforts in ensuring compliance have dramatically shifted, impacting both personal and professional spaces. This, friends, has become a defining factor in not just maintaining, but enhancing compliance and risk management. Let’s delve into how we can proactively monitor communications, adapt to evolving channels, and leverage technology for our advantage while ensuring data security in cloud-based platforms. Here are some key steps:

  • Establishing a Robust Compliance Program
  • Proactively Monitoring Communications in E-Com Surveillance
  • Adapting to Evolving Communication Channels
  • Deploying AI in Compliance Monitoring

1. Establishing a Robust Compliance Program

With the increasing reliance on e-commerce due to the ongoing global health crisis, keeping up with regulatory compliance has become more of a challenge than ever before. Enhanced surveillance within the e-commerce spectrum has emerged as a critical aspect of any robust compliance program. Companies must diligently monitor all communication transactions to identify any potential misconduct early on. With technology continuously evolving, companies are faced with more diverse sources of data and communication channels than before.

To counteract this, advancements in technology have enabled compliance professionals to monitor these various sources more efficiently and focus on high-risk areas. With the proliferation of novel communication platforms, regulatory requirements have become more stringent, but also more complex to adhere to. AI has been instrumental in empowering compliance officers, allowing them to better concentrate their efforts. With its ability to filter and prioritize alerts based on risk levels, AI functionality is highly effective in optimizing the e-com surveillance process. Compliance functions must keep pace with the constant changes in the communication landscape, meaning that they need to be adaptable in capturing and recording all essential communications. Organizations must understand how crucial it is to establish a strong compliance program that aligns with their communication platforms and e-commerce operations. By leveraging high-tech solutions, like AI and machine learning, companies can better monitor and manage risks from a proactive stance, while simultaneously complying with regulatory requirements.

 2. Proactively Monitoring Communications 

In the ever-expanding universe of e-commerce, staying ahead of illicit activities such as fraud, theft, and other misconduct is vital. Key to this is the implementation of effective e-commerce surveillance in every organization, large or small. This involves the proactive monitoring and analyzing of all company communications, from emails to chat messages, for any signs of inappropriate behavior. With the ongoing proliferation of communication channels — each one another avenue for potential exploitation — it’s a gargantuan task that might seem overwhelming. However, thanks to the wonder of technology, we now have the means to keep pace with this turbulent environment. Modern advancements have made it possible to capture a vast array of data sources, despite the varying nature and extent of these channels.

 3. Adapting to Evolving Communication Channels

The digital era has seen an explosion in communication channels. From emails, social media, and chat platforms to video conferencing, employees now have myriad ways to communicate, both internally and externally. Consequently, e-com surveillance to monitor such communication pipelines and pin down potential misconduct becomes increasingly complex, yet more essential. Adapting to these evolving channels plays a key role in ensuring effective compliance and risk management. There are unique challenges that emerge with this diversity of communication channels. For instance, coded language used by employees and the capture of diverse data sources are some of the hurdles organizations face.

However, technology solutions are evolving as fast as the communication landscape. Key among these solutions is the use of AI and machine learning models, which cut through the noise to help compliance officers focus on high-risk areas. Regulators such as the SEC in the US and the FCA in the UK expect companies to capture, monitor, and record all communication channels. This means your business must keep up with people’s communication methods and ensure every dialogue is recorded. Why is this adaptation important? In a nutshell, the vastness and ever-evolving nature of digital communication channels pose a risk. The risk lies in the prospect of misconduct going unnoticed, regulatory guidelines being flouted, and, ultimately, organizations facing severe consequences.

Moreover, every new communication platform is an additional data source. Managing this increasing data effectively is crucial for any organization in the current digital age. Adapting to evolving communication channels is not just about managing current risks; it is also about equipping organizations with the necessary technological tools to capture, monitor, and manage potential risks that could emerge with future communication spheres. The progression ensures that there is no lag in surveillance and that organizations are always a step ahead in their risk management.

4. Deploying AI in Compliance Monitoring

Artificial intelligence (AI) and machine learning are critical technological advancements enabling companies to monitor manifold data sources efficiently. These technologies, and perhaps others down the road, are game-changers, empowering compliance officers to focus on high-risk areas and alerts and moving the compliance process from a detect mode to a prevent mode. Deploying these advanced methods may lead to more comprehensive data capture and monitoring, thereby promoting a seamless, integrated, and effective e-com surveillance mechanism. This is why the implementation of such a step is a necessity more than an option as we move forward in this data-driven age.

Why is this effective approach to e-com surveillance so crucial? Well, we live in an age of digital e-commerce and remote work after COVID-19, where communication channels have diversified and expanded beyond limits. To stay compliant with regulatory requirements, it is not enough just to keep an eye on traditional messaging. You must embrace these changes and adapt by efficiently monitoring all these channels. With technology such as AI and machine learning, you can create defensible and explainable models that show precisely why specific alerts were raised and others were not. This approach is the key to adapting to this ever-evolving world and meeting regulatory expectations, thereby enhancing your compliance protocols in the long run.

The importance of maintaining compliance with regulatory requirements in e-commerce surveillance, especially during this ongoing pandemic, cannot be overstated. As compliance authorities, you have the power to make a significant impact on your organization’s risk management. Today, we’ve delved into the necessity of a strong compliance program, the significance of proactively monitoring communications, the need to adapt to new communication mediums, the benefits of utilizing AI in compliance monitoring, and the importance of securing data on cloud platforms. Each of these steps is instrumental in achieving the desired state of compliance. Let this motivate you to continue striving for excellence in all your compliance efforts. After all, your dedication to strengthening these practices is not just about meeting regulations – it’s about fostering trust and reliability in your organization.

Categories
2 Gurus Talk Compliance

2 Gurus Talk Compliance – Episode 10 – Ethical Remote Workers Edition

What happens when two top compliance commentators get together? They talk compliance of course. Join Tom Fox and Kristy Grant-Hart in 2 Gurus Talk Compliance as they discuss the latest compliance issues in this week’s episode!

Tom and Kristy consider the possibility of an international anti-bribery court, challenges in enforcing judgments against countries without strong anti-corruption laws, and the United States’ unlikely participation. The European Commission issued an adequacy decision regarding data transfers between the US and EU, resolving a long-standing issue, but privacy advocate Max Schrems plans to challenge its validity. The importance of on-site due diligence and the value of on-site audits and cybersecurity disclosure were also explored. The benefits of remote work, global anti-corruption efforts, AI safeguards, and the dangers of zero tolerance policies were covered as well. The conversation provided insights into various compliance-related topics.

Highlights Include

  • World ABC Court
  • No DOJ control on Cognizant investigation.
  • SEC adopts Cyber disclosure rules.
  • Fight against corruption in Ukraine.
  • Goldilocks Compliance.
  • Data Privacy Framework Program Launches New Website Enabling U.S. Companies to Participate in Cross-Border Data Transfers
  • Site Visits: Sometimes the Best Due Diligence is Done on Foot
  • New Data Reveals that Remote Workers are Likely More Ethical than their Office Counterparts.
  • White House Says Amazon, Google, Meta, Microsoft Agree to AI Safeguards
  • Man Steals Vehicle, Crashes it into Building during Search for WiFi Connection

 Resources 

  1. WSJ Risk and Compliance Journal
  2. FCPA Blog
  3. Radical Compliance
  4. Dept. Of Commerce Press Release
  5. WSJ
  6. Conflicts of Interest Blog
  7. GAB
  8. Fast Company
  9. Fox 35 Orlando

Connect with Kristy Grant-Hart on LinkedIn

Spark Consulting

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Blog

Implementing AI Technology for Data Driven Compliance

In the age of information, data is the lifeblood of any organization and now every compliance function. The integrity, accessibility, and security of this data are crucial for the effective functioning of businesses, making its compliance a top priority. As digital landscapes continue to evolve, maintaining compliance becomes increasingly complex. Enter AI technology. With the ability to streamline processes and make sense of unstructured data, AI technology is fast becoming the go-to solution for data compliance in organizations. This blog post will delve into how you can implement AI technology for data compliance within your organization, enhancing your compliance monitoring and proactively detecting potential risks.

Here are the key steps:

  • Understand Your Data Landscape: Get familiar with the types of data your organization handles – structured, semi-structured, and unstructured. Identify the sources of these data, like emails, documents, or messages.
  • Identify the Risks: Be aware of the potential risks associated with each type of data. These could include the disclosure of sensitive information, potential misconduct, or hidden threats, especially within unstructured data.
  • Implement AI Technology: Choose a technology partner that can assist in implementing AI capabilities for data compliance. The AI tools can help with data cleansing, reducing false positives and supporting the detection and even the prevention of compliance violations.
  • Establish Prevention Measures: Create a culture where prevention is given priority over detection.

 1. Understand Your Data Landscape

Stepping into the complexities of your data landscape is a pivotal component of maintaining an ironclad compliance program. As the backbone of any organization, data comes in all shapes and sizes: structured, semi-structured, and unstructured, and from various sources such as emails, Word documents, and text messages. This varied landscape can often be a figurative minefield, teeming with potential hazards. This makes it crucial to sift through this landscape, charting out the contours of your information trove, identifying relevant data, and understanding how it interacts with your needs and obligations.

This first step sets the foundation for implementing a more efficient and effective compliance monitoring system within your organization. Integrating AI technology, as emphasized, is a game-changing strategy for tackling the challenges presented by the data landscape. There is an even greater amount of data that compliance professionals must grapple with due to remote work and newer communication platforms. With AI-powered data cleansing, this unstructured data can be whittled down to pertinent, human-generated content, thus increasing efficiency and reducing instances of false positives.

By customizing models and policies to align with your organization’s risks, and by utilizing AI to screen out irrelevant content and target potential misconduct, organizations not only detect but also prevent instances of non-compliance. The importance of understanding your organization’s data landscape extends beyond the detect prong of a best practices compliance program; it helps to predict where potential hazards might arise and informs the preventative measures you need to take. Implementing an AI-powered solution that is tailored to your unique organizational landscape means improved efficiency, a minimized risk of non-compliance, and the potential for a more predictive and preventative approach to compliance management.

2. Identify the Risks

Understanding unstructured data is not merely necessary for everyday operations; it forms the backbone of a compliance monitoring system. Recognizing that sifting through vast volumes of data can be onerous, AI-powered data cleansing capabilities significantly streamline this process by eliminating duplicative content, junk, and non-human-generated content from communications. This helps compliance teams focus on content that is risky and relevant, thereby minimizing the number of false positives.

Indeed, this is the very foundation upon which a corporate compliance function can build out its strategy for risk mitigation. Once these risks are identified, compliance teams can take action to address them promptly, thereby creating a healthy culture of compliance within the organization. This kind of proactive approach serves as a deterrent to potential threats that could affect your company’s culture of doing business ethically and in compliance. Moreover, through effective monitoring, it sets clear expectations about the kinds of behaviors that are tolerated within the organization and those which are not. All of this means that, as a compliance professional, you must address the challenges of unstructured data and risk identification as an integral part of compliance monitoring going forward.

3. Implementing the Tech

Navigating the massive waves of unstructured data within any organization can be a challenging task. With the advent of remote working conditions and a persistent reliance on diverse communication platforms, this data influx has grown exponentially, making effective data management vital for a functioning compliance monitoring system. The collected data can involve communication records, report alerts, investigations, and training data.

As a recovering lawyer, I can attest that the assimilation and interpretation of such fragmented information can be taxing on compliance officers, particularly if they have a traditional legal training. Not only is access to every corporate data silo mandated by the Department of Justice in the Evaluation of Corporate Compliance Programs, but the extraction of relevant insights from such a vast data pool can indeed be an uphill task.

4. Establish Prevention Measures

When faced with unanticipated challenges like investigations, litigations, privacy compliance, and stringent government requests, compliance professionals must be equipped with a robust data analytics system. Your data solution or tool should provide an AI-powered data cleansing capability that eradicates duplicative content, junk, and non-human-generated data. The key is for your solution to swiftly categorize risky information, which is then processed and referred to the compliance wing. Even more significantly, swift detection of misconduct and a sound remediation process help demonstrate a robust corporate culture in which accountability is the cornerstone.

Unarguably, the essence of navigating unstructured data demands exclusive attention. What might come across as a mere pile of unwanted information might be a minefield of hidden risks and prominent disclosure threats that have the potential to jeopardize the whole organization. An exclusive focus on the prevention of such risks is the precursor of a strong foundation where necessary expectations and behavioral norms are set. While some may argue about the necessity of such elaborate processes, their significance can only be understood by the way they shape our ability to identify hidden threats. These alone could potentially be the building blocks for a more secure, responsible, and ethically functioning organization.

As compliance officers, the journey towards mastering the realm of unstructured data and successfully implementing AI technology for data compliance can be a game-changer. This task, though complex, is pivotal in ensuring the integrity of your organization in today’s competitive and highly regulated business environment. The steps shared in this blog post are a roadmap to success, guiding you through understanding your data, identifying potential risks, and harnessing the power of AI. These measures will undoubtedly empower you to establish a culture of prevention in your organization. So, take a leap of faith, embrace AI technology, and watch how it revolutionizes your compliance monitoring and risk detection efforts.

Categories
Compliance and AI

Compliance and AI – Gordon Firemark on AI & ChatGPT for Podcasters

What is the role of Artificial Intelligence in compliance? What about Machine Learning? Are you using ChatGPT? These questions are but three of the many questions we will explore in this exciting new podcast series, Compliance and AI. Hosted by Tom Fox, the award-winning Voice of Compliance, this podcast will look at how AI will impact compliance programs into the next decade and beyond. If you want to find out why the future is now, join Tom Fox on this journey to the frontiers of AI.

AI is becoming increasingly prevalent in the creative industry, and with it comes a range of legal implications. Tom Fox and Gordon Firemark recently discussed the legal implications of AI and how it can be used to create deceptive and misleading content on their podcast, Absolutely!

Tom and Gordon believe that creatives should be fairly compensated for their work and that children should be taught about the business side of art. As Tom puts it, “If someone creates something of value, they should receive fair compensation for it.” They also advocate for exposing children to different ideas and lifestyles.

AI has the potential to create deepfake videos and audio, which can be difficult to distinguish from the real thing. This technology could be used to create deceptive and misleading content, which may have legal implications. For instance, the Federal Trade Commission has rules on false advertising, and other regulations come into play at the state level.

Additionally, intellectual property issues can arise when AI is used to summarize a book, as the courts and the Copyright Office have stated that AI-generated material is not copyrightable. The recent Supreme Court ruling in Andy Warhol Foundation v. Goldsmith found that the use of the photograph was not transformative enough and was commercial in nature; thus, the photographer had a valid copyright claim.

Tom and Gordon are both planning to attend Podcast Movement in Denver in late August. Gordon and another podcasting attorney named Lindsay Bowen will be presenting together on either a contract tear down or a mock negotiation of a deal for creatives. The panel will discuss common legal issues that need to be negotiated and worked out in those kinds of deals.

AI is a powerful and effective tool, but it is important to be aware of the potential legal implications that come with it. Creatives should be fairly compensated for their work and children should be exposed to different ideas and lifestyles. For more information on AI and the legal implications, make sure to check out Tom and Gordon’s presentation at Podcast Movement in Denver.

Key Highlights

·      AI and ChatGPT

·      AI and Copyright Issues

·      Fair Compensation for Creatives

·      Legal Issues in Art

Resources

Gordon Firemark on LinkedIn

Firemark Law Firm

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Blog

Auditing AI

The recent kerfuffle over an AI tool that interpreted an instruction to make a woman look more professional as making her look Caucasian has raised important questions about how to audit AI code to avoid undesirable outcomes. AI tools behave in a fundamentally different way from most other types of apps and systems, and auditing AI code for implicit bias is not yet feasible. Matt Kelly recently wrote a blog post on this topic at Radical Compliance. I thought it would make a great podcast, so this week's episode of Compliance into the Weeds is dedicated to it, and it seemed important enough to blog about as well.

It started when MIT grad student Rona Wang asked an AI tool called Playground AI to edit a photo of herself wearing an MIT T-shirt to look 'more professional'. Rather than swapping the T-shirt for business attire, the tool interpreted the instruction as making her look Caucasian. Wang posted a before-and-after comparison of her photo on Twitter, which set off a kerfuffle in the AI world over how this happened. The CEO of Playground AI responded to Wang on Twitter, saying, "We're quite displeased with this and hope to solve it."

We began with a discussion of the implications of implicit bias in AI code. Matt suggested that the model behind the app may have been influenced by the disproportionate number of white people on LinkedIn; the outcome may not be the fault of the AI program so much as a reflection of structural bias and racism in the world. Matt believes that, at this point, it is impossible for a human to audit the code of AI programs like ChatGPT, which evaluates data according to 1.76 trillion different parameters. Implicit bias in AI code cannot be eliminated by simply correcting a few parameters; Matt compared the difficulty of eliminating implicit bias in AI code to the difficulty of eliminating racism in the human brain.

AI can weigh 1.76 trillion parameters of data, but it is difficult to audit for an ethical outcome. AI can absorb and reproduce the structural racism and inequities that exist in the world, and it can end up filtering out images that are not representative of the population as a whole. Auditing AI is difficult because few people know how to design and audit these programs, and even though AI decisions may have life-and-death consequences, there is no reliable way to audit them yet.

Companies using AI in the hiring process must decide whether to scrap the AI tool and use another, rely on human HR staff and recruiters, or have auditors and coders sit down and try to figure out the problem. There is also a risk of implicit bias whenever someone must define the pool of data the AI is looking at. New York City has a regulation requiring employers to audit AI tools used in the hiring process at least annually, but this is only a small step toward addressing implicit bias in AI.

Auditing AI code for implicit bias is a complex process. AI tools used in the hiring process range from simple keyword matching to systems like ChatGPT. While it is important for companies to audit their AI tools, it is just as important to scrutinize the data used to train them: if the training data is biased, the AI will be biased as well. To guard against this, companies should consider training on a diverse data set and conducting regular audits of their AI tools.
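While auditing the code itself may not yet be feasible, auditing a tool's outcomes can start with simple selection-rate arithmetic. The sketch below is a minimal illustration with made-up candidate data (the group names and numbers are hypothetical): it computes each group's selection rate and its impact ratio relative to the best-performing group, a core metric in bias audits of the kind New York City now requires, where ratios below the four-fifths (0.8) threshold are commonly flagged for review.

```python
from collections import defaultdict

def impact_ratios(records):
    """Compute per-group selection rates and impact ratios.

    `records` is a list of (group, selected) pairs, where `selected`
    is True if the AI tool advanced the candidate. The impact ratio
    is a group's selection rate divided by the highest group's rate.
    """
    totals = defaultdict(int)
    chosen = defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        if selected:
            chosen[group] += 1

    rates = {g: chosen[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: (rates[g], rates[g] / best) for g in rates}

# Hypothetical screening outcomes from an AI resume filter.
records = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40   # 60% selected
    + [("group_b", True)] * 30 + [("group_b", False)] * 70  # 30% selected
)
for group, (rate, ratio) in sorted(impact_ratios(records).items()):
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

In this made-up example, group_b's impact ratio of 0.50 falls well below the 0.8 threshold, the kind of result that would prompt the harder questions discussed above about the tool and its training data.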

The Wang incident is a reminder of why auditing AI for undesirable outcomes matters, even though reliable audits of AI code for implicit bias are not yet feasible. Companies using AI in hiring must decide whether to scrap a problematic tool, fall back on human recruiters, or have auditors and coders dig into the problem, and they must recognize the risk of implicit bias whenever someone defines the pool of data the AI draws on. New York City's requirement that employers audit hiring AI tools at least annually is a start, but only a small one.

For the complete discussion of this issue check out this week’s episode of Compliance into the Weeds.