Categories
Blog

AI and GDPR

Artificial Intelligence (AI) has revolutionized various industries, but with great power comes great responsibility. Regulators in the European Union (EU) are taking a proactive approach to address compliance and data protection issues surrounding AI and generative AI. Recent cases, such as Google’s AI tool, Bard, being temporarily suspended in the EU, have highlighted the urgent need for regulation in this rapidly evolving field. I recently had the opportunity to visit with GDPR maven Jonathan Armstrong on this topic. In this blog post, we will delve into our conversations about some of the key concerns raised about data and privacy in generative AI, the importance of transparency and consent, and the potential legal and financial implications for organizations that fail to address these concerns.

One of the key issues in the AI landscape is obtaining informed consent from users. The recent scrutiny faced by video conferencing platform Zoom serves as a stark reminder of the importance of transparency and consent practices. While there has been no official investigation into Zoom’s compliance with informed consent requirements, the company has retracted its initial statements and is likely considering how to obtain consent from users.

It is essential to recognize that obtaining consent extends not only to those who host a Zoom call but also to those who are invited to join the call. Unfortunately, there has been no on-screen warning about consent when using Zoom, leaving users in the dark about the data practices involved. This lack of transparency can lead to significant legal and financial penalties, as over 70% of GDPR fines involve a lack of transparency by the data controller.

Generative AI heavily relies on large pools of data for training, which raises concerns about copyright infringement and the processing of individuals’ data without consent. For instance, Zoom’s plan to use recorded Zoom calls to train AI tools may violate GDPR’s requirement of informed consent. Similarly, Getty Images has expressed concerns about its copyrighted images being used without consent to train AI models.

Websites often explicitly prohibit scraping data for training AI models, emphasizing the need for organizations to respect copyright laws and privacy regulations. Regulators are rightfully concerned about AI processing individuals’ data without consent or knowledge, as well as the potential for inaccurate data processing. Accuracy is a key principle of GDPR, and organizations using AI must conduct thorough data protection impact assessments to ensure compliance.

Several recent cases demonstrate the regulatory focus on AI compliance and transparency. In Italy, rideshare and food delivery applications faced investigations and suspensions for their AI practices. Spain has examined the use of AI in recruitment processes, highlighting the importance of transparency in the selection process. Google’s Bard case, similar to the Facebook dating case, faced temporary suspension in the EU due to the lack of a mandatory data protection impact assessment (DPIA).

It is concerning that many big tech providers fail to engage with regulators or produce the required DPIA for their AI applications. This lack of compliance and transparency poses significant risks for organizations, not just in terms of financial penalties but also potential litigation risks in the hiring process.

To navigate the compliance and data protection challenges posed by AI, organizations must prioritize transparency, fairness, and lawful processing of data. Conducting a data protection impact assessment is crucial, especially when AI is used in Know Your Customer (KYC), due diligence, and job application processes. If risks cannot be resolved or remediated internally, it is advisable to consult regulators and include timings for such consultations in project timelines.

For individuals, it is essential to be aware of the terms and conditions associated with AI applications. In the United States, informed consent is often buried within lengthy terms and conditions, leading to a lack of understanding and awareness. By being vigilant and informed, individuals can better protect their privacy and data rights.

As AI continues to transform industries, compliance and data protection must remain at the forefront of technological advancements. Regulators in the EU are actively addressing the challenges posed by AI and generative AI, emphasizing the need for transparency, consent, and compliance with GDPR obligations. Organizations and individuals must prioritize data protection impact assessments, engage with regulators when necessary, and stay informed about the terms and conditions associated with AI applications. By doing so, we can harness the power of AI while safeguarding our privacy and ensuring ethical practices in this rapidly evolving field.

Categories
Data Driven Compliance

Data Driven Compliance: Julie Myers Wood – Using AI for Data Driven Compliance

Are you struggling to keep up with the ever-changing compliance landscape in your business? Look no further than the award-winning Data Driven Compliance podcast. Hosted by Tom Fox, it features in-depth conversations about the uses of data and data analytics in compliance programs. Data Driven Compliance is back with another exciting episode. The intersection of law, compliance, and data is becoming increasingly important in the world of cross-border transactions and mergers and acquisitions.

In this podcast episode, Tom Fox and Julie Myers Wood, CEO at Guidepost Solutions, take a deep dive into the intersection of compliance and generative AI and how this intersection will lead to more data-driven compliance. Wood emphasizes the importance of understanding the various ways AI can impact a company, including internal use, sales, compliance tools, freelancers, and criminal exploitation. Compliance teams need to have a comprehensive inventory of the tools being used and understand the capabilities and limitations of AI to ensure compliance and mitigate risks.

Wood discussed the need for companies to be aware of the potential risks associated with AI and to have clear policies and procedures in place to protect intellectual property. She also discussed the importance of employee retraining and thoughtful decision-making when integrating AI into business practices. Overall, the podcast provides valuable insights into the challenges and considerations of incorporating AI into compliance programs, emphasizing the need for compliance professionals to adapt and stay informed.

Highlights Include

·      Key Considerations for Compliance and AI

·      Importance of Inventorying Tools and Managing Risks

·      AI and Intellectual Property Protection

·      Challenges of Implementing AI

·      AI and Compliance

Resources:

Julie Myers Wood on LinkedIn

Guidepost Solutions

Tom Fox

Connect with me on the following sites:

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Daily Compliance News

Daily Compliance News: September 5, 2023 – The Pig-Butchering Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings to you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

  • US sanctions Russian company for selling rockets to North Korea. (WSJ)
  • Pig-butchering and crypto. (WSJ)
  • Using AI to improve workplace safety. (WSJ)
  • Do you need to know? (WSJ)
Categories
Compliance and AI

Compliance and AI – Jonathan Armstrong on Unleashing Generative AI: Privacy, Copyright, and Compliance

What is the role of Artificial Intelligence in compliance? What about Machine Learning? Are you using ChatGPT? These questions are but three of the many questions we will explore in this exciting new podcast series, Compliance and AI. Hosted by Tom Fox, the award-winning Voice of Compliance, this podcast will look at how AI will impact compliance programs into the next decade and beyond. If you want to find out why the future is now, join Tom Fox on this journey to the frontiers of AI.

Welcome back to another exciting episode of our podcast, where we delve into the fascinating world of compliance and artificial intelligence (AI). Today I am joined by Jonathan Armstrong from Cordery Compliance to discuss how regulators in the EU are looking at AI.

Regulators in the EU are taking action to address the use of artificial intelligence (AI) and generative AI. A recent case involving Google’s AI tool, Bard, being temporarily suspended in the EU highlights the need for regulation and compliance in this rapidly evolving field. Concerns are raised about data and privacy, as generative AI uses large amounts of data, potentially infringing copyright and processing individuals’ data without consent. It is crucial for organizations to conduct data protection impact assessments and consider GDPR obligations. Transparency and consent are also key, with Zoom’s data practices being questioned in terms of transparency and obtaining user consent. The conversation emphasizes the potential legal and financial consequences organizations face for non-compliance.

Remember, compliance professionals are the co-pilots of our businesses, guiding us through the complexities of the AI revolution. Let’s not wait too long between podcasts and continue this journey together!

Key Highlights

·      Concerns with Bard

·      Regulators’ Actions on AI

·      Concerns over Data and Privacy in Generative AI

·      Transparency and Consent in Zoom’s Data Practices

 Resources

For more information on the issues raised in this podcast, check out the Cordery Compliance News Section. For more information on Cordery Compliance, go to their website here. Also check out the GDPR Navigator, one of the top resources for GDPR compliance, by clicking here.

Connect with Jonathan Armstrong

●      Twitter

●      LinkedIn

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Corruption, Crime and Compliance

Board Oversight and Monitoring of AI Risks

As companies rapidly adopt artificial intelligence (AI), it becomes paramount to have robust governance frameworks in place. Not only can AI bring about vast business benefits, but it also carries significant risks—such as spreading disinformation, racial discrimination, and potential privacy invasions. In this episode of Corruption, Crime and Compliance, Michael Volkov dives deep into the urgent need for corporate boards to monitor, address, and incorporate AI into their compliance programs, and the many facets that this entails.

You’ll hear Michael talk about:

  • AI is spreading like wildfire across industries, and with it comes a whole new set of risks. Many boards don’t fully understand these risks. It’s important to make sure that boards are educated about the potential and pitfalls of AI, and that they actively oversee the risks. This includes understanding their obligations under Caremark, which requires them to exercise diligent oversight and monitoring.
  • AI is a tantalizing prospect for businesses: faster, more accurate processes that can revolutionize operations. But with great power comes great responsibility. AI also comes with risks, like disinformation, bias, privacy invasion, and even mass layoffs. It’s a delicate balancing act that businesses need to get right.
  • Companies can’t just use AI, they have to be ready for it. That means adjusting their compliance policies and procedures to their specific AI risk profile, actively identifying and assessing those risks, and staying up-to-date on potential regulatory changes related to AI. As AI grows, the need for strong risk mitigation strategies before implementation becomes even more important.
  • The Caremark framework requires corporate boards to ensure that their companies comply with AI regulations. Recent cases, such as the Boeing safety oversight, demonstrate the severity of the consequences when boards fail to fulfill their responsibilities. As a result, boards must be proactive: ensure that board members have the technical expertise necessary, brief them on AI deployments, designate senior executives to be responsible for AI compliance, and ensure that there are clear channels for individuals to report issues.

 

KEY QUOTES

“Board members usually ask the Chief Information Security Officer or whoever is responsible for technology [at board meetings], ‘Are we doing okay?’ They don’t want to hear or get into all of the details, and then they move on. That model has got to change.”

 

“In this uncertain environment, stakeholders are quickly discovering the real and significant risks generated by artificial intelligence, and companies have to develop risk mitigation strategies before implementing artificial intelligence tools and solutions.”

 

“Board members should be briefed on existing and planned artificial intelligence deployments to support the company’s business and or support functions. In other words, they’ve got to be notified, brought along that this is going to be a new tool that we’re using, ‘Here are the risks, here are the mitigation techniques.’”

 

Resources:

Michael Volkov on LinkedIn | Twitter

The Volkov Law Group

Categories
Compliance and AI

Compliance and AI-Julie Myers Wood on Navigating the AI Compliance Landscape: Mitigating Risks

What is the role of Artificial Intelligence in compliance? What about Machine Learning? Are you using ChatGPT? These questions are but three of the many questions we will explore in this exciting new podcast series, Compliance and AI. Hosted by Tom Fox, the award-winning Voice of Compliance, this podcast will look at how AI will impact compliance programs into the next decade and beyond. If you want to find out why the future is now, join Tom Fox on this journey to the frontiers of AI.

Welcome back to another exciting episode of our podcast, where we delve into the fascinating world of compliance and artificial intelligence (AI). Today, we have the pleasure of hosting Julie Myers Wood, CEO of Guidepost Solutions. With her extensive background in law and government positions, Julie brings a wealth of knowledge and insights to our discussion on the challenges and considerations of incorporating AI into compliance programs.

As compliance professionals, we play a vital role in ensuring the safety and security of our businesses. The integration of AI into compliance programs presents both challenges and opportunities. By understanding the tools, risks, and solutions associated with AI, we can adapt to the changing landscape and make informed decisions.

Let’s embrace this exciting era of AI while staying vigilant and proactive. The world is changing, and compliance professionals need to stay up to date to ensure the safety and security of our businesses. Thank you, Julie Myers Wood, for sharing your valuable insights, and we look forward to more enlightening discussions in the future!

Remember, compliance professionals are the co-pilots of our businesses, guiding us through the complexities of the AI revolution. Let’s not wait too long between podcasts and continue this journey together!

Key Highlights

  • Key Considerations for Compliance and AI
  • Importance of Inventorying Tools and Managing Risks
  • AI and Intellectual Property Protection
  • Challenges of Implementing AI
  • AI and Compliance

 Resources

Julie Myers Wood on LinkedIn

Guidepost Solutions

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Blog

Julie Myers Wood on Navigating the AI Compliance Landscape: Mitigating Risks

I recently had the opportunity to visit with Julie Myers Wood, CEO at Guidepost Solutions. With her extensive background in law and government positions, Julie brings a wealth of knowledge and insights to our discussion on the challenges and considerations of incorporating AI into compliance programs. We took a deep dive into the intersection of compliance and artificial intelligence (AI).

With generative AI coming at us at light speed, there are so many things for a compliance professional to think about. Julie began with the first key step: take a high-level perspective, stepping back to reflect on all the ways that AI can affect your company. You should ask several questions, including the following. What AI tools is the company using internally, and does it actually know about the tools supporting its operations? What is your company selling? Is your company selling tools that incorporate deep learning, generative AI, or other sorts of machine learning?

Equally importantly, what compliance tasks is each of your teams performing, and what compliance tools are being used? Do you have individuals at your company freelancing, trying to reduce their workload with GPT or something else without telling you, and perhaps exposing some of your code? And finally, how are criminals using generative AI to get into your work? It all comes back to asking, from a high-level perspective, what are the various ways that AI can affect you?

Next, it is important to ask: do you know what all these tools are that the company is using? You need to obtain an inventory of the tools your employees are using. Compliance professionals need a comprehensive inventory of the tools being used within the company and must fully comprehend their capabilities and limitations. This may not be easy, particularly if your organization is using a mix of homegrown tools and tools available for sale on the open market. Your compliance team must understand what tools each part of the company is using, because only then can you fully understand the privacy or other regulatory risks that may be involved.

In this inventory, you also need to understand who owns the software tools. When do the licenses expire? How many seats do you have for your organization? Who owns the license keys, and when does the software age into legacy status? This understanding is crucial for effectively managing compliance and mitigating potential risks. It is also a very good business practice.
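The inventory Julie describes lends itself to a simple structured record per tool. The sketch below is a minimal illustration of that idea, assuming nothing beyond the questions raised above; the field names, thresholds, and example tools are hypothetical, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ToolRecord:
    name: str
    business_owner: str          # who owns the tool internally
    license_key_holder: str      # who holds the license keys
    seats: int                   # seats licensed for the organization
    license_expiry: date         # when the license expires
    homegrown: bool = False      # built in-house vs. bought on the open market
    ai_features: list = field(default_factory=list)  # e.g. ["generative"]

def review_flags(inventory, today, horizon_days=90):
    """Flag records needing compliance attention: expiring licenses,
    missing ownership, or AI features that warrant a risk review."""
    flags = []
    for t in inventory:
        if (t.license_expiry - today).days <= horizon_days:
            flags.append((t.name, "license expiring soon"))
        if not t.business_owner:
            flags.append((t.name, "no named owner"))
        if t.ai_features:
            flags.append((t.name, "AI features - review privacy/regulatory risk"))
    return flags

# Hypothetical inventory entries for illustration only
inventory = [
    ToolRecord("ChatDraft", "Legal Ops", "IT", 25, date(2024, 1, 15),
               ai_features=["generative"]),
    ToolRecord("HomegrownETL", "", "Data Eng", 0, date(2026, 6, 1),
               homegrown=True),
]
print(review_flags(inventory, today=date(2023, 12, 1)))
```

Even a lightweight register like this makes it straightforward to answer the ownership, seat-count, and expiry questions above, and to route AI-capable tools to a privacy or regulatory review.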

Generative AI is rapidly advancing, and compliance professionals must stay informed and proactive in addressing its implications. Julie highlights the need to be aware of the risks related to generative AI, export compliance, and other potential problems. By staying updated on the latest developments, compliance professionals can adapt to the changing landscape and make informed decisions.

There are potential dangers in integrating AI into businesses, and Julie offers solutions to mitigate them. One key solution involves retraining or supplementing the training of employees. Companies need to educate their workforce on the rules of the road and provide a safe environment for exploring and experimenting with generative AI. Julie pointed to PwC’s billion-dollar investment in AI, including retraining and proprietary platforms, as showcasing the importance of investing in employee development. However, smaller companies may face challenges in investing in generative AI and implementing it effectively.

AI is revolutionizing compliance by enabling effective analysis and interpretation of large amounts of data. Compliance professionals are excited about the potential of AI for predictive analytics and for identifying trends and patterns. However, choosing the right tools for compliance is crucial, as market winners and losers can impact success. A key to success for the compliance team is collaboration between operations and compliance teams when considering the use of AI.

Clear policies defining what can and cannot be done with AI are essential to protect intellectual property and ensure compliance. But it is not simply policies and procedures; it is targeted and effective training, coupled with ongoing communications. All of this should be aimed at educating employees about the risks and consequences of using AI improperly. Compliance professionals should encourage caution when downloading AI tools from the web and carefully review terms and conditions to avoid unintended consequences.

As compliance professionals, we play a vital role in ensuring the safety and security of our businesses. The integration of AI into compliance programs presents both challenges and opportunities. By understanding the tools, risks, and solutions associated with AI, we can adapt to the changing landscape and make informed decisions.

For the full podcast with Julie Myers Wood, check out Compliance and AI here.

Categories
Daily Compliance News

Daily Compliance News: August 18, 2023 – The Fake Uber Account Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance brings to you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

·       Ukraine ABC lessons from Afghanistan. (NPR)

·       Paxton allegedly created fake Uber account to engage in corruption.  (Texas Tribune)

·       AI as big as the Internet? (Bloomberg)

·       Judge pauses mandatory religious training.  (Reuters)

Categories
Daily Compliance News

Daily Compliance News: August 9, 2023 – The $555MM Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance brings to you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

  • Federal judge says we need a world ABC court. (WaPo)
  • Zoom and AI training. (BBC)
  • Judge orders Southwest Airlines lawyers to take religious training. (Reuters)
  • More messaging app non-compliance fines. (WSJ)
Categories
Blog

E-com Surveillance: A Proactive Approach

In today’s rapidly expanding digital realm, keeping up with regulatory requirements in E-com surveillance is more than just a necessity—it’s a game-changer. As the world grapples with the challenges brought by the COVID-19 pandemic, efforts in ensuring compliance have dramatically shifted, impacting both personal and professional spaces. This, friends, has become a defining factor in not just maintaining, but enhancing compliance and risk management. Let’s delve into how we can proactively monitor communications, adapt to evolving channels, and leverage technology for our advantage while ensuring data security in cloud-based platforms. Here are some key steps:

  • Establishing a Robust Compliance Program
  • Proactively Monitoring Communications in E-Com Surveillance
  • Adapting to Evolving Communication Channels
  • Deploying AI in Compliance Monitoring

1. Establishing a Robust Compliance Program

With the increasing reliance on e-commerce due to the ongoing global health crisis, keeping up with regulatory compliance has become more of a challenge than ever before. Enhanced surveillance within the e-commerce spectrum has emerged as a critical aspect of any robust compliance program. Companies must diligently monitor all communication transactions to identify any potential misconduct early on. With technology continuously evolving, companies are faced with more diverse sources of data and communication channels than before.

To counteract this, advancements in technology have enabled compliance professionals to monitor these various sources more efficiently and focus on high-risk areas. With the proliferation of novel communication platforms, regulatory requirements have become more stringent, but also more complex to adhere to. AI has been instrumental in empowering compliance officers, allowing them to better concentrate their efforts. With its ability to filter and prioritize alerts based on risk levels, AI functionality is highly effective in optimizing the e-com surveillance process. Compliance functions must keep pace with the constant changes in the communication landscape, meaning that they need to be adaptable in capturing and recording all essential communications.

Organizations must understand how crucial it is to establish a strong compliance program that aligns with their communication platforms and e-commerce operations. By leveraging high-tech solutions, like AI and machine learning, companies can better monitor and manage risks from a proactive stance while simultaneously meeting regulatory requirements.

2. Proactively Monitoring Communications

In the ever-expanding universe of e-commerce, staying ahead of illicit activities such as fraud, theft, and other misconduct is vital. Key to this is the implementation of effective e-commerce surveillance in every organization, large or small. This involves the proactive monitoring and analyzing of all company communications, from emails to chat messages, for any signs of inappropriate behavior. With the ongoing proliferation of communication channels — each one another avenue for potential exploitation — it’s a gargantuan task that might seem overwhelming. However, thanks to the wonder of technology, we now have the means to keep pace with this turbulent environment. Modern advancements have made it possible to capture a vast array of data sources, despite the varying nature and extent of these channels.

 3. Adapting to Evolving Communication Channels

The digital era has seen an explosion in communication channels. From emails, social media, and chat platforms to video conferencing, employees now have myriad ways to communicate, both internally and externally. Consequently, e-com surveillance to monitor such communication pipelines and pin down potential misconduct becomes increasingly complex, yet more essential. Adapting to these evolving channels plays a key role in ensuring robust compliance and risk management. Unique challenges emerge with this diversity of communication channels: for instance, coded language used by employees and the capture of diverse data sources are some of the hurdles organizations face.

However, technology solutions are evolving as fast as the communication landscape. Key among these solutions is the use of AI and machine learning models, which cut through the noise to help compliance officers focus on high-risk areas. Regulators such as the SEC in the US and the FCA in the UK expect companies to capture, monitor, and record all communication channels. This means your business must keep up with people’s communication methods and ensure every dialogue is recorded. Why is this adaptation important? In a nutshell, the vastness and ever-evolving nature of digital communication channels pose a risk. The risk lies in the prospect of misconduct going unnoticed, regulatory guidelines being flouted, and ultimately, organizations facing severe consequences.

Moreover, every new communication platform is an additional data source. Managing this increasing data effectively is crucial for any organization in the current digital age. Adapting to evolving communication channels is not just about managing current risks; it is also about equipping organizations with the necessary technological tools to capture, monitor, and manage potential risks that could emerge with future communication spheres. The progression ensures that there is no lag in surveillance and that organizations are always a step ahead in their risk management.

4. Deploying AI in Compliance Monitoring

Artificial intelligence (AI) and machine learning are critical technological advancements enabling companies to monitor manifold data sources efficiently. These technologies, and perhaps others down the road, are game-changers, empowering compliance officers to focus on high-risk areas and alerts and moving the compliance process from detect mode to prevent mode. Deploying these advanced methods may lead to more comprehensive data capture and monitoring, thereby promoting a seamless, integrated, and effective e-com surveillance mechanism. This is why such a step is a necessity more than an option as we move forward in this data-driven age.

Why is this approach to e-com surveillance so crucial? Well, we live in an age of digital e-commerce and remote work after COVID-19, where communication channels have diversified and expanded beyond limits. To stay compliant with regulatory requirements, it is not enough just to keep an eye on traditional messaging. You must embrace these changes and adapt by efficiently monitoring all these channels. With technology such as AI and machine learning, you can create defensible and explainable models that can show precisely why specific alerts were raised and others weren’t. This approach is the key to adapting to this ever-evolving world and meeting regulatory expectations, thereby enhancing your compliance protocols in the long run.
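As a toy illustration of what a defensible, explainable alert model can mean in practice, the transparent rule-based scorer below records the reason each alert fired. The keywords, weights, and threshold are purely illustrative assumptions, not any regulator's or vendor's actual model:

```python
# Each rule contributes a weighted score and a human-readable reason,
# so every alert can be explained after the fact.
RULES = [
    ("off-channel hint", lambda m: "text me on my personal" in m.lower(), 0.6),
    ("coded language",   lambda m: "the usual arrangement" in m.lower(), 0.5),
    ("deletion request", lambda m: "delete this message" in m.lower(),   0.7),
]

def score_message(message, threshold=0.5):
    """Return (total_risk, reasons). An alert fires when total >= threshold,
    and `reasons` documents exactly which rules contributed and by how much."""
    total, reasons = 0.0, []
    for name, predicate, weight in RULES:
        if predicate(message):
            total += weight
            reasons.append(f"{name} (+{weight})")
    return total, reasons

risk, why = score_message(
    "Let's discuss the usual arrangement - delete this message after.")
print(risk, why)  # two rules matched, each documented in `why`
```

Real surveillance platforms layer machine learning on top of patterns like this, but the design goal is the same: when a regulator asks why one message was flagged and another was not, the model's contributing factors can be shown, not guessed at.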

The importance of maintaining compliance with regulatory requirements in e-commerce surveillance, especially during this ongoing pandemic, cannot be overstated. As compliance authorities, you have the power to make a significant impact on your organization’s risk management. Today, we’ve delved into the necessity of a strong compliance program, the significance of proactively monitoring communications, the need to adapt to new communication mediums, the benefits of utilizing AI in compliance monitoring, and the importance of securing data on cloud platforms. Each of these steps is instrumental in achieving the desired state of compliance. Let this motivate you to continue striving for excellence in all your compliance efforts. After all, your dedication to strengthening these practices is not just about meeting regulations – it’s about fostering trust and reliability in your organization.