Categories
Daily Compliance News

Daily Compliance News: September 14, 2023 – The What Could Go Wrong Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen to the Daily Compliance News. All from the Compliance Podcast Network. Each day, we consider four stories from the business world: compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Stories we are following in today’s edition of Daily Compliance News:

  • Head of China’s top insurer jailed for corruption. (BBC)
  • Musk headed to arbitration against Wachtell. (Reuters)
  • PE plunges into NIL. (FT)
  • Tech leaders school Congress on AI. (NYT)
Categories
10 For 10

10 For 10: Top Compliance Stories For the Week Ending September 9, 2023

Welcome to 10 For 10, the podcast that brings you the week’s top 10 compliance stories in one podcast each week. Tom Fox, the Voice of Compliance, brings to you, the compliance professional, the compliance stories you need to be aware of to end your busy week. Sit back, and in 10 minutes hear about the stories every compliance professional should be aware of from the prior week. Every Saturday, 10 For 10 highlights the most important news, insights, and analysis for the compliance professional, all curated by the Voice of Compliance, Tom Fox. Get your weekly fill of compliance stories with 10 For 10, a podcast produced by the Compliance Podcast Network.

  • Insufficient cyber plan = FCA violation. (DOJ Press Release)
  • Roger Ng banned for life. (Yahoo Finance)
  • FASB adopts crypto accounting rules. (WSJ)
  • Ken Paxton and the slow creep of corruption. (Texas Tribune)
  • Spanish Women’s National Team coach fired. (ESPN)
  • Ramaswamy’s claims of FDA corruption disavowed by company he founded. (Reuters)
  • FIFA suspends head of Spanish football. (FT)
  • Using AI to improve workplace safety. (WSJ)
  • DOJ to go after oligarchs’ facilitators. (WSJ)

You can check out the Daily Compliance News for four curated compliance and ethics-related stories each day, here.

Connect with Tom 

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Everything Compliance

Everything Compliance – Episode 123, The Spanish Kiss Edition

Welcome to the only roundtable podcast in compliance as we celebrate our second century of shows. In this episode, we have the quartet of Jay Rosen, Jonathan Armstrong, Matt Kelly and Karen Woody, with Tom Fox hosting. We conclude with our always popular and fan fav Shout Outs and Rants.

1. Matt Kelly looks at the new SEC requirement for companies to improve their risk assessments and attendant processes. He rants about the US Federal Courts not allowing television cameras and says we need the Trump trials televised in federal courts.

2. Karen Woody reviews Opinion Release 23-01. She shouts out to the Barbie movie.

3. Tom Fox shouts out to Megan Rapinoe for her great professional career and her social activism as a member of the USWNT.

4. Jay Rosen looks at the imbroglio surrounding the Spanish National football team after its Women’s World Cup win. Rosen shouts out SOCAR, the South Orange County Compliance and Ethics Roundtable.

5. Jonathan Armstrong considers the NATS air traffic debacle and operational resilience. He shouts out Sgt. Graham Saville who lost his life helping a person in distress.

The members of Everything Compliance are:

  • Jay Rosen – Jay is Vice President, Business Development Corporate Monitoring at Affiliated Monitors. Rosen can be reached at JRosen@affiliatedmonitors.com
  • Karen Woody – One of the top academic experts on the SEC. Woody can be reached at kwoody@wlu.edu
  • Matt Kelly – Founder and CEO of Radical Compliance. Kelly can be reached at mkelly@radicalcompliance.com
  • Jonathan Armstrong – Our UK colleague, an experienced data privacy/data protection lawyer with Cordery in London. Armstrong can be reached at jonathan.armstrong@corderycompliance.com
  • Jonathan Marks – Marks can be reached at jtmarks@gmail.com.

The host, producer, and ranter (and sometime panelist) of Everything Compliance is Tom Fox, the Voice of Compliance. He can be reached at tfox@tfoxlaw.com. Everything Compliance is a part of the Compliance Podcast Network.

Categories
Blog

AI and GDPR

Artificial Intelligence (AI) has revolutionized various industries, but with great power comes great responsibility. Regulators in the European Union (EU) are taking a proactive approach to address compliance and data protection issues surrounding AI and generative AI. Recent cases, such as Google’s AI tool, Bard, being temporarily suspended in the EU, have highlighted the urgent need for regulation in this rapidly evolving field. I recently had the opportunity to visit with GDPR maven Jonathan Armstrong on this topic. In this blog post, we will delve into our conversations about some of the key concerns raised about data and privacy in generative AI, the importance of transparency and consent, and the potential legal and financial implications for organizations that fail to address these concerns.

One of the key issues in the AI landscape is obtaining informed consent from users. The recent scrutiny faced by video conferencing platform Zoom serves as a stark reminder of the importance of transparency and consent practices. While there has been no official investigation into Zoom’s compliance with informed consent requirements, the company has retracted its initial statements and is likely considering how to obtain consent from users.

It is essential to recognize that obtaining consent extends not only to those who host a Zoom call but also to those who are invited to join the call. Unfortunately, there has been no on-screen warning about consent when using Zoom, leaving users in the dark about the data practices involved. This lack of transparency can lead to significant legal and financial penalties, as over 70% of GDPR fines involve a lack of transparency by the data controller.

Generative AI heavily relies on large pools of data for training, which raises concerns about copyright infringement and the processing of individuals’ data without consent. For instance, Zoom’s plan to use recorded Zoom calls to train AI tools may violate GDPR’s requirement of informed consent. Similarly, Getty Images has expressed concerns about its copyrighted images being used without consent to train AI models.

Websites often explicitly prohibit scraping data for training AI models, emphasizing the need for organizations to respect copyright laws and privacy regulations. Regulators are rightfully concerned about AI processing individuals’ data without consent or knowledge, as well as the potential for inaccurate data processing. Accuracy is a key principle of GDPR, and organizations using AI must conduct thorough data protection impact assessments to ensure compliance.

Several recent cases demonstrate the regulatory focus on AI compliance and transparency. In Italy, rideshare and food delivery applications faced investigations and suspensions for their AI practices. Spain has examined the use of AI in recruitment processes, highlighting the importance of transparency in the selection process. Google’s Bard, much like the Facebook dating case before it, faced a temporary suspension in the EU due to the lack of a mandatory data protection impact assessment (DPIA).

It is concerning that many big tech providers fail to engage with regulators or produce the required DPIA for their AI applications. This lack of compliance and transparency poses significant risks for organizations, not just in terms of financial penalties but also potential litigation risks in the hiring process.

To navigate the compliance and data protection challenges posed by AI, organizations must prioritize transparency, fairness, and lawful processing of data. Conducting a data protection impact assessment is crucial, especially when AI is used in Know Your Customer (KYC), due diligence, and job application processes. If risks cannot be resolved or remediated internally, it is advisable to consult regulators and include timings for such consultations in project timelines.
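
To make the screening step concrete, here is a minimal sketch, assuming a simple set of yes/no trigger questions, of how a compliance team might record whether a DPIA is recommended for a given AI use case. The trigger questions, names, and logic below are illustrative assumptions drawn loosely from the use cases above, not an official GDPR checklist and not something Jonathan Armstrong prescribed.

```python
# Hypothetical DPIA screening sketch. The trigger questions are illustrative only;
# GDPR Article 35 and regulator guidance govern the real test.

HIGH_RISK_TRIGGERS = {
    "uses_generative_ai": "The tool generates output from large pools of training data",
    "processes_personal_data": "Personal data of employees, candidates, or customers is processed",
    "automated_decisions": "The tool supports KYC, due diligence, or hiring decisions",
    "data_scraped_or_reused": "Training data was scraped or reused beyond its original purpose",
}

def dpia_recommended(answers: dict) -> bool:
    """Recommend a DPIA if any high-risk trigger applies (deliberately conservative)."""
    return any(answers.get(trigger, False) for trigger in HIGH_RISK_TRIGGERS)

# Example: screening a hypothetical AI-assisted job-application tool
answers = {
    "uses_generative_ai": True,
    "processes_personal_data": True,
    "automated_decisions": True,
    "data_scraped_or_reused": False,
}
if dpia_recommended(answers):
    print("Conduct a DPIA and build regulator consultation time into the project plan.")
```

If any trigger applies, the output is simply a prompt to run the full assessment; the point is to make the “do we need a DPIA?” question part of the project plan rather than an afterthought.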

For individuals, it is essential to be aware of the terms and conditions associated with AI applications. In the United States, informed consent is often buried within lengthy terms and conditions, leading to a lack of understanding and awareness. By being vigilant and informed, individuals can better protect their privacy and data rights.

As AI continues to transform industries, compliance and data protection must remain at the forefront of technological advancements. Regulators in the EU are actively addressing the challenges posed by AI and generative AI, emphasizing the need for transparency, consent, and compliance with GDPR obligations. Organizations and individuals must prioritize data protection impact assessments, engage with regulators when necessary, and stay informed about the terms and conditions associated with AI applications. By doing so, we can harness the power of AI while safeguarding our privacy and ensuring ethical practices in this rapidly evolving field.

Categories
Data Driven Compliance

Data Driven Compliance: Julie Myers Wood – Using AI for Data Driven Compliance

Are you struggling to keep up with the ever-changing compliance programs in your business? Look no further than the award-winning Data Driven Compliance podcast, hosted by Tom Fox, which features in-depth conversations about the uses of data and data analytics in compliance programs. Data Driven Compliance is back with another exciting episode. The intersection of law, compliance, and data is becoming increasingly important in the world of cross-border transactions and mergers and acquisitions.

In this podcast episode, Tom Fox and Julie Myers Wood, CEO at Guidepost Solutions, take a deep dive into the intersection of compliance and generative AI and how this intersection will lead to more data-driven compliance. Wood emphasizes the importance of understanding the various ways AI can impact a company, including internal use, sales, compliance tools, freelancers, and criminal exploitation. Compliance teams need to have a comprehensive inventory of the tools being used and understand the capabilities and limitations of AI to ensure compliance and mitigate risks.

They discuss the need for companies to be aware of the potential risks associated with AI and to have clear policies and procedures in place to protect intellectual property. Wood also discusses the importance of employee retraining and thoughtful decision-making when integrating AI into business practices. Overall, the podcast provides valuable insights into the challenges and considerations of incorporating AI into compliance programs, emphasizing the need for compliance professionals to adapt and stay informed.

Highlights Include

  • Key Considerations for Compliance and AI
  • Importance of Inventorying Tools and Managing Risks
  • AI and Intellectual Property Protection
  • Challenges of Implementing AI
  • AI and Compliance

Resources:

Julie Myers Wood on LinkedIn

Guidepost Solutions

 Tom Fox 

Connect with me on the following sites:

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Daily Compliance News

Daily Compliance News: September 5, 2023 – The Pig-Butchering Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All from the Compliance Podcast Network. Each day, we consider four stories from the business world: compliance, ethics, risk management, leadership, or general interest for the compliance professional.

  • US sanctions Russian company for selling rockets to North Korea. (WSJ)
  • Pig-butchering and crypto. (WSJ)
  • Using AI to improve workplace safety. (WSJ)
  • Do you need to know? (WSJ)
Categories
Compliance and AI

Compliance and AI – Jonathan Armstrong on Unleashing Generative AI: Privacy, Copyright, and Compliance

What is the role of Artificial Intelligence in compliance? What about Machine Learning? Are you using ChatGPT? These questions are but three of the many questions we will explore in this exciting new podcast series, Compliance and AI. Hosted by Tom Fox, the award-winning Voice of Compliance, this podcast will look at how AI will impact compliance programs into the next decade and beyond. If you want to find out why the future is now, join Tom Fox on this journey to the frontiers of AI.

Welcome back to another exciting episode of our podcast, where we delve into the fascinating world of compliance and artificial intelligence (AI). Today I am joined by Jonathan Armstrong from Cordery Compliance to discuss how regulators in the EU are looking at AI.

Regulators in the EU are taking action to address the use of artificial intelligence (AI) and generative AI. A recent case involving Google’s AI tool, Bard, being temporarily suspended in the EU highlights the need for regulation and compliance in this rapidly evolving field. Concerns are raised about data and privacy, as generative AI uses large amounts of data, potentially infringing copyright and processing individuals’ data without consent. It is crucial for organizations to conduct data protection impact assessments and consider GDPR obligations. Transparency and consent are also key, with Zoom’s data practices being questioned in terms of transparency and obtaining user consent. The conversation emphasizes the potential legal and financial consequences organizations face for non-compliance.

Remember, compliance professionals are the co-pilots of our businesses, guiding us through the complexities of the AI revolution. Let’s not wait too long between podcasts and continue this journey together!

Key Highlights

  • Concerns with Bard
  • Regulators’ Actions on AI
  • Concerns over Data and Privacy in Generative AI
  • Transparency and Consent in Zoom’s Data Practices

 Resources

For more information on the issues raised in this podcast, check out the Cordery Compliance News section. For more information on Cordery Compliance, go to their website here. Also check out the GDPR Navigator, one of the top resources for GDPR compliance, by clicking here.

Connect with Jonathan Armstrong

  • Twitter
  • LinkedIn

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Corruption, Crime and Compliance

Board Oversight and Monitoring of AI Risks

As companies rapidly adopt artificial intelligence (AI), it becomes paramount to have robust governance frameworks in place. Not only can AI bring about vast business benefits, but it also carries significant risks—such as spreading disinformation, racial discrimination, and potential privacy invasions. In this episode of Corruption, Crime and Compliance, Michael Volkov dives deep into the urgent need for corporate boards to monitor, address, and incorporate AI into their compliance programs, and the many facets that this entails.

You’ll hear Michael talk about:

  • AI is spreading like wildfire across industries, and with it comes a whole new set of risks. Many boards don’t fully understand these risks. It’s important to make sure that boards are educated about the potential and pitfalls of AI, and that they actively oversee the risks. This includes understanding their obligations under Caremark, which requires them to exercise diligent oversight and monitoring.
  • AI is a tantalizing prospect for businesses: faster, more accurate processes that can revolutionize operations. But with great power comes great responsibility. AI also comes with risks, like disinformation, bias, privacy invasion, and even mass layoffs. It’s a delicate balancing act that businesses need to get right.
  • Companies can’t just use AI, they have to be ready for it. That means adjusting their compliance policies and procedures to their specific AI risk profile, actively identifying and assessing those risks, and staying up-to-date on potential regulatory changes related to AI. As AI grows, the need for strong risk mitigation strategies before implementation becomes even more important.
  • The Caremark framework requires corporate boards to ensure that their companies comply with AI regulations. Recent cases, such as the Boeing safety oversight, demonstrate the severity of the consequences when boards fail to fulfill their responsibilities. As a result, boards must be proactive: ensure that board members have the technical expertise necessary, brief them on AI deployments, designate senior executives to be responsible for AI compliance, and ensure that there are clear channels for individuals to report issues.

KEY QUOTES

“Board members usually ask the Chief Information Security Officer or whoever is responsible for technology [at board meetings], ‘Are we doing okay?’ They don’t want to hear or get into all of the details, and then they move on. That model has got to change.”

“In this uncertain environment, stakeholders are quickly discovering the real and significant risks generated by artificial intelligence, and companies have to develop risk mitigation strategies before implementing artificial intelligence tools and solutions.”

“Board members should be briefed on existing and planned artificial intelligence deployments to support the company’s business and/or support functions. In other words, they’ve got to be notified, brought along that this is going to be a new tool that we’re using, ‘Here are the risks, here are the mitigation techniques.’”

Resources:

Michael Volkov on LinkedIn | Twitter

The Volkov Law Group

Categories
Compliance and AI

Compliance and AI – Julie Myers Wood on Navigating the AI Compliance Landscape: Mitigating Risks

What is the role of Artificial Intelligence in compliance? What about Machine Learning? Are you using ChatGPT? These questions are but three of the many questions we will explore in this exciting new podcast series, Compliance and AI. Hosted by Tom Fox, the award-winning Voice of Compliance, this podcast will look at how AI will impact compliance programs into the next decade and beyond. If you want to find out why the future is now, join Tom Fox on this journey to the frontiers of AI.

Welcome back to another exciting episode of our podcast, where we delve into the fascinating world of compliance and artificial intelligence (AI). Today, we have the pleasure of hosting Julie Myers Wood, CEO of Guidepost Solutions. With her extensive background in law and government positions, Julie brings a wealth of knowledge and insights to our discussion on the challenges and considerations of incorporating AI into compliance programs.

As compliance professionals, we play a vital role in ensuring the safety and security of our businesses. The integration of AI into compliance programs presents both challenges and opportunities. By understanding the tools, risks, and solutions associated with AI, we can adapt to the changing landscape and make informed decisions.

Let’s embrace this exciting era of AI while staying vigilant and proactive. The world is changing, and compliance professionals need to stay up to date to ensure the safety and security of our businesses. Thank you, Julie Myers Wood, for sharing your valuable insights, and we look forward to more enlightening discussions in the future!

Remember, compliance professionals are the co-pilots of our businesses, guiding us through the complexities of the AI revolution. Let’s not wait too long between podcasts and continue this journey together!

Key Highlights

  • Key Considerations for Compliance and AI
  • Importance of Inventorying Tools and Managing Risks
  • AI and Intellectual Property Protection
  • Challenges of Implementing AI
  • AI and Compliance

 Resources

Julie Myers Wood on LinkedIn

Guidepost Solutions

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Blog

Julie Myers Wood on Navigating the AI Compliance Landscape: Mitigating Risks

I recently had the opportunity to visit with Julie Myers Wood, CEO at Guidepost Solutions. With her extensive background in law and government positions, Julie brings a wealth of knowledge and insights to our discussion on the challenges and considerations of incorporating AI into compliance programs. We took a deep dive into the intersection of compliance and artificial intelligence (AI).

With generative AI coming at us at light speed, there are many things for a compliance professional to think about. Julie began with the first key step: take a high-level perspective, step back, and reflect on all the ways that AI can affect your company. You should ask several questions, including the following. What AI tools is the company using internally? What tools is the company using to help its operations, and does compliance know about those tools? What is your company selling? Is your company selling tools that incorporate deep learning, generative AI, or other sorts of machine learning?

Equally important, what compliance work is each of your teams performing, and what compliance tools are being used? Do you have individuals at your company who are freelancing, trying to reduce their work using GPT or something else without telling you, and maybe exposing some of your code? And finally, how are criminals using generative AI to get into your work? It all comes back to understanding, from a high-level perspective, the various ways that AI can affect you.

Next, it is important to ask whether you know all of the tools the company is actually using. You need to obtain an inventory of the tools your employees are using. Compliance professionals need to have a comprehensive inventory of the tools being used within the company and to fully comprehend their capabilities and limitations. This may not be easy, particularly if your organization is using a mix of homegrown tools and tools that are available for sale on the open market. Your compliance team must understand which tools each part of the company is using, because only then can you fully understand the privacy or other regulatory risks that may be involved.

In this inventory, you also need to understand who owns the software tools. When do the licenses expire? How many seats do you have for your organization? Who owns the license keys, and when does the software become legacy? This understanding is crucial for effectively managing compliance and mitigating potential risks. It is also a very good business practice.
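
As a purely illustrative aid (not something Julie prescribed), the inventory fields discussed above can be captured in a simple record structure so that license expirations and missing assessments are flagged automatically. The field names and the 90-day review window below are assumptions for the sketch, not a standard schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIToolRecord:
    """One entry in a hypothetical AI tool inventory (illustrative fields only)."""
    name: str                       # e.g., a resume screener or contract summarizer
    vendor: str                     # "homegrown" or the vendor's name
    business_owner: str             # which team owns the tool
    license_expiry: Optional[date]  # when the license expires, if licensed
    seats: int                      # how many seats the organization holds
    license_key_owner: str          # who holds the license keys
    generative_ai: bool             # does the tool use generative AI?
    personal_data: bool             # does it process personal data?
    dpia_completed: bool            # has a data protection impact assessment been done?

def needs_review(tool: AIToolRecord, today: date, window_days: int = 90) -> bool:
    """Flag a tool whose license expires soon or that processes personal data without a DPIA."""
    expiring = tool.license_expiry is not None and (tool.license_expiry - today).days <= window_days
    missing_dpia = tool.personal_data and not tool.dpia_completed
    return expiring or missing_dpia

# Example usage with a made-up tool
inventory = [
    AIToolRecord("Resume screener", "AcmeAI", "HR", date(2024, 1, 31), 25, "IT", True, True, False),
]
for tool in inventory:
    if needs_review(tool, today=date(2023, 9, 14)):
        print(f"Review needed: {tool.name}")
```

Even a spreadsheet with these columns serves the same purpose; the value is in asking the questions for every tool, homegrown or purchased, and revisiting the answers on a schedule.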

Generative AI is rapidly advancing, and compliance professionals must stay informed and proactive in addressing its implications. Julie highlights the need to be aware of the risks related to generative AI, export compliance, and other potential problems. By staying updated on the latest developments, compliance professionals can adapt to the changing landscape and make informed decisions.

There are potential dangers in integrating AI into businesses, and Julie offered solutions to mitigate them. One key solution involves retraining or supplementing the training of employees. Companies need to educate their workforce on the rules of the road and provide a safe environment for exploring and experimenting with generative AI. Julie pointed to PwC’s billion-dollar investment in AI, including retraining and proprietary platforms, which showcases the importance of investing in employee development. However, smaller companies may face challenges in investing in generative AI and effectively implementing it.

AI is revolutionizing compliance by enabling effective analysis and interpretation of large amounts of data. Compliance professionals are excited about the potential of AI for predictive analytics and for identifying trends and patterns. However, choosing the right tools for compliance is crucial, as market winners and losers can impact success. A key to success for the compliance team is collaboration between operations and compliance when considering the use of AI.

Clear policies defining what can and cannot be done with AI are essential to protect intellectual property and ensure compliance. But it is not simply policies and procedures; it is targeted and effective training, coupled with ongoing communications. All of this should be aimed at educating employees about the risks and consequences of using AI improperly. Compliance professionals should encourage caution when downloading AI tools from the web and carefully review terms and conditions to avoid unintended consequences.

As compliance professionals, we play a vital role in ensuring the safety and security of our businesses. The integration of AI into compliance programs presents both challenges and opportunities. By understanding the tools, risks, and solutions associated with AI, we can adapt to the changing landscape and make informed decisions.

For the full podcast with Julie Myers Wood, check out Compliance and AI here.