Categories
TechLaw10

TechLaw10 Predictions for 2024

In this edition of TechLaw10, Jonathan Armstrong talks to Attorney and Professor Eric Sinrod from his home in California. They have their traditional look at the year ahead. What will 2024 bring to the world of technology law?

The topics include:

  • Will technology result in reduced legal costs?
  • What are some of the likely AI implications in 2024? Will competition/anti-trust laws impact the roll-out of AI? Will we be more concerned about the use of AI by nation-states?
  • What do the cases of Joe Sullivan & Carlos Abarca tell us about personal liability?
  • Could the Horizon investigation & other scandals lead to greater regulation of lawyers regarding tech issues?
  • The formation of the New York State Bar Association Task Force on AI
  • What does 2024 hold for data protection, data privacy, & GDPR?
  • What are the issues with copyright & AI, & how might they be resolved?
  • What is operational resilience, & how might that get more traction in 2024?
  • What are the likely changes in social media regulation?
  • What are the likely developments in ransomware in 2024?

You can listen to earlier TechLaw10 audio podcasts with Eric and Jonathan at https://www.duanemorris.com/site/techlaw10.html

Connect with the Compliance Podcast Network at:

LinkedIn: https://www.linkedin.com/company/compliance-podcast-network/
Facebook: https://www.facebook.com/compliancepodcastnetwork/
YouTube: https://www.youtube.com/@CompliancePodcastNetwork
Twitter: https://twitter.com/tfoxlaw
Instagram: https://www.instagram.com/voiceofcompliance/
Website: https://compliancepodcastnetwork.net/

Categories
TechLaw10

Is AI a Closed Shop & Can AI Influence Elections?

In this edition of TechLaw10, Jonathan Armstrong talks with Attorney and Professor Eric Sinrod to discuss more AI-related issues, including the dominance of a few technology businesses in critical aspects of the AI infrastructure and concerns that AI might be used to influence elections, including the US Presidential elections this year.

Jonathan looks at the role of competition/anti-trust law in AI. He highlights the Competition Appeal Tribunal proceedings against Facebook for market dominance, which could lead to a claim of £3bn on behalf of 45 million Facebook users. He then looks at issues with Common Crawl and the use of its data repositories to train AI. What are the implications when 80% of generative AI relies on data from just one organization? Could using Common Crawl data for different purposes lead to issues?

Eric looks at fears about the use of AI in the forthcoming US elections. Jonathan mentions proposed legislation from Assemblyman Clyde Vanel.

There is more information on the Facebook case here: https://www.bbc.com/news/technology-68309670

You can listen to earlier TechLaw10 audio podcasts with Eric and Jonathan at https://www.duanemorris.com/site/techlaw10.html

Categories
TechLaw10

Eric Sinrod & Jonathan Armstrong on AI & ChatGPT

In this edition of TechLaw10, Jonathan Armstrong talks with Attorney and Professor Eric Sinrod. They examine the legal issues surrounding AI, the rise of chatbot ChatGPT, and the legal and ethical issues it raises.

The topics include:

  • A Discussion on Ethics & Technology
  • The Need for AI Not to Discriminate
  • The Use of AI in Recruitment
  • The Use of Chatbots for Harm
  • AI & Copyright Issues
  • Does ChatGPT Lie, and If So, What Are the Consequences?
  • Political Bias with Chatbots
  • A 1996 Case That Could Limit the Lawful Activity of Some Chatbots
  • How AI Harvesting Data Could Be a Criminal Offense under Computer Misuse Legislation
  • How Data Subject Rights under GDPR May Impact AI

You can listen to earlier TechLaw10 audio podcasts with Eric and Jonathan at https://www.duanemorris.com/site/techlaw10.html

Categories
Blog

SolarWinds Under GDPR: Corporate Responsibility and Risks in Data Protection

The General Data Protection Regulation (GDPR) has significantly changed how organizations handle data protection and privacy. It emphasizes the importance of transparency and honesty in disclosing data breaches and vulnerabilities. In a recent episode of the podcast Life with GDPR, Tom Fox and Jonathan Armstrong from Cordery Compliance discussed the topic of corporate responsibility and risks in data protection, with a particular focus on the SolarWinds case.

To recap, in late 2023, the SEC filed a lawsuit against SolarWinds Corp and its CISO, Tim Brown, following the 2020 data breach, bringing the issue of executive liability in cybersecurity disclosures to the forefront. The lawsuit raised important questions about the personal liability of senior executives for inaccurate risk disclosures and has potential implications for other industries under US securities law.

The 2020 breach, orchestrated by Russian hackers, targeted SolarWinds’ Orion software and exposed highly sensitive information. The hackers gained access to SolarWinds’ systems and planted malicious code in the Orion program. SolarWinds then distributed an update to its corporate customers, unknowingly spreading the malware. This allowed the hackers to access the highest levels of the US government and major corporations.

The SEC’s lawsuit against SolarWinds and Tim Brown focused on the company’s allegedly misleading disclosures about its information security throughout 2018, 2019, and 2020. While SolarWinds publicly claimed to have good cybersecurity, internal communications revealed that employees were aware of the company’s cybersecurity issues and considered them a mess. This discrepancy between internal knowledge and external disclosures formed the basis of the SEC’s allegations.

The SEC complaint alleged that SolarWinds’ public statements about its cybersecurity practices and risks were at odds with its internal assessments. These included a 2018 presentation, prepared by a company engineer and shared internally, including with Brown, stating that SolarWinds’ remote access set-up was “not very secure” and that someone exploiting the vulnerability “can do whatever without us detecting it until it’s too late,” which could lead to “major reputation and financial loss” for SolarWinds. Similarly, as alleged in the SEC’s complaint, 2018 and 2019 presentations by Brown stated, respectively, that the “current state of security leaves us in a very vulnerable state for our critical assets” and that “[a]ccess and privilege to critical systems/data is inappropriate.”

Beyond this SEC enforcement action, there were other implications. One key takeaway from the episode is the pressure on corporate leaders, including CISOs, Data Protection Officers, and Compliance Officers, to disclose data breaches promptly. While GDPR offers some protection to Data Protection Officers, it does not entirely exempt them from liability. The SolarWinds case serves as a reminder of the need for specific and timely disclosure of breaches and the importance of addressing system vulnerabilities.

The risks associated with data breaches are not limited to regulatory fines. Litigation risks are a significant concern for organizations, with shareholders and whistleblowers potentially seeking legal action. The episode highlights the importance of transparency and not misrepresenting information to regulators. Misrepresentations can lead to severe consequences for individuals in positions of responsibility within corporations.

Budget constraints can also hinder the timely fixing of vulnerabilities, ultimately leading to breaches. Organizations need to take proactive measures to identify and address vulnerabilities promptly. Realistic resource assessments are crucial to ensuring that adequate resources are allocated to data protection efforts. Additionally, having adequate insurance protection, such as Directors and Officers (D&O) insurance, can help protect individuals in positions of responsibility from potential liabilities.

The episode also emphasizes the need for organizations to consider the impact on their stock exchange filings when deciding whether to disclose a data breach. Deciding whether to admit a breach to a stock exchange can be challenging and depends on factors such as materiality. Organizations need to assign a dedicated team to consider these factors, particularly when engaged in transactions like mergers and acquisitions or fundraising.

Transparency and honesty are key principles in data protection and privacy. Audit reports and investigation findings must be acted upon promptly to address vulnerabilities. Emails and other forms of communication can serve as evidence in legal proceedings, highlighting the importance of careful communication within organizations.

The potential for litigation is significant in data breach cases. Shareholders may seek legal action if they believe the value of their stock has been affected. Whistleblowers, incentivized by various jurisdictions, may also come forward with information. This highlights the need for organizations to maintain a culture of transparency and integrity and for individuals to review their remuneration packages to avoid conflicts of interest.

In conclusion, GDPR, corporate responsibility, and risks in data protection are interconnected. Organizations must prioritize transparency, honesty, and timely disclosure of breaches and vulnerabilities. Proactive measures, realistic resource assessments, and adequate insurance protection are crucial to mitigating risks. By considering the impact on stock exchange filings and maintaining a culture of integrity, organizations can navigate the challenges associated with data protection and privacy in the GDPR era.

Categories
Life with GDPR

Life With GDPR: WhatsApp Breach – Hospital’s GDPR Failures Exposed

Tom Fox and Jonathan Armstrong, renowned experts in cyber security, co-host the award-winning Life with GDPR. In this episode, Tom and Jonathan discuss a data breach in a Scottish hospital during the COVID-19 pandemic.

The breach occurred when hospital staff shared patient details on WhatsApp, raising concerns about GDPR compliance. The hospital informed the ICO about the breach but chose not to notify affected patients, highlighting the need for appropriate advice and support when making such decisions. The conversation also explores communication challenges in internal investigations and the privacy and security risks of platforms like WhatsApp. It emphasizes the importance of organizations adapting to the preferences of digital-native employees and conducting data protection impact assessments. The episode also highlights the value of effective policies, engaging training, and proactive phishing training in preventing cyber-attacks and protecting sensitive information.

 

Key Takeaways:

  • Data breach in Scottish hospital
  • The Challenges of Communication in Internal Investigations
  • Importance of Policies and Training
  • Phishing Training Effectiveness

Resources

For more information on the issues raised in this podcast, check out the Cordery Compliance News Section. For more information on Cordery Compliance, go to their website here. Also, check out the GDPR Navigator, one of the top resources for GDPR Compliance, by clicking here.

Connect with Tom Fox

Connect with Jonathan Armstrong

Categories
Blog

AI and GDPR

Artificial Intelligence (AI) has revolutionized various industries, but with great power comes great responsibility. Regulators in the European Union (EU) are taking a proactive approach to address compliance and data protection issues surrounding AI and generative AI. Recent cases, such as Google’s AI tool, Bard, being temporarily suspended in the EU, have highlighted the urgent need for regulation in this rapidly evolving field. I recently had the opportunity to visit with GDPR maven Jonathan Armstrong on this topic. In this blog post, we will delve into our conversations about some of the key concerns raised about data and privacy in generative AI, the importance of transparency and consent, and the potential legal and financial implications for organizations that fail to address these concerns.

One of the key issues in the AI landscape is obtaining informed consent from users. The recent scrutiny faced by video conferencing platform Zoom serves as a stark reminder of the importance of transparency and consent practices. While there has been no official investigation into Zoom’s compliance with informed consent requirements, the company has retracted its initial statements and is likely considering how to obtain consent from users.

It is essential to recognize that obtaining consent extends not only to those who host a Zoom call but also to those who are invited to join the call. Unfortunately, there has been no on-screen warning about consent when using Zoom, leaving users in the dark about the data practices involved. This lack of transparency can lead to significant legal and financial penalties, as over 70% of GDPR fines involve a lack of transparency by the data controller.

Generative AI heavily relies on large pools of data for training, which raises concerns about copyright infringement and the processing of individuals’ data without consent. For instance, Zoom’s plan to use recorded Zoom calls to train AI tools may violate GDPR’s requirement of informed consent. Similarly, Getty Images has expressed concerns about its copyrighted images being used without consent to train AI models.

Websites often explicitly prohibit scraping data for training AI models, emphasizing the need for organizations to respect copyright laws and privacy regulations. Regulators are rightfully concerned about AI processing individuals’ data without consent or knowledge, as well as the potential for inaccurate data processing. Accuracy is a key principle of GDPR, and organizations using AI must conduct thorough data protection impact assessments to ensure compliance.

Several recent cases demonstrate the regulatory focus on AI compliance and transparency. In Italy, rideshare and food delivery applications faced investigations and suspensions over their AI practices. Spain has examined the use of AI in recruitment processes, highlighting the importance of transparency in the selection process. Like Facebook’s dating service before it, Google’s Bard faced temporary suspension in the EU due to the lack of a mandatory data protection impact assessment (DPIA).

It is concerning that many big tech providers fail to engage with regulators or produce the required DPIA for their AI applications. This lack of compliance and transparency poses significant risks for organizations, not just in terms of financial penalties but also potential litigation risks in the hiring process.

To navigate the compliance and data protection challenges posed by AI, organizations must prioritize transparency, fairness, and lawful processing of data. Conducting a data protection impact assessment is crucial, especially when AI is used in Know Your Customer (KYC), due diligence, and job application processes. If risks cannot be resolved or remediated internally, it is advisable to consult regulators and include timings for such consultations in project timelines.

For individuals, it is essential to be aware of the terms and conditions associated with AI applications. In the United States, informed consent is often buried within lengthy terms and conditions, leading to a lack of understanding and awareness. By being vigilant and informed, individuals can better protect their privacy and data rights.

As AI continues to transform industries, compliance and data protection must remain at the forefront of technological advancements. Regulators in the EU are actively addressing the challenges posed by AI and generative AI, emphasizing the need for transparency, consent, and compliance with GDPR obligations. Organizations and individuals must prioritize data protection impact assessments, engage with regulators when necessary, and stay informed about the terms and conditions associated with AI applications. By doing so, we can harness the power of AI while safeguarding our privacy and ensuring ethical practices in this rapidly evolving field.

Categories
Blog

The Importance of Effective Policies and Training in Data Protection: Lessons from a Scottish Hospital Breach

I recently had the chance to visit with Jonathan Armstrong about a recent data breach at NHS Lanarkshire, a health service provider in Scotland, during the COVID-19 pandemic. This breach serves as a stark reminder of the challenges organizations face in maintaining data protection and compliance, especially when it comes to communication platforms like WhatsApp. In this blog post, we will explore the lessons learned from this incident and discuss practical advice for organizations to ensure robust data protection measures.

Background

According to the Cordery Compliance Client Alert on the matter, over a two-year period between 2020 and 2022, 26 staff at NHS Lanarkshire had access to a WhatsApp group containing a minimum of 533 entries that included patient names. Of these entries, 215 included phone numbers, 96 included dates of birth, and 28 included addresses. A further 15 images, 3 videos, and 4 screenshots were also shared, including patients’ personal data and clinical information, the latter being “special category” health data under both EU and UK law. Other data was also added to the WhatsApp group in error, and further communications were identified in which the staff in question had used WhatsApp.

WhatsApp was not approved by NHS Lanarkshire for processing patients’ personal data; its use was an approach apparently adopted by staff without organizational knowledge. Staff used it as a substitute for communications that would previously have taken place in the clinical office but no longer did once office attendance was reduced during the COVID-19 pandemic. Because WhatsApp was never approved for sharing patient data, no Data Protection Impact Assessment and no risk assessment relating to personal data processing were completed for it. NHS Lanarkshire undertook an internal investigation and reported the matter to the ICO.

ICO Holding

The UK ICO determined that NHS Lanarkshire did not have appropriate policies, clear guidance, and processes in place when WhatsApp was made available to download. There were also a number of infringements of UK GDPR, not least the failure to implement appropriate technical and organizational measures (TOMs) to ensure the security of the personal data involved; as a consequence, personal data was shared via an unauthorized means and an inappropriate disclosure occurred. There was also a failure to report the matter to the ICO as a data breach within the required time.

Armstrong noted that the ICO recommended that NHS Lanarkshire take action to ensure its compliance with data protection law, including:

  1. Considering implementing a secure clinical image transfer system, as part of NHS Lanarkshire’s exploration regarding the storage of images and videos within a care setting;
  2. Before deploying new apps, consideration of the risks relating to personal data and including the requirement to assess and mitigate these risks in any approval process;
  3. Ensuring that explicit communications, instructions or guidance are issued to employees on their data protection responsibilities when new apps are deployed;
  4. Reviewing all organizational policies and procedures relevant to this matter and amending them where appropriate; and,
  5. Ensuring that all staff are aware of their responsibilities to report personal data breaches internally without delay to the relevant team.

Armstrong concluded that “In light of the remedial steps and mitigating factors the ICO issued an official reprimand – a fine has not yet been imposed. The ICO also asked NHS Lanarkshire to provide an update of actions taken within six months of the reprimand being issued.”

Discussion

This case highlights the challenges organizations face when it comes to communication during internal investigations. In many instances, as one organization discovered, the most interesting documents are not found in emails. Employees often turn to alternative platforms like WhatsApp to avoid leaving a paper trail. However, it is crucial to understand that these platforms may not provide the expected privacy and security.

While platforms like WhatsApp may seem secure, they still share data with big tech companies, raising concerns about privacy. Organizations must adapt to the preferences of digital-native employees who may find email restrictive and opt for alternative communication methods. However, this adaptation should be done consciously, ensuring that policies and procedures are in place to protect sensitive information. Armstrong emphasizes the importance of revisiting emergency measures implemented during the pandemic. As remote work continues, organizations must conduct thorough data protection impact assessments to ensure compliance across all communication platforms and measures.

As with all types of compliance, setting policies and procedures is just the first step. It is essential to communicate and educate employees on these policies to ensure their understanding and compliance. Annual online training sessions are not enough; organizations should provide engaging training that goes beyond passive learning. In addition to targeted and effective training, there must be ongoing communication with employees. Armstrong also commented on the ineffectiveness of off-the-shelf online phishing training. Waiting for an incident to occur and then providing training is not enough to prevent people from clicking on malicious links. Organizations should focus on providing better training before incidents happen rather than trying to enhance training afterwards.

The next step is monitoring: compliance with policies and procedures should be actively monitored. Technical solutions are available to help companies track compliance, but it is crucial to involve individuals at all levels of the organization when designing these policies. Additionally, a balanced approach is needed, in which employees are recognized for their service but also held accountable for policy breaches. The days of relying solely on punishment for enforcement are gone.

The data breach at the Scottish hospital serves as a wake-up call for organizations to prioritize data protection and compliance. Communication challenges during internal investigations, privacy concerns associated with alternative platforms, and the need for effective policies and training are crucial areas to address. By conducting regular data protection impact assessments, providing engaging training, and ensuring buy-in from employees, organizations can strengthen their defense against cyber threats and protect sensitive information. Always remember that compliance is an ongoing process, and continuous evaluation and improvement are necessary to adapt to the evolving digital landscape. Finally, stay vigilant and proactive in safeguarding data privacy and protection.

Categories
Compliance and AI

Compliance and AI – Jonathan Armstrong on Unleashing Generative AI: Privacy, Copyright, and Compliance

What is the role of Artificial Intelligence in compliance? What about Machine Learning? Are you using ChatGPT? These questions are but three of the many questions we will explore in this exciting new podcast series, Compliance and AI. Hosted by Tom Fox, the award-winning Voice of Compliance, this podcast will look at how AI will impact compliance programs into the next decade and beyond. If you want to find out why the future is now, join Tom Fox on this journey to the frontiers of AI.

Welcome back to another exciting episode of our podcast, where we delve into the fascinating world of compliance and artificial intelligence (AI). Today I am joined by Jonathan Armstrong from Cordery Compliance to discuss how regulators in the EU are looking at AI.

Regulators in the EU are taking action to address the use of artificial intelligence (AI) and generative AI. A recent case involving Google’s AI tool, Bard, being temporarily suspended in the EU highlights the need for regulation and compliance in this rapidly evolving field. Concerns are raised about data and privacy, as generative AI uses large amounts of data, potentially infringing copyright and processing individuals’ data without consent. It is crucial for organizations to conduct data protection impact assessments and consider GDPR obligations. Transparency and consent are also key, with Zoom’s data practices being questioned in terms of transparency and obtaining user consent. The conversation emphasizes the potential legal and financial consequences organizations face for non-compliance.

Remember, compliance professionals are the co-pilots of our businesses, guiding us through the complexities of the AI revolution. Let’s not wait too long between podcasts and continue this journey together!

Key Highlights:

  • Concerns with Bard
  • Regulators’ Actions on AI
  • Concerns over Data and Privacy in Generative AI
  • Transparency and Consent in Zoom’s Data Practices

Resources

For more information on the issues raised in this podcast, check out the Cordery Compliance News Section. For more information on Cordery Compliance, go to their website here. Also, check out the GDPR Navigator, one of the top resources for GDPR Compliance, by clicking here.

Connect with Jonathan Armstrong

  • Twitter
  • LinkedIn

Connect with Tom Fox

  • Instagram
  • Facebook
  • YouTube
  • Twitter
  • LinkedIn

Categories
Life with GDPR

Life With GDPR: Banking’s Data Dilemma – Farage’s Account Closure & the Risks of Data Breach

Tom Fox and Jonathan Armstrong, renowned experts in cyber security, co-host the award-winning Life with GDPR. The recent controversy surrounding Nigel Farage’s banking situation highlights the risks and compliance challenges faced by the banking industry in relation to data protection.

In this episode, Tom and Jonathan discuss the closure of Farage’s bank account with Coutts, a high-end bank owned by NatWest, and the potential data breach that ensued. They discuss the risks of internal emails being exposed through subject access requests (SARs) and emphasize the importance of caution in email communication. The conversation also explores the cost and consequences of non-compliance with GDPR obligations, particularly in relation to SARs. The potential legal implications for banks that violate their own policies or delete data that should be provided in response to a SAR are highlighted. Overall, the episode underscores the need for banks to prioritize data protection, compliance, and proper decision-making in the financial industry.

Key Takeaways:

  • Nigel Farage’s Banking Controversy
  • Data Protection Risks in Banking
  • The Cost and Consequences of Subject Access Requests
  • Serious Concerns about Data Protection and Access to Banking

Resources

For more information on the issues raised in this podcast, check out the Cordery Compliance News Section. For more information on Cordery Compliance, go to their website here. Also, check out the GDPR Navigator, one of the top resources for GDPR Compliance, by clicking here.

Connect with Tom Fox

  • LinkedIn

Connect with Jonathan Armstrong

  • Twitter
  • LinkedIn

Categories
Life with GDPR

Life With GDPR: Class Action Update

Tom Fox and Jonathan Armstrong, renowned experts in cyber security, co-host the award-winning Life with GDPR. Join them in this episode as they discuss the recent court decision in the Austrian case and its implications for GDPR claims. Discover the guidelines for GDPR damage compensation, the assessment of damages, liability provisions, and how businesses can make themselves more robust to avoid such claims. They also delve into the importance of acting quickly in the event of a breach and insurers’ growing sophistication in cyberattack policies. Tune in to learn more, and check out the article on the Cordery Compliance website. Don’t miss out on their engaging conversation and valuable insights!

 

Key Takeaways:

  • Understanding GDPR compensation claims
  • Insurance Claims and Breach Response Strategy
  • Cyber insurance is becoming more selective in writing cover

Notable Quotes:

“I would say when you have a title like that, you get the attention of many class action lawyers.”

“Not every infringement of GDPR automatically gives rise to compensation.”

“The right to compensation under GDPR needs 3 things. Firstly, an infringement of GDPR; secondly, material damage resulting; and thirdly, a causal link between the damage and the infringement.”

“If you haven’t got the right team in place, even on New Year’s Day or Christmas Day, Easter or Passover or, you know, during fasting, then that’s your fault, not ours, and regulators are not forgiving.”

Resources:

For more information on the issues raised in this podcast, check out the Cordery Compliance News Section. For more information on Cordery Compliance, go to their website here. Also, check out the GDPR Navigator, one of the top resources for GDPR Compliance, by clicking here.

Connect with Tom Fox

Connect with Jonathan Armstrong