
Data Governance, Privacy, and Model Integrity: The Control Foundation of AI Governance

Artificial intelligence may look like a technology story on the surface, but beneath that surface lies a governance reality every board and Chief Compliance Officer must confront. AI systems are only as sound as the data that feeds them, the controls that govern them, and the integrity of the outputs they generate. When data governance is weak, privacy obligations are poorly managed, or model integrity is assumed rather than tested, AI risk can move quickly from a technical flaw to enterprise exposure.

In the prior blog posts in this series, I examined the foundational questions of AI governance: board oversight and accountability, and the danger of strategy outrunning governance. Today, I want to turn to a third issue that sits at the core of every credible AI governance program: data governance, privacy, and model integrity.

This is where the AI conversation often moves from excitement to discipline. Companies may be eager to deploy tools, automate functions, and improve decision-making. But none of that matters if the underlying data is flawed, sensitive information is mishandled, or the model produces outputs that are unreliable, biased, or impossible to explain in context. The more powerful the technology, the more important the governance framework beneath it.

For boards and CCOs, this is not simply a technical control matter. It is a governance matter because failures in data integrity, privacy management, and model performance can have legal, regulatory, reputational, financial, and cultural consequences simultaneously.

AI Governance Begins with the Data

There is an old saying in technology: garbage in, garbage out. In the AI era, that phrase remains true, but it is no longer sufficient. In corporate governance terms, the problem is not merely bad data. It is unknown, unauthorized, untraceable, biased, stale, overexposed, or used in ways the organization never properly approved. That is why data governance is the control foundation of AI governance.

Every AI use case depends on inputs. Those inputs may include structured internal data, public information, personal data, third-party data, proprietary records, historical documents, transactional records, prompts, or user interactions. If management does not understand where that data comes from, who has rights over it, whether it is accurate, how it is classified, and whether it is appropriate for the intended purpose, then the company is not governing AI. It is merely using it.

For compliance professionals, this point should feel familiar. Data governance is not new. What is new is the speed and scale at which AI can amplify data weaknesses. A spreadsheet error may affect one report. A flawed AI input may affect thousands of interactions, recommendations, or decisions before anyone notices.

Why Boards Should Care About Data Lineage

Boards do not need to become technical experts in model training or data architecture. But they do need to ask whether management understands the provenance and reliability of the information flowing into critical AI systems.

At a governance level, this is a question of data lineage. Can the company trace the source of the data, how it was curated, whether it was changed, and whether it was approved for the intended use? If a customer, regulator, employee, or auditor asks why the system reached a particular result, can management explain not only the output, but the data conditions that shaped it?
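To make those lineage questions concrete, here is a minimal sketch of the kind of record that could answer them. The field names and structure are illustrative assumptions for this post, not a prescribed standard or any particular vendor's schema.

```python
# Hypothetical sketch of a data lineage record for an AI use case.
# Field names and structure are illustrative, not a prescribed standard.
from dataclasses import dataclass, field


@dataclass
class DataLineageRecord:
    dataset_name: str
    source: str                       # e.g., internal system, vendor feed, public corpus
    rights_basis: str                 # contract, license, consent, or other authority
    curation_steps: list[str] = field(default_factory=list)   # cleaning, filtering, labeling
    approved_uses: list[str] = field(default_factory=list)    # uses signed off by governance
    last_reviewed: str = ""           # date of the most recent accuracy/fitness review

    def approved_for(self, intended_use: str) -> bool:
        """Answer the board-level question: was this data approved for this purpose?"""
        return intended_use in self.approved_uses


# Example: a record a reviewer could consult when asked why a system reached a result.
record = DataLineageRecord(
    dataset_name="claims_history_2020_2024",
    source="internal claims platform",
    rights_basis="internal data, policy-approved",
    curation_steps=["deduplicated", "removed test accounts", "labeled outcomes"],
    approved_uses=["claims triage model"],
    last_reviewed="2024-05-01",
)
print(record.approved_for("claims triage model"))     # True
print(record.approved_for("marketing segmentation"))  # False: not an approved use
```

Even a simple record like this turns "can we trace the data?" from an assertion into something that can be inspected and challenged.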

A board that does not ask these questions risks receiving polished dashboards and impressive demonstrations while missing the underlying weaknesses. AI systems can sound authoritative even when they are wrong. That is part of what makes governance here so essential. Confidence is not the same as integrity.

This is also where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) offers a helpful mindset. The ECCP pushes companies to think in terms of operational reality. Do policies work in practice? Are controls tested? Is the company learning from what goes wrong? The same discipline applies here. A company should not assume its data environment is fit for AI simply because it has data available. It should test, verify, document, and challenge that assumption.

Privacy Is Not an Adjacent Issue

Too many organizations still treat privacy as adjacent to AI governance rather than central to it. That is a mistake. AI systems often rely on data sets that include personal information, employee information, customer records, usage patterns, communications, or behavior-based inputs. Even when a company believes it has de-identified or anonymized data, there may still be re-identification risks, overcollection concerns, retention issues, or use limitations tied to law, contract, or internal policy.

For the board and the CCO, privacy should not be discussed as a compliance side note. It should be part of the approval and governance architecture from the outset. Before an AI use case is deployed, management should understand what personal data is involved, whether its use is permitted, what notices or disclosures apply, what access restrictions are required, how the data will be retained, and whether any vendor relationships create additional privacy exposure.
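As an illustration only, the pre-deployment questions in the paragraph above could be captured in a simple intake check that flags open items before a use case is approved. The question names and structure below are assumptions made for this sketch, not a statement of any legal requirement.

```python
# Illustrative privacy intake check for an AI use case before deployment.
# The questions mirror the paragraph above; field names are assumptions.

REQUIRED_ANSWERS = [
    "personal_data_identified",      # what personal data is involved
    "use_permitted",                 # is the use permitted by law, contract, and policy
    "notices_reviewed",              # what notices or disclosures apply
    "access_restrictions_defined",   # who may access the data and outputs
    "retention_schedule_set",        # how long the data will be kept
    "vendor_exposure_assessed",      # do vendor relationships add privacy exposure
]


def privacy_gaps(intake: dict[str, bool]) -> list[str]:
    """Return the intake questions that are unanswered or answered 'no'."""
    return [q for q in REQUIRED_ANSWERS if not intake.get(q, False)]


# Example: an intake form with two open items would be held back for review.
intake = {
    "personal_data_identified": True,
    "use_permitted": True,
    "notices_reviewed": False,
    "access_restrictions_defined": True,
    "retention_schedule_set": True,
}
print(privacy_gaps(intake))  # ['notices_reviewed', 'vendor_exposure_assessed']
```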

This is particularly important in generative AI environments, where employees may paste confidential, proprietary, or personal information into tools without fully appreciating the consequences. A privacy incident in the AI context may not begin with malicious intent. It may begin with convenience. That is why governance must focus not only on policy, but on system design, training, and usage constraints.

The CCO has a critical role here because privacy governance often intersects with policy management, employee conduct, training, investigations, and disciplinary response. If privacy is left solely to specialists without integration into the broader governance process, the organization risks building fragmented controls that do not hold together under pressure.

Model Integrity Is a Governance Question

Model integrity sounds like a technical term, but it is really a governance concept. It asks whether the system is performing in a manner consistent with its intended purpose, risk classification, and control expectations.

That means asking hard questions. Is the model accurate enough for the use case? Has it been validated before deployment? Are there known limitations? Does it perform differently across populations or scenarios? Can outputs be reviewed in a meaningful way by human decision-makers? Are there conditions under which the model should not be used? These are not engineering questions alone. They are governance questions because they determine whether management is relying on the system responsibly.

This is where NIST’s AI Risk Management Framework is especially valuable. NIST emphasizes that organizations should map, measure, and manage AI risks, including those related to validity, reliability, safety, security, resilience, explainability, and fairness. It is not enough to say that a tool works most of the time. The organization must understand where it may fail, how failure will be detected, and what safeguards are in place when it does.
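A hedged sketch of what the measure-and-manage discipline might look like in practice appears below: a pre-deployment gate that withholds approval when measured performance misses agreed thresholds, either overall or for any subgroup. The metrics, thresholds, and names are illustrative assumptions, not values drawn from the NIST framework itself.

```python
# Illustrative pre-deployment gate: approve a model only if measured performance
# meets agreed thresholds overall and for every subgroup. Metrics and thresholds
# are assumptions for this sketch, not values taken from the NIST AI RMF.


def validation_gate(
    overall_accuracy: float,
    subgroup_accuracy: dict[str, float],
    min_overall: float = 0.90,
    max_subgroup_gap: float = 0.05,
) -> tuple[bool, list[str]]:
    """Return (approved, reasons) so the decision and its rationale can be documented."""
    reasons = []
    if overall_accuracy < min_overall:
        reasons.append(f"overall accuracy {overall_accuracy:.2f} below {min_overall:.2f}")
    for group, acc in subgroup_accuracy.items():
        if overall_accuracy - acc > max_subgroup_gap:
            reasons.append(f"subgroup '{group}' trails overall by {overall_accuracy - acc:.2f}")
    return (not reasons, reasons)


# Example: the model clears the overall bar but fails for one population,
# which is exactly the kind of condition a governance review should surface.
approved, reasons = validation_gate(0.93, {"region_a": 0.92, "region_b": 0.85})
print(approved)  # False
print(reasons)   # ["subgroup 'region_b' trails overall by 0.08"]
```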

ISO/IEC 42001 reinforces the same discipline through the lens of management systems. It requires structured attention to risk identification, control design, monitoring, documentation, and continual improvement. In other words, it treats model integrity not as a technical aspiration, but as an organizational responsibility. For boards, the takeaway is direct: if management cannot explain how model integrity is validated and maintained, then the board does not yet have assurance that AI is being governed effectively.

Third Parties Increase the Stakes

One of the more dangerous assumptions in AI governance is that outsourcing technology also outsources risk. It does not. Many organizations will deploy AI through third-party vendors, embedded tools, software platforms, or external service providers. That may be practical, even necessary. But it also means the company may be relying on data practices, training methods, model assumptions, or privacy safeguards it did not design and cannot fully see.

That is why data governance, privacy, and model integrity must extend to third-party risk management. Procurement cannot focus solely on functionality and price. Legal cannot focus solely on contract form. Compliance, privacy, security, and risk all need to understand what the vendor is doing, what data is being used, what rights the company has to inspect or question performance, and what happens when the vendor changes the model or its underlying terms.

This is not simply good vendor management. It is a governance necessity. A company remains accountable for business decisions made using third-party AI tools, especially when those tools affect customers, employees, compliance obligations, or regulated activities.

Documentation Is What Makes Governance Real

As with every major governance issue, documentation is what turns theory into evidence. If a company is serious about data governance, privacy, and model integrity, it should have records that show it. Those records may include data inventories, data classification standards, model validation summaries, privacy assessments, vendor due diligence files, testing results, approved use cases, control requirements, escalation logs, and remediation actions. Without this documentation, governance becomes anecdotal. With it, governance becomes reviewable, auditable, and improvable.
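By way of illustration, such records become easiest to review when each one is captured in a consistent, dated form tied to a specific use case. The sketch below assumes a simple JSON log entry; that format and these field names are illustrative choices, not a required schema.

```python
# Hypothetical governance log entry: each record ties an AI use case to a dated,
# reviewable artifact (validation result, privacy assessment, escalation, remediation).
import json
from datetime import date


def log_entry(use_case: str, record_type: str, summary: str, owner: str) -> str:
    """Serialize one governance record so it can be stored, reviewed, and audited."""
    return json.dumps({
        "date": date.today().isoformat(),
        "use_case": use_case,
        "record_type": record_type,   # e.g., model_validation, privacy_assessment, escalation
        "summary": summary,
        "owner": owner,
    })


# Example: documenting a remediation action after an anomaly was investigated.
print(log_entry(
    use_case="claims triage model",
    record_type="remediation",
    summary="Retrained on corrected labels after drift detected in Q2 monitoring.",
    owner="model risk committee",
))
```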

This is another place where the ECCP mindset is so useful. Prosecutors and regulators tend to ask the same core question in different ways: how do you know your program works? In the AI context, the answer cannot be “our vendor told us so” or “the business says the tool is helpful.” It must be grounded in evidence, testing, and management discipline.

What Boards and CCOs Should Be Pressing For

Boards should expect management to present AI use cases with enough clarity to answer four questions. What data is being used? What privacy implications attach to that use? How has model integrity been tested? What controls will remain in place after deployment?

CCOs should press equally hard from the management side. Is there a documented data governance process for AI? Are privacy reviews built into the intake and approval process? Are models validated according to risk? Are third-party tools subject to diligence and contract controls? Are incidents and anomalies logged and investigated? Are employees trained not to expose confidential or personal information through improper use? These are not burdensome questions. They are the practical questions that separate governed AI from hopeful AI.

Governance Requires Trustworthy Inputs and Defensible Outputs

In the end, AI governance depends on a simple but demanding truth: the organization must be able to trust what goes into the system and defend what comes out of it.

If the data is poorly governed, privacy rights are handled casually, or model integrity is assumed rather than demonstrated, then no amount of strategic enthusiasm will make the program safe. Boards will not have real oversight. CCOs will not have a defensible control environment. The company will merely have a faster way to create risk.

That is why data governance, privacy, and model integrity are not support issues in AI governance. They are central issues. They determine whether the enterprise is using AI with discipline or simply hoping for the best.

In the next article in this series, I will turn to the fourth governance challenge: ongoing monitoring, where many organizations discover that approving an AI use case is far easier than governing it after it goes live.


Life With GDPR – Endpoint Security and Data Protection: Uncovering the Hidden Compliance Risks in Printer Security with Jim LaRoe

Jonathan Armstrong remains on assignment. Today, Tom Fox visits with fellow Texan Jim LaRoe, CEO of Symphion, to discuss data privacy, data protection, and compliance related to printer security in one of the most interesting podcasts Tom has done in some time.

Jim provides insight into how 20-30% of network endpoints are printers, and alarmingly, 99% of these are unprotected. Printers, despite being integral to business functions, are typically left vulnerable, making them prime targets for sophisticated phishing and cyber-attacks. Jim shares his journey from a trial lawyer to founding Symphion in 1999 and explains Symphion’s groundbreaking work in developing comprehensive security software for printers. Jim highlights the importance of a culture of compliance in managing endpoint security and the multifaceted challenges that come with securing printers.  He emphasizes the collaborative effort needed among GRC compliance teams, IT, and supply chain departments to manage printer security effectively, and offers actionable steps for businesses to mitigate these risks.

Key takeaways:

  • The Hidden Risk of Printers
  • Understanding Endpoint Security
  • Challenges in Printer Security
  • Risk Management Strategies
  • Supply Chain Vulnerabilities

Resources:

Connect with Tom Fox

Connect with Jim LaRoe

Connect with Symphion

The award-winning Life with GDPR was recently honored as a Top Data Security Podcast. This was a sponsored podcast.


FCPA Compliance Report – Ethical Challenges in AI, Data Protection, and Sports with André Paris

Welcome to the award-winning FCPA Compliance Report, the longest-running podcast in compliance. Today, Tom Fox welcomes back André Paris for an insightful discussion on various ethical challenges in today’s world. André revisits his role in compliance and ethics, provides updates on his work since the pandemic, and delves into the issues of algorithmic bias, transparency, and the ethical ramifications of AI systems, particularly in surveillance and privacy. André also shares his experience as a PhD candidate researching AI’s impact on civil liberties. The episode further explores the ethical challenges in the sports industry, including corruption, doping, and harassment. Lastly, André talks about his book, ETHICS & TRANSPARENCY: A Path To Compliance, available on Amazon, and its practical applications in fostering an ethical corporate culture.

Key highlights include:

  • André’s Role in Compliance and Ethics
  • Ethics and Transparency: André’s Book
  • The Rise of AI and Ethical Challenges
  • AI in Business and Research Applications
  • Data Protection as a Civil Liberty
  • Ethical Challenges in Sports

Resources:

André Paris on LinkedIn

ETHICS & TRANSPARENCY: A Path To Compliance on Amazon

André Paris Website

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

For more information on the use of AI in compliance programs, see my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.


Innovation in Compliance: The Critical Importance of Mobile Application Security: Insights from Subho Halder

Innovation comes in many areas, and compliance professionals need to not only be ready for it but also embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox visits Subho Halder, the CEO & Co-Founder of Appknox, to discuss the often-overlooked yet crucial topic of mobile application security in the corporate compliance world.

Halder shares his extensive background in mobile app security, including developing the first mobile malware and presenting at prestigious conferences like Black Hat and DEF CON. The conversation covers the evolving market need for specialized mobile app security tools, the unique challenges faced by mobile applications compared to web applications, and the critical importance of integrating security early in the development lifecycle—a concept known as the ‘left shift’ approach. Halder also explores AI-powered cyberattacks and how Appknox is utilizing AI to develop defensive strategies. The discussion highlights regulatory blind spots in the US regarding mobile security, the challenges of managing mobile app security in large multinational corporations, and best practices for ensuring robust mobile app security.

Key highlights:

  • Market Need and Opportunity for Appknox
  • Appknox Security Assessment of Perplexity’s Android App
  • Regulatory Blind Spots in US Cybersecurity Frameworks
  • Engaging with Large Multinational Companies
  • AI-Powered Cyber Attacks and Defensive Strategies
  • Importance of the Left Shift Approach in Mobile App Security

Resources:

Subho Halder on LinkedIn

Appknox

Appknox Resources Page

Appknox Blog: Is Perplexity AI Safe to Use? Security Flaws in the Android App

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn


FCPA Compliance Report: Jonathan Armstrong on Sweeping Changes in The UK Government: Insights on Compliance

Welcome to the award-winning FCPA Compliance Report, the longest-running podcast in compliance. In this edition of the FCPA Compliance Report, Tom Fox welcomes Jonathan Armstrong to discuss the seismic shift in the UK’s political landscape following the election last week.

The election was one for the ages, delivering a significant Labour victory over the Conservatives. They delve into the implications for compliance and governance in the UK and globally. Topics include the new government’s proactive approach, anticipated shifts in bribery enforcement, and fiscal policies.

They also explore potential changes in AI regulation, employment law, data protection, and international relations, especially concerning Russia and China. The conversation highlights Labour’s balanced strategy, aiming for sensible, centrist policies while addressing key issues like corruption, AI, and data privacy.

Highlights in this Episode:

  • An election result for the ages
  • Impact on Bribery and Corruption Enforcement
  • Trade Sanctions, Russian Oligarchs, and Forced Labor
  • AI and Beyond
  • Data Privacy and Data Protection
  • Labor and Employment Rights

Resources:

Jonathan Armstrong on LinkedIn

UK General Election 2024 – What Might This Mean for Compliance?

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

For more information on the Ethico ROI Calculator and a free White Paper on the ROI of Compliance, click here.


FCPA Compliance Report: DOJ on AI and Data/Intellectual Property Protection

Welcome to the award-winning FCPA Compliance Report, the longest-running podcast in compliance. In this special edition, Tom welcomes Jessica Nall, a partner at Baker McKenzie who leads the firm’s West Coast investigations and compliance practice, and Maria Piontkovska, a Senior Associate in the same practice group.

We take a deep dive into their article on the recent speeches by Department of Justice representatives at the ABA White Collar Conference, addressing the new DOJ whistleblower program, AI, data protection, and intellectual property protection.

Jessica Nall and Maria Piontkovska are prominent legal professionals specializing in white-collar defense and corporate investigations. Jessica, a seasoned attorney with over 20 years of experience, leads Baker McKenzie’s white-collar practice in California, and Maria is a skilled US white-collar attorney originally from Ukraine.

Both regard the ABA White Collar Conference as an essential platform for the defense bar, government investigators, and compliance leaders to gather for discussions and networking. Nall sees the conference as vital for disseminating new compliance expectations and enforcement trends announced by government officials. At the same time, Piontkovska highlights the importance of the direct line of communication with these officials, providing insights straight from the source.

Their perspectives on the conference are shaped by their extensive experiences in the field and drive their contributions to the discussions and policies related to white-collar defense and compliance.

Topics Covered in This Episode:

  • Key Figures Discussing Trends in Compliance
  • Corporate Transparency Incentive Initiative
  • Financial Incentives for Anti-Corruption Self-Disclosure
  • Navigating Risks: AI in Corporate Compliance
  • Data Mapping for International Data Security

Resources:

Jessica Nall on LinkedIn

Maria Piontkovska on LinkedIn

Compliance Steps After ABA White Collar Crime Conference

United States: Department of Justice announces new corporate compliance directives for AI along with increased penalties for AI-related misconduct

Baker McKenzie

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn


For more information on the Ethico ROI Calculator and a free White Paper on the ROI of Compliance, click here.


Regulatory Ramblings: Episode 42 – The Intersection of Digital Assets and Data Protection with Jonathan Crompton

Jonathan Crompton is a partner at the law firm Reynolds Porter Chamberlain (RPC), based in Hong Kong. There, he helps companies and individuals navigate complex cross-border disputes and investigations involving their Asian operations. He specializes in commercial matters (particularly for the retail industry), financial services, technology-related disputes, and cyber incidents.

As the lead for RPC’s ‘ReSecure’ cyber incident response service in Asia, he advises local and multinational clients on cyber-attacks, data privacy, and law enforcement investigations. He also helps clients worldwide recover money transferred to Hong Kong bank accounts as a result of cyber and other frauds.

Jonathan advises on all forms of disputes, including litigation before national courts and arbitral tribunals operating under various rules (in particular, the HKIAC, ICC, and UNCITRAL) and on investigations by regulators (notably financial services regulators such as the Securities and Futures Commission). His clients include senior individuals, asset managers, and leading multinational corporations and brands. As a result of RPC’s predominantly ‘conflict-free’ model for financial services disputes, Jonathan represents senior individuals and companies in claims brought by or against leading banks where other firms are often unable to act.

He is also a founding member of the Hong Kong chapter of the Crypto Fraud and Asset Recovery (CFAAR) network, the first global association for such professionals. The London chapter was launched in 2021, and the Hong Kong chapter was formed in August 2022.

In this episode of Regulatory Ramblings, Jonathan chats with host Ajay Shamdasani about his background, upbringing, and how he ended up in the legal profession. The bulk of the conversation, however, is devoted to data protection and digital assets, specifically the February raid of the offices of WorldCoin by Hong Kong’s Office of the Privacy Commissioner for Personal Data (PCPD). They discuss the PCPD’s expression of concern about WorldCoin’s collection and storage of iris scans in exchange for its WorldCoin token (WLD).

As Jonathan points out, the case was a clear example of the increasing intersection of personal data protection principles and digital assets. The conversation also covers his recent LinkedIn post, in which he stated that Privacy Commissioner Ada Chung’s action was further proof that she is willing to flex her existing powers even before the anticipated amendments to the territory’s Personal Data (Privacy) Ordinance, which are expected to be enacted within the next year.

They also discuss the shape Jonathan envisages those amendments taking, the recent cases he has seen in his practice involving virtual assets, digital contracts, and cybersecurity, and related emerging methodologies, trends, and themes.

Podcast Discussion:

  • 3:01  Journey from Military Roots to Legal Frontiers
  • 11:00  Perspectives on Legal Specialization in the Virtual Asset Sphere
  • 20:52  Understanding Cryptocurrency Fraud and Legal Challenges in Recovery
  • 29:16  Assessing the Efficacy of Asset Tracing Rules in Cryptocurrency Fraud Cases
  • 38:12  Money Mules, Cybercrime, and the Evolution of Financial Fraud
  • 42:48  Complexities of Cybercrime and Deepfake Deception in Financial Fraud
  • 45:29  Insights into Crypto Regulation and Risk Management from CFAAR
  • 59:34  Intersection of Personal Data and Digital Assets: Insights from WorldCoin and NFTs
  • 1:05:52  Personal Data Privacy: Insights into Legislative Amendments and Regulatory Enforcement in Hong Kong
  • 1:17:01  Adapting Legal Careers to Emerging Technologies, Change and Uncertainty

Connect with RR Podcast at:

LinkedIn: https://hk.linkedin.com/company/hkufintech 
Facebook: https://www.facebook.com/hkufintech.fb/
Instagram: https://www.instagram.com/hkufintech/ 
Twitter: https://twitter.com/HKUFinTech 
Threads: https://www.threads.net/@hkufintech
Website: https://www.hkufintech.com/regulatoryramblings 

Connect with the Compliance Podcast Network at:

LinkedIn: https://www.linkedin.com/company/compliance-podcast-network/
Facebook: https://www.facebook.com/compliancepodcastnetwork/
YouTube: https://www.youtube.com/@CompliancePodcastNetwork
Twitter: https://twitter.com/tfoxlaw
Instagram: https://www.instagram.com/voiceofcompliance/
Website: https://compliancepodcastnetwork.net/


Innovation in Compliance – Igor Volovich on Moving Towards Data-Driven, Risk-Based Compliance

Innovation comes in many areas, and compliance professionals need to not only be ready for it but also embrace it. One of those areas is data-driven, risk-based compliance. My guest in this episode is Igor Volovich, the Vice President of Compliance Strategy at Qmulos. This podcast is sponsored by Qmulos.

Igor Volovich brings a unique perspective to the table regarding the importance of executive accountability and proactive risk governance in cybersecurity. Volovich emphasizes the crucial role that executives play in ensuring compliance, controls, and security posture decisions, and criticizes the current model of firing and hiring Chief Information Security Officers as ineffective. He believes that risk governance should be a holistic business function, rather than separate departments handling different types of risks, and encourages boards of directors to question and challenge reports on compliance and risk posture. Drawing from his extensive experience and deep understanding of the field, Volovich advocates for a real-time convergence of compliance, security, and risk management. Join Tom Fox and Igor Volovich on this episode of the Innovation in Compliance podcast to delve deeper into these insights.

Key Highlights:

  • Maintaining Compliance Integrity through Executive Accountability
  • Misrepresentation of Compliance at Penn State
  • Moving Towards Data-Driven, Risk-Based Compliance
  • Data-Driven Risk Management for True Compliance
  • Incentivized Whistleblowing and Cybersecurity Accountability
  • Elevating Risk Governance for Effective Cybersecurity
  • Real-Time Compliance and Data-Driven Automation

Resources:

Igor Volovich on LinkedIn

Qmulos


Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn


Life With GDPR: WhatsApp Breach: Hospital’s GDPR Failures Exposed

Tom Fox and Jonathan Armstrong, renowned experts in cyber security, co-host the award-winning Life with GDPR. The recent controversy surrounding Nigel Farage’s banking situation highlights the risks and compliance challenges faced by the banking industry in relation to data protection. In this episode, Tom and Jonathan discuss a data breach in a Scottish hospital during the COVID-19 pandemic.

The breach occurred when hospital staff shared patient details on WhatsApp, raising concerns about GDPR compliance. The hospital informed the ICO about the breach but chose not to notify affected patients, highlighting the need for appropriate advice and support when making such decisions. The conversation also explores communication challenges in internal investigations and the privacy and security risks of platforms like WhatsApp. It emphasizes the importance of organizations adapting to the preferences of digital native employees and conducting data protection impact assessments. The podcast also highlights the importance of effective policies, training, and proactive phishing training to prevent cyber-attacks and protect sensitive information.


Key Takeaways:

  • Data breach in Scottish hospital
  • The Challenges of Communication in Internal Investigations
  • Importance of Policies and Training
  • Phishing Training Effectiveness

Resources

For more information on the issues raised in this podcast, check out the Cordery Compliance News Section. For more information on Cordery Compliance, go to their website here. Also, check out the GDPR Navigator, one of the top resources for GDPR Compliance, by clicking here.

Connect with Tom Fox

Connect with Jonathan Armstrong


The Importance of Effective Policies and Training in Data Protection: Lessons from a Scottish Hospital Breach

I recently had the chance to visit with Jonathan Armstrong about a data breach that occurred at the health service provider NHS Lanarkshire (Scotland) during the COVID-19 pandemic. This breach serves as a stark reminder of the challenges organizations face in maintaining data protection and compliance, especially when it comes to communication platforms like WhatsApp. In this blog post, we will explore the lessons learned from this incident and discuss practical advice for organizations to ensure robust data protection measures.

Background

According to the Cordery Compliance Client Alert on the matter, over a two-year period between 2020 and 2022, 26 staff at NHS Lanarkshire had access to a WhatsApp group containing a minimum of 533 entries that included patient names. Of those entries, 215 included phone numbers, 96 included dates of birth, and 28 included addresses. Also shared were 15 images, 3 videos, and 4 screenshots that included personal data of patients and clinical information, which is “special category” health data under both EU and UK law. Other data was added to the WhatsApp group in error, and further communications in which the staff in question had used WhatsApp were also identified.

WhatsApp was not approved by NHS Lanarkshire for processing patients’ personal data. Its use was an approach apparently adopted by the staff without organizational knowledge, as a substitute for communications that would have taken place in the clinical office but did not do so after staff reduced office attendance due to the COVID-19 pandemic. No Data Protection Impact Assessment was in place, and no risk assessment relating to personal data processing had been completed for WhatsApp. NHS Lanarkshire undertook an internal investigation and reported the matter to the ICO.

ICO Holding

The UK ICO determined that NHS Lanarkshire did not have appropriate policies, clear guidance, and processes in place when WhatsApp was made available to download. Additionally, there were a number of infringements of the UK GDPR, not least the failure to implement appropriate technical and organizational measures (TOMs) to ensure the security of the personal data involved, as a consequence of which personal data was shared by unauthorized means and an inappropriate disclosure occurred. There was also a failure to report the matter to the ICO as a data breach in time.

Armstrong noted that the ICO recommended that NHS Lanarkshire take action to ensure its compliance with data protection law, including:

  1. Considering implementing a secure clinical image transfer system, as part of NHS Lanarkshire’s exploration regarding the storage of images and videos within a care setting;
  2. Before deploying new apps, considering the risks relating to personal data and including a requirement to assess and mitigate those risks in any approval process;
  3. Ensuring that explicit communications, instructions or guidance are issued to employees on their data protection responsibilities when new apps are deployed;
  4. Reviewing all organizational policies and procedures relevant to this matter and amending them where appropriate; and,
  5. Ensuring that all staff are aware of their responsibilities to report personal data breaches internally without delay to the relevant team.

Armstrong concluded that “In light of the remedial steps and mitigating factors the ICO issued an official reprimand – a fine has not yet been imposed. The ICO also asked NHS Lanarkshire to provide an update of actions taken within six months of the reprimand being issued.”

Discussion

This case highlights the challenges organizations face when it comes to communication during internal investigations. In many instances, the most interesting documents are not found in emails, as one organization discovered. Employees often turn to alternative platforms like WhatsApp to avoid leaving a paper trail. However, it is crucial to understand that these platforms may not provide the expected privacy and security.

While platforms like WhatsApp may seem secure, they still share data with big tech companies, raising concerns about privacy. Organizations must adapt to the preferences of digital-native employees who may find email restrictive and opt for alternative communication methods. However, this adaptation should be done consciously, ensuring that policies and procedures are in place to protect sensitive information. Armstrong emphasizes the importance of revisiting emergency measures implemented during the pandemic. As remote work continues, organizations must conduct thorough data protection impact assessments to ensure compliance across all communication platforms and measures.

As with all types of compliance, setting policies and procedures is just the first step. It is essential to communicate and educate employees on these policies to ensure their understanding and compliance. Annual online training sessions are not enough; organizations should provide engaging training that goes beyond passive learning. In addition to targeted and effective training, there must be ongoing communication with employees. Armstrong also commented on the ineffectiveness of off-the-shelf online phishing training. Waiting for an incident to occur and then providing training is not enough to prevent people from clicking on malicious links. Organizations should focus on providing better training before incidents happen, rather than trying to enhance training afterward.

The next step is monitoring: compliance with policies and procedures should be actively tracked. Technical solutions are available to help companies track compliance, but it is crucial to involve individuals at all levels of the organization when designing these policies. Additionally, a balanced approach is needed, one in which employees are recognized for their service but also held accountable for policy breaches. The days of relying solely on punishment for enforcement are gone.

The data breach in the Scottish hospital serves as a wake-up call for organizations to prioritize data protection and compliance. Communication challenges during internal investigations, privacy concerns associated with alternative platforms, and the need for effective policies and training are crucial areas to address. By conducting regular data protection impact assessments, providing engaging training, and ensuring buy-in from employees, organizations can strengthen their defense against cyber threats and protect sensitive information. Always remember that compliance is an ongoing process, and continuous evaluation and improvement are necessary to adapt to the evolving digital landscape. Finally, stay vigilant and proactive in safeguarding data privacy and protection.