Categories
Great Women in Compliance

Great Women in Compliance: Privacy and AI Compliance – A Principled Approach

In this episode of the Great Women in Compliance podcast, Hemma and Ellen host a roundtable with Hope Anderson, a partner in White & Case’s Data, Privacy & Cybersecurity Practice, and Jean Liu, Assistant General Counsel, Privacy, Safety, and Regulatory Affairs who joined Microsoft in 2023 as part of the Nuance Communications, Inc. acquisition.

Hope and Jean have a wealth of experience advising on privacy, AI, and data governance compliance issues, and they are well positioned to put that experience to work amid a rapidly evolving regulatory landscape. Hemma and Ellen didn’t waste a minute mining these two experts for practical tips and recommendations for those of us looking to get smart quickly, grapple with the seemingly behemoth task of keeping up with developments in technology and legislation, and, at the same time, make sure we don’t get left behind in learning to leverage AI in our own functions.

Join us for an engaging ride through the ups and downs of privacy and AI compliance, and be inspired as we were by the great opportunities to develop new and exciting use cases while mitigating risk and the chance to unlock the power of responsible and ethical AI for our businesses.

Key Highlights:

  • Getting up to speed with the rapidly evolving regulatory landscape

  • The role of AI principles vs policies and procedures

  • Human Rights, Bias, and AI

  • Keeping the “Human in the Loop”

  • Thoughts on a US Federal AI or Privacy Law

  • Leveraging AI for Ethics and Compliance

  • Key resources and recommendations

Resources:

Join the Great Women in Compliance community on LinkedIn here.

Guest Bios:

Hope Anderson is a partner in White & Case’s Data, Privacy & Cybersecurity Practice, based in Los Angeles. She has extensive experience advising on all aspects of privacy and is at the forefront of Generative AI, advising on the technology’s legal implications and practical applications. A member of the Firm’s Global Technology Industry Group, Hope has extensive experience in privacy and product counseling. She advises on e-commerce, privacy by design, Generative AI, AR/VR, biometrics, analytics, and issues implicating consumer protection, marketing, and advertising laws.

Jean C. Liu is an Assistant General Counsel in the Privacy, Safety, and Regulatory Affairs division and joined Microsoft in 2023 as part of the Nuance Communications, Inc. acquisition. Immediately before its acquisition, Jean served as Nuance’s Vice President and Chief Legal, Compliance, and Privacy Officer, leading the global legal, compliance, and privacy functions. She developed and implemented data privacy policies and practices to ensure that customer and business data, including protected health information, is strictly governed and privacy is maintained. Jean has over 29 years of experience leading compliance and privacy programs, successfully managing data incidents, including regulatory investigations, and implementing best governance and risk management practices across multiple industries.

Categories
The Hill Country Podcast

The Hill Country Podcast: The Entrepreneurial Spirit in Kerrville – Wynita Walther’s Transportation Service

Welcome to the award-winning Hill Country Podcast. The Texas Hill Country is one of the most beautiful places on earth.

In this podcast, Hill Country resident Tom Fox visits with the people and organizations that make this the most unique area of Texas. This week, Tom welcomes back Wynita Walther to discuss her thriving transportation business in Kerrville and Kerr County.

Wynita shares how she identified a need for a stylish and efficient transportation service, especially in a growing community without Uber or similar options. They delve into the business’s evolution, market adaptation, and the importance of a reliable transportation service for both locals and frequent travelers.

The discussion highlights Wynita’s grassroots marketing strategy, her plans for expansion, and the broader entrepreneurial opportunities available in Kerrville. Tom and Wynita also emphasize the support system and lifestyle benefits of starting a business in this vibrant micropolis.

Key Highlights:

  • Identifying the Need for a Transportation Service
  • Launching and Growing the Business
  • Marketing Strategies and Community Engagement
  • Opportunities for Young Entrepreneurs in Kerrville
  • Future Plans

Resources:

Wynita Walther on Facebook

Away Car Service

Other Hill Country Focused Podcasts

Hill Country Authors Podcast

Hill Country Artists Podcast

Texas Hill Country Podcast Network

Categories
Compliance Tip of the Day

Compliance Tip of the Day: AI Powered Internal Controls

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements.

Whether you’re a seasoned compliance professional or just starting your journey, our aim is to provide you with bite-sized, actionable tips to help you stay on top of your compliance game.

Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law.

Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

In today’s episode, we begin a weeklong look at some of the ways Generative AI is changing compliance and risk management. Today we look at how to set up AI-powered internal controls from a compliance perspective.

 

For more information on the Ethico ROI Calculator and a free White Paper on the ROI of Compliance, click here.

Categories
Daily Compliance News

Daily Compliance News: June 12, 2024 – The Russian Timber Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee and listen to the Daily Compliance News. All from the Compliance Podcast Network.

Each day, we consider four stories from the business world: compliance, ethics, risk management, leadership, or general interest for the compliance professional.

In today’s edition of Daily Compliance News:

  • Russian timber and export control. (WSJ)
  • What happens when the Rule of Law dies out? (FT)
  • Uribe says Menendez was ‘all in’ on bribery and corruption. (WaPo)
  • U.A.W. monitor investigates accusations against union leader. (NYT)

For more information on the Ethico ROI Calculator and a free White Paper on the ROI of Compliance, click here.

Categories
Blog

AI in Compliance Week: Part 3 – Embracing AI-Powered Internal Controls

Integrating artificial intelligence (AI) into internal controls is pivotal in the ever-evolving corporate governance landscape. We have closely followed the discussion around this emerging trend and the insights from industry experts like Jonathan Marks. In Part 3 of my five-part blog post series, I will explore the key considerations and best practices for leveraging AI to enhance an organization’s internal control framework.

Let’s start with the basics: ‘What are internal controls?’ The best answer I have ever heard is still provided by Jonathan Marks, who says, “Internal controls are the mechanisms, rules, and procedures implemented by an organization to ensure the integrity of financial and accounting information, promote accountability, and prevent fraud. They encompass the entire control environment, including the attitude, awareness, and actions of management and others concerning the internal control system and its importance to the entity.”

Consider that the foundation of any successful AI application lies in the quality and accessibility of data. Organizations must ensure that the data feeding into their AI systems is accurate, comprehensive, and the definitive “source of truth.” Failure to address data quality issues can lead to incorrect outputs that undermine the effectiveness of specific control mechanisms. Establishing robust data management practices, including data governance and integration, is crucial for unlocking the full potential of AI-powered internal controls.

Effective implementation of AI-driven internal controls requires a skilled workforce. Companies must invest in developing internal capabilities to handle these advanced tools and accurately analyze the results. This may involve training existing employees, hiring specialized talent, and fostering a culture of continuous learning. Understanding the nuances of machine learning, natural language processing, and other AI techniques is essential for internal teams to leverage these technologies successfully. For the compliance professional, it may mean adding expertise or partnering with internal audit or your internal controls team to garner the talent needed to move to AI-powered internal controls.

The integration of AI into internal controls raises important ethical considerations. Acknowledging and addressing the inherent biases that can exist within specific AI algorithms is imperative. By creating AI systems that are open, fair, and responsible, organizations can preserve stakeholder trust and uphold their ethical norms. Incorporating ethical principles and bias mitigation strategies into designing and deploying AI-powered internal controls is critical.

Successful implementation of AI-driven internal controls often requires close collaboration with technology providers. Companies and compliance professionals should seek out respected partners who can offer customized solutions that align with their specific internal requirements. These collaborations can provide continuous assistance as the intelligence and capabilities of the AI systems evolve. By fostering a collaborative environment, companies can ensure that the integration of AI into their internal control framework is seamless and practical.

Key Considerations for AI-Powered Internal Controls

There are a few key considerations for organizations to ensure the ethical deployment of AI-powered internal controls:

  1. Transparency and Explainability: The AI system’s decision-making process should be as transparent and explainable as possible. Organizations should be able to explain how the system arrives at its decisions and recommendations and provide clear documentation on the data, algorithms, and assumptions used.
  2. Fairness and Non-Discrimination: The AI system should be carefully audited to ensure it does not exhibit biases or discriminate against protected groups. Organizations should implement testing and monitoring processes to detect and mitigate unfair or discriminatory outcomes.
  3. Human Oversight and Accountability: Clear human oversight and accountability measures should be implemented. Employees should be able to understand, challenge, and override the AI system’s decisions when appropriate. There should also be defined processes for addressing errors or unintended consequences.
  4. Data Privacy and Security: The data used to train and operate the AI system must be adequately secured and protected to respect employee privacy. Organizations should have robust data governance policies and procedures in place.
  5. Ongoing Monitoring and Adjustment: The ethical performance of the AI system should be continuously monitored, and organizations should be prepared to adjust or refine as issues are identified. This may require establishing an AI ethics review board or similar governance structure.
  6. Alignment with Organizational Values: The deployment of the AI system should be aligned with the organization’s ethical principles and values. There should be a clear understanding of how the system supports the organization’s mission and commitment to employee wellbeing.
  7. Employee Engagement and Education: Employees should be informed about the use of AI-powered internal controls and receive training on how to interact with the system. This can help build trust and ensure the system is used appropriately.
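The testing and monitoring called for in point 2 can be made concrete with a minimal sketch. The toy Python check below compares positive-outcome rates across two groups (a demographic parity test); the group data, tolerance threshold, and escalation step are illustrative assumptions, not a prescribed methodology.

```python
# Hypothetical fairness audit: compare the rate at which an AI-powered
# control flags members of two groups. All names and values are illustrative.

def demographic_parity_gap(outcomes_a, outcomes_b):
    """Absolute difference in positive-outcome rates between two groups."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return abs(rate_a - rate_b)

# Example: flags raised by an AI control for two employee groups (1 = flagged)
group_a = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]   # 30% flagged
group_b = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]   # 60% flagged

gap = demographic_parity_gap(group_a, group_b)
TOLERANCE = 0.2  # illustrative threshold, e.g., set by an AI ethics review board

if gap > TOLERANCE:
    print(f"Disparity of {gap:.0%} exceeds tolerance; escalate for human review")
```

A real audit would use richer fairness metrics and statistical significance tests, but even a simple check like this, run on a schedule, gives the human oversight described in point 3 something concrete to act on.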

By addressing these key areas, organizations can work towards the ethical deployment of AI-powered internal controls and build trust with their employees. Collaboration with ethicists, legal experts, and other stakeholders can help refine best practices in this rapidly evolving landscape. However, this remains an evolving and complex area that requires ongoing vigilance and adaptation.

Ethical AI Deployment

There are some examples of organizations that have successfully navigated the challenges of ethical AI deployment.

Microsoft has faced the challenge of ensuring fairness and mitigating bias in AI systems. To meet it, the company developed a comprehensive Responsible AI Standard outlining principles and practices for ethical AI development.

IBM was challenged to achieve transparency and explainability in AI-powered decision-making. To meet this challenge, IBM has invested in explainable AI (XAI) technologies, such as its AI Explainability 360 toolkit. This enables developers to understand and interpret the inner workings of their AI models.

Google faced privacy and security concerns when using employee data for AI development. Google has established a Responsible AI Principles framework emphasizing data privacy and security, including differential privacy and secure multi-party computation techniques.

Salesforce must ensure alignment between AI-powered tools and the organization’s ethical values. To this end, it developed guidance through its AI Ethics & Humanism Council on the responsible development and use of AI across the company. This includes aligning AI systems with Salesforce’s core values.

Anthem needed to gain employee trust and acceptance in its use of AI-powered internal controls. To do so, it implemented an “AI Ambassadors” program, where select employees are trained to help their colleagues understand and navigate the company’s AI-powered systems, fostering greater acceptance and trust.

These examples demonstrate how leading organizations have proactively addressed the ethical challenges of AI deployment through a combination of technical, policy, and organizational approaches. By prioritizing principles like fairness, transparency, privacy, and alignment with corporate values, these companies have made progress in ensuring the responsible and trustworthy use of AI within their organizations, particularly around AI-powered internal controls.

Both compliance and internal audit professionals must recognize the pivotal role that AI can play in enhancing the effectiveness of internal controls. By proactively exploring the incorporation of AI into their control mechanisms, organizations can gain a significant advantage in managing the complexities of modern enterprises and the ever-increasing data landscape. The deliberate integration of AI into internal controls will be a crucial factor in determining the success and resilience of an organization’s overall governance framework.

Integrating artificial intelligence into internal controls represents an opportunity for organizations to strengthen their control environment and make more informed decisions. Compliance professionals can help AI-powered internal controls become a cornerstone of effective corporate governance by addressing data quality, skill development, ethical considerations, and collaboration. I am excited to see how this technology continues to evolve and reshape the way we approach internal control systems and your compliance program.

Join us tomorrow as we examine the role of compliance in keeping AI decisions fair and unbiased.

Categories
Innovation in Compliance

Innovation in Compliance: Lori Darley on Conscious Leadership

Innovation comes in many forms, and compliance professionals need to not only be ready for it but also embrace it.

In this episode, Tom Fox interviews Lori Darley, a former professional dancer and current leadership coach.

Lori shares her career evolution from dance to founding Conscious Leaders, a coaching firm specializing in leadership development. She discusses the principles of self-awareness, personal responsibility, and the clearing process, which are central to her coaching philosophy.

Lori also emphasizes the importance of intentional leadership in fostering a positive corporate culture and touches on her experience in the compliance arena. Additionally, she talks about her book, ‘Dancing Naked,’ which explores her journey and insights as a conscious leader.

Key Highlights:

  • Lori Darley’s Professional Journey
  • What is Conscious Leaders?
  • The Clearing Process Explained
  • Conscious Leaders Wisdom Circle
  • Impact on Corporate Culture
  • Generational Tensions and Coaching Benefits

Resources:

Lori Darley on LinkedIn

Conscious Leaders

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Daily Compliance News

Daily Compliance News: June 11, 2024 – The Hands Dirty Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee and listen to the Daily Compliance News. All from the Compliance Podcast Network.

Each day, we consider four stories from the business world: compliance, ethics, risk management, leadership, or general interest for the compliance professional.

In today’s edition of Daily Compliance News:

  • Beny Steinmetz profile. (OCCRP)
  • Try being less cynical at work. (WSJ)
  • Moelis director under scrutiny for scuffle. (FT)
  • The court brings racism back into disaster recovery loans. (WaPo)

For more information on the Ethico ROI Calculator and a free White Paper on the ROI of Compliance, click here.

Categories
Compliance Tip of the Day

Compliance Tip of the Day: AI Governance Framework

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements.

Whether you’re a seasoned compliance professional or just starting your journey, our aim is to provide you with bite-sized, actionable tips to help you stay on top of your compliance game.

Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law.

Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

In today’s episode, we begin a weeklong look at some of the ways generative AI is changing compliance and risk management. Today, we consider how to approach a comprehensive AI governance framework.

For more information on the Ethico ROI Calculator and a free White Paper on the ROI of Compliance, click here.

Categories
Blog

AI in Compliance Week: Part 2 – A Comprehensive Governance Approach

We continue our weeklong exploration of issues related to using Generative AI in compliance by examining some AI governance issues. In the rapidly evolving landscape of AI, the importance of robust governance frameworks cannot be overstated. The need for comprehensive governance structures to ensure compliance, ethical alignment, and trustworthiness has become paramount as AI systems become increasingly integrated into compliance. Today, we will consider the critical areas of compliance governance and ethics governance and present a holistic approach to mitigating the risks associated with these issues.

MIA AI Governance: The Problems

Missing compliance governance can have far-reaching consequences, undermining the integrity of an entire AI-driven initiative. Businesses must ensure alignment with enterprise-wide governance, risk, and compliance (GRC) frameworks. This includes aligning with model risk management practices and embedding robust compliance checks throughout the AI model lifecycle. By promoting awareness of how the AI model works at your organization, you can minimize information asymmetries between development teams, users, and target audiences, fostering a culture of transparency and accountability.

The lack of ethical governance can lead to misalignment with an organization’s values, brand identity, or social responsibility commitments. Companies should develop comprehensive AI ethics governance methods, including defining ethical principles, establishing an AI ethics review board, and creating a compliance program that addresses ethical concerns. Adopting frameworks like Ethically Aligned Design can help integrate ethical considerations into the design process, while incorporating AI governance benchmarks beyond traditional measurements to encompass social and moral accountability.

The lack of trustworthy or responsible AI governance can also result in unintentional and significant damage. To address this, compliance professionals should help develop accountable and trustworthy AI governance methods that augment enterprise-wide GRC structures. This can include establishing a committee such as an AI Advancement Council or similar structure in your company to oversee mission priorities and strategic AI advancement planning, collaborating with service line leaders and program offices to align with ethical AI guidelines and practices, and developing compliance programs to guide conformance with ethical AI principles and relevant legislation. Finally, implementing AI-independent verification and validation processes can help identify and manage unintentional outcomes.

The Solution

By addressing the critical areas of compliance governance and ethics governance through a more holistic approach, businesses can create a comprehensive framework that mitigates the risks associated with the absence of these crucial elements. This approach ensures that AI systems comply with relevant regulations and standards and align with your company’s values, ethical principles, and the pursuit of trustworthy and responsible AI. As the AI landscape evolves, this comprehensive governance framework will be essential in navigating the complexities and safeguarding the integrity of AI-driven initiatives.

Here are some key steps compliance professionals and businesses can think through to facilitate AI governance in your company:

  1. Establish a Centralized AI Governance Body:
    • Create an AI Governance Council that oversees your organization’s AI strategy, policies, and practices.
    • Ensure the council includes representatives from various stakeholder groups, such as legal, compliance, ethics, risk management, IT, and other subject matter experts.
    • Empower the council to develop and enforce AI governance frameworks, guidelines, and processes.
  2. Conduct AI Risk Assessments:
    • Identify and assess the risks associated with the organization’s AI initiatives, including compliance, ethical, and other AI-related risks.
    • Prioritize the risks based on their potential impact and likelihood of occurrence.
    • Develop mitigation strategies and action plans to address the identified risks.
  3. Align AI Governance with Enterprise-wide Frameworks:
    • Ensure the AI governance framework is integrated with the organization’s existing GRC and Risk Management processes.
    • Establish clear lines of accountability and responsibility for AI-related activities across the organization.
    • Integrate AI governance into the organization’s broader risk management and compliance programs.
  4. Implement Compliance Governance Processes:
    • Develop and enforce AI-specific compliance controls, policies, and procedures.
    • Embed compliance checks throughout the AI model lifecycle, from development to deployment and monitoring.
    • Provide training and awareness programs to educate employees on AI compliance requirements.
  5. Establish Ethics Governance Mechanisms:
    • Define the organization’s AI ethics principles, values, and code of conduct.
    • Create an AI Ethics Review Board to assess and monitor the ethical implications of AI initiatives.
    • Implement processes for ethical AI design, such as the Ethically Aligned AI Design methodology.
    • Incorporate ethical AI benchmarks and accountability measures into the organization’s performance management and reporting processes.
  6. Implement Reliance-Related Governance:
    • Develop responsible and trustworthy AI governance practices that align with the organization’s enterprise-wide GRC frameworks.
    • Establish an AI Advancement Council to oversee strategic AI planning and alignment with ethical guidelines.
    • Implement AI-independent verification and validation processes to identify and manage unintended outcomes.
    • Provide comprehensive training and awareness programs on AI risk management for employees, contractors, and other stakeholders.
  7. Foster a Culture of AI Governance:
    • Promote a culture of accountability, transparency, and continuous improvement around AI governance.
    • Encourage cross-functional collaboration and communication to address AI-related challenges and opportunities.
    • Review and update the AI governance framework regularly to adapt to evolving regulatory requirements, technological advancements, and organizational needs.

By following these steps, organizations can implement a comprehensive governance framework that addresses compliance, ethics, and reliance-related governance. This framework enables organizations to harness the power of AI while mitigating the associated risks. 

AI Governance Resources

There are several notable resources the compliance professional can tap into around this issue of AI governance practices. The Partnership on AI is a multi-stakeholder coalition of leading technology companies, academic institutions, and nonprofit organizations. It has been at the forefront of developing best practices and guidelines for the responsible development and deployment of AI systems. It has published influential reports and frameworks, such as the Tenets of Responsible AI and the Model Cards for Model Reporting, which have been widely adopted across the industry.

The Algorithmic Justice League (AJL) is a nonprofit organization dedicated to raising awareness about AI’s social implications and advocating for algorithmic justice. It has developed initiatives such as the Algorithmic Bias Bounty Program, encouraging researchers and developers to identify and report biases in AI systems. The AJL has highlighted the importance of addressing algorithmic bias and discrimination in AI.

IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems is a multidisciplinary effort to develop standards, guidelines, and best practices for the ethical design, development, and deployment of autonomous and intelligent systems. It has produced key documents and reports, such as the Ethically Aligned Design framework, which guides the incorporation of ethical considerations into AI development.

The AI Ethics & Governance Roundtable is an initiative led by the University of Cambridge’s Leverhulme Centre for the Future of Intelligence. It brings together industry, academia, and policymaking experts to discuss emerging issues, share best practices, and develop collaborative solutions for AI governance. The roundtable’s insights and recommendations have influenced AI governance frameworks and policies at the organizational and regulatory levels.

These examples demonstrate the power of industry collaboration in advancing AI governance practices. By pooling resources, expertise, and diverse perspectives, these initiatives have developed comprehensive frameworks, guidelines, and standards being adopted across the AI ecosystem. Compliance professionals should avail themselves of these resources to prepare their companies to take the next brave steps at the intersection of compliance, governance, and AI.

Categories
Riskology

Riskology By Infortal™ Episode 26: Election Risk – How Polls Lie

Welcome to Episode 26 of Riskology by Infortal™ – Election Risk: How Polls Lie. 

In this episode, Dr. Ian Oxnevad and Christopher Mason, Esq., illuminate the complexities and pitfalls that bedevil the world of political polling.

Across the globe, the winds of change are blowing. From the US to the UK and beyond, from pro-business shifts to the rise of populism, the world is in a state of flux. 

With over 50% of the world’s population heading to the voting booth, companies and investors are focused even more heavily on election polling.

However, over-reliance on polling presents risks as polls often fail to provide an accurate prediction of election outcomes. Companies should avoid overrelying on polls in shaping their operational and investment strategies.  

Polling inaccuracies are often attributed to various methodological challenges, including the design of survey questions, the selection of survey participants, and the interpretation of data collected from a subset of the population.

Enhanced technology and societal shifts demand new strategies to gauge public opinion accurately. Pollsters are struggling to keep pace in a world that no longer picks up the phone.

In addition, elections aren’t just political; they’re potential game-changers for your industry. Staying informed can mean the difference between missing out and moving ahead. 

Instead of relying on polling alone, it is best practice to employ multifaceted analysis that incorporates polling insights, along with a comprehensive assessment of political, economic, and social trends. 

We hope you join us for this timely conversation on how your business can prepare for the upcoming election season and avoid the pitfalls of overreliance on polls. 

Resources:

Infortal Worldwide

Email

Dr. Ian Oxnevad on LinkedIn

Chris Mason on LinkedIn