
Compliance into the Weeds: Navigating DOJ’s Boeing Dilemma Under DPA Violations

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to more fully explore a subject.

Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds!

In this episode, Tom Fox and Matt Kelly take a deep dive into the complexities surrounding the Department of Justice’s potential decision to criminally prosecute Boeing for violating its Deferred Prosecution Agreement (DPA) arising from the 737 MAX crashes.

They explore the various facets of corporate justice, including retribution, remediation, and societal interests, as well as the challenges in balancing justice for the victims and the broader implications for public safety and corporate culture.

The discussion also covers the FAA’s role, the potential for new operational limits on Boeing, the impact and structure of compliance monitorships, and what compliance officers can learn from this high-stakes scenario.

Key Highlights:

  • DOJ and Boeing: The 737 MAX Dilemma
  • Corporate Justice: Individuals vs. Corporations
  • Balancing Justice and Corporate Interests
  • Deferred Prosecution Agreements: Compliance Challenges
  • Financial Penalties vs. Operational Limits
  • The Potential of Monitorships
  • FAA’s Role and Challenges
  • Compliance Lessons and Future Considerations

Resources:

Matt on Radical Compliance

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn


Compliance Tip of the Day: Strategic Considerations for Implementing AI in Compliance

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements.

Whether you’re a seasoned compliance professional or just starting your journey, our aim is to provide you with bite-sized, actionable tips to help you stay on top of your compliance game.

Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law.

Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

In today’s episode, we look at some of the strategic considerations for implementing AI in your compliance program.

For more information on the Ethico ROI Calculator and a free White Paper on the ROI of Compliance, click here.


Trekking Through Compliance – Episode 15 – Compliance Lessons from Shore Leave

In this episode of Trekking Through Compliance, we consider the episode Shore Leave, which aired on December 29, 1966, with a Star Date of 3025.3.

This is one of the most fun and beloved TOS episodes. It begins with the Enterprise discovering Omicron Delta, which appears to be the ideal spot for the weary Enterprise crew to take shore leave. However, strange things soon start to happen to the landing party. McCoy sees Alice and the White Rabbit; Sulu finds an antique Police Special revolver; Don Juan accosts Yeoman Barrows; Esteban Rodriguez sees a tiger; and Angela sees birds. Kirk cancels shore leave for the rest of the crew but is confronted by the practical joker Finnegan from his Starfleet Academy days on the one hand and his former girlfriend Ruth on the other.

Spock reports from the Enterprise that he has detected a sophisticated power field on the planet that is draining the Enterprise’s energy. Spock beams down to help investigate, just as communications with the ship become impossible. After asking Kirk what he was thinking about before encountering Finnegan, Spock realizes that the apparitions are being created out of the minds of the landing party. The planet’s caretaker then appears with McCoy. The caretaker apologizes for the misunderstandings and offers the services of the amusement park planet to the Enterprise’s weary crew.

Commentary

In this episode of Trekking Through Compliance, host Tom Fox delves into the beloved Star Trek episode ‘Shore Leave.’ The story follows the crew of the Enterprise as they encounter strange phenomena on a seemingly perfect shore leave planet, leading to various bizarre and surreal experiences. Fox extracts valuable compliance lessons from the episode, emphasizing the importance of incorporating fun and games into training for better engagement. He also discusses leadership principles such as leading by example, fostering integrity, clear communication, distributed leadership, and adaptability. The episode is a blend of adventure, whimsical elements, and practical insights for compliance professionals aiming to cultivate a culture of trust and ethical behavior in their organizations.

Key Highlights

  • Strange Happenings on the Planet
  • Kirk’s Encounters and Investigations
  • The Planet’s Secrets Revealed
  • Fun Facts and Behind the Scenes
  • Compliance Lessons from Shore Leave

Resources

Excruciatingly Detailed Plot Summary by Eric W. Weisstein

MissionLogPodcast.com

Memory Alpha


Trekking Through Compliance – Episode 14 – Compliance Lessons from Balance of Terror

In this episode of Trekking Through Compliance, we consider the episode Balance of Terror, which aired on December 15, 1966, with a Star Date of 1709.1.

The Enterprise investigates the lack of response from Earth Outposts 2 and 3, which monitor the Neutral Zone between the planets Romulus and Remus and the rest of the galaxy. The Earth outposts were constructed on asteroids and were authorized by a treaty following the atomic war with the Romulans more than a century earlier. No human or Romulan, however, has ever seen the other.

As the Enterprise communicates with Outpost 4, Commander Hansen reports an attack underway by an unknown weapon fired from a spaceship that subsequently vanishes. The Romulan commander, questioning his mission of starting a war, discusses it with his Centurion. The Enterprise and the Romulan ship exchange fire, and the Enterprise then sits motionless, hoping the Romulan ship will make a move and reveal itself. When it does, the Romulan ship is rendered inoperative, and its commander destroys his own vessel.

Commentary

In this episode of Trekking Through Compliance, host Tom Fox explores the first appearance of the Romulans in the original Star Trek series episode ‘Balance of Terror.’ The Enterprise investigates attacks on Earth outposts near the Romulan Neutral Zone, uncovering themes of trust, loyalty, and the ethical dilemmas compliance officers face. The episode’s tension, akin to a World War II submarine movie, highlights the importance of principled decision-making, transparency, and balancing security and civil liberties. Key compliance lessons include the necessity for robust risk assessment, clear communication, and an understanding of diverse organizational cultures.

Key Highlights

  • The Enterprise’s Mission and Encounter
  • The Cat and Mouse Game
  • The Final Confrontation
  • Compliance Takeaways from Balance of Terror

Resources

Excruciatingly Detailed Plot Summary by Eric W. Weisstein

MissionLogPodcast.com

Memory Alpha


AI in Compliance Week: Part 4 – Keeping Your AI-Powered Decisions Fair and Unbiased

As artificial intelligence (AI) becomes increasingly integrated into business operations and decision-making, ensuring that these AI systems are fair and unbiased is paramount. This is especially critical for companies operating in highly regulated industries, where prejudice and discrimination can lead to significant legal, financial, and reputational consequences. Implementing AI responsibly requires a multifaceted approach that goes beyond simply training the models on large datasets. Companies must proactively address the potential for bias at every stage of the AI lifecycle, from data collection and model development to deployment and ongoing monitoring.

Based upon what the Department of Justice said in the 2020 Evaluation of Corporate Compliance Programs, a corporate compliance function is the keeper of both Institutional Justice and Institutional Fairness in every organization. That puts compliance at the forefront of ensuring your organization’s AI-based decisions are fair and unbiased. What strategies can a Chief Compliance Officer (CCO) or compliance professional employ to help make sure your AI-powered decisions remain fair and unbiased?

The adage GIGO (garbage in, garbage out) applies equally to the data used to train AI models. If the underlying data contains inherent biases or lacks representation of particular demographic groups, the resulting models will inevitably reflect those biases. Make a concerted effort to collect training data that is diverse, representative, and inclusive. Audit your datasets for potential skews or imbalances and supplement them with additional data sources to address gaps. Regularly review your data collection and curation processes to identify and mitigate biases.
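
To make that data audit concrete, here is a minimal sketch of what a representation check might look like in Python. The dataset, column names, and reference proportions are hypothetical and purely illustrative; the point is simply to compare the composition of your training data against a chosen reference population and flag material gaps for review.

```python
import pandas as pd

# Hypothetical training data with a protected attribute; the column names
# and values are illustrative only, not drawn from any specific dataset.
df = pd.DataFrame({
    "gender": ["F", "M", "M", "M", "F", "M", "M", "M"],
    "hired":  [1,    1,   0,   1,   0,   1,   0,   1],
})

# Share of each group in the training data vs. a reference population
# (e.g., census figures or the applicant pool); flag material gaps.
reference = {"F": 0.50, "M": 0.50}
observed = df["gender"].value_counts(normalize=True)

for group, expected in reference.items():
    actual = observed.get(group, 0.0)
    gap = actual - expected
    flag = "REVIEW" if abs(gap) > 0.10 else "ok"
    print(f"{group}: observed {actual:.2f}, expected {expected:.2f} ({flag})")
```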

The composition of your AI development teams can also significantly impact the fairness and inclusiveness of the resulting systems. Bring together individuals with diverse backgrounds, experiences, and perspectives to participate in every stage of the AI lifecycle. A multidisciplinary team including domain experts, data scientists, ethicists, and end-users can help surface blind spots, challenge assumptions, and introduce alternative viewpoints. This diversity helps ensure your AI systems are designed with inclusivity and fairness in mind from the outset.

Comprehensive testing for bias is essential to identify and address issues before your AI systems are deployed. Incorporate bias-testing procedures into your model development lifecycle and then make iterative adjustments to address any problems identified. There are a variety of techniques and metrics a compliance professional can use to evaluate models for potential bias (a short illustrative sketch follows the list below):

  • Demographic Parity: Measure the differences in outcomes between demographic groups to ensure equal treatment.
  • Equal Opportunity: Assess the true positive rates across groups to verify that the model’s ability to identify positive outcomes is balanced.
  • Disparate Impact: Calculate the ratio of selection rates for different groups to detect potential discrimination.
  • Calibration: Evaluate whether the model’s predicted probabilities align with actual outcomes consistently across groups.
  • Counterfactual Fairness: Assess whether the model’s decisions would change if an individual’s protected attributes were altered.
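
As referenced above, here is a minimal sketch, in Python, of how the first three of these metrics might be computed from a model’s decisions. The labels, predictions, and group assignments are hypothetical stand-ins; in practice you would pull these from your validation data or production decision logs.

```python
import numpy as np

# Hypothetical model outputs: 1 = favorable decision, 0 = unfavorable.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def selection_rate(pred, mask):
    # Share of favorable decisions within a group.
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    # Share of actual positives the model correctly identifies within a group.
    positives = mask & (true == 1)
    return pred[positives].mean() if positives.any() else float("nan")

rate_a = selection_rate(y_pred, group == "A")
rate_b = selection_rate(y_pred, group == "B")

# Demographic parity: difference in favorable-outcome rates between groups.
demographic_parity_gap = rate_a - rate_b
# Disparate impact: ratio of selection rates (the "four-fifths rule" flags < 0.8).
disparate_impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
# Equal opportunity: gap in true positive rates between groups.
equal_opportunity_gap = (true_positive_rate(y_true, y_pred, group == "A")
                         - true_positive_rate(y_true, y_pred, group == "B"))

print(f"Demographic parity gap: {demographic_parity_gap:+.2f}")
print(f"Disparate impact ratio: {disparate_impact_ratio:.2f}")
print(f"Equal opportunity gap:  {equal_opportunity_gap:+.2f}")
```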

As AI systems become more complex and opaque, transparency and explainability become increasingly important, especially in regulated industries. (Matt Kelly and I discussed this topic on this week’s Compliance into the Weeds.) Work to implement explainable AI techniques that provide interpretable insights into how your models arrive at their decisions. By making the decision-making process more visible and understandable, explainable AI can help you identify potential sources of bias, validate the fairness of your models, and ensure compliance with regulatory requirements around algorithmic accountability.
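
One common, model-agnostic starting point for explainability (offered here as an illustration rather than a prescription) is permutation importance, which measures how much a model’s accuracy degrades when each input feature is shuffled. The sketch below uses scikit-learn on synthetic data; the model and the feature names are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a real decision model (e.g., a third-party risk score);
# the feature names are illustrative only.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["country_risk", "payment_history", "ownership_flags", "order_size"]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Permutation importance: how much does shuffling each feature degrade accuracy?
# Large drops indicate the features the model actually relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda item: item[1], reverse=True):
    print(f"{name}: {importance:.3f}")
```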

As Jonathan Marks continually reminds us, corporations rise and fall on their governance models and how they operate in practice. Compliance professionals must cultivate a strong culture of AI governance within the organization, with clear policies, procedures, and oversight mechanisms in place. This should include:

  • Executive-level Oversight: Ensure senior leadership is actively involved in setting your AI initiatives’ strategic direction and ethical priorities.
  • Cross-functional Governance Teams: Assemble diverse stakeholders, including domain experts, legal/compliance professionals, and community representatives, to provide guidance and decision-making on AI-related matters.
  • Auditing and Monitoring: Implement regular, independent audits of your AI systems to assess their ongoing performance, fairness, and compliance. Continuously monitor for any emerging issues or drift from your established standards.
  • Accountability Measures: Clearly define roles, responsibilities, and escalation procedures to address problems or concerns and empower teams to take corrective action.

By embedding these governance practices into your organizational DNA, you can foster a sense of shared responsibility and proactively manage the risks associated with AI-powered decision-making. As with all other areas of compliance, maintaining transparency and actively engaging with key stakeholders is essential for building trust and ensuring your AI initiatives align with societal values, your organization’s culture, and overall stakeholder expectations. A CCO and compliance function can do so in a variety of ways:

  • Regulatory Bodies: Stay abreast of evolving regulations and industry guidelines and collaborate with policymakers to help shape the frameworks governing the responsible use of AI.
  • Stakeholder Representatives: Seek input from diverse community groups, civil rights organizations, and other stakeholders to understand their concerns and incorporate their perspectives into your AI development and deployment processes.
  • End-users: Carsten Tams continually reminds us that it is all about the UX. A compliance professional in and around AI should engage with the employees and other groups directly impacted by your AI-powered decisions and incorporate their feedback to improve your systems’ fairness and user experience.

By embracing a spirit of transparency and collaboration, CCOs and compliance professionals will help your company navigate the complex ethical landscape of AI and position your organization as a trusted, responsible leader in your industry. Similar to the management of third parties, ensuring fairness and lack of bias in your AI-powered decisions is an ongoing process, not a one-time event. Your company should dedicate resources to continuously monitor the performance of your AI systems, identify any emerging issues or drift from your established standards, and make timely adjustments as needed. You must regularly review your fairness metrics, solicit feedback from stakeholders, and be prepared to retrain or fine-tune your models to maintain high levels of ethical and unbiased decision-making. Finally, fostering a culture of continuous improvement will help you stay ahead of the curve and demonstrate your commitment to responsible AI.
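
As a purely illustrative sketch of that ongoing monitoring, the snippet below tracks a single fairness metric (here, a disparate impact ratio) over monthly snapshots and raises a flag when it falls below a fixed threshold or drifts materially from its recent average. The values, threshold, and window are hypothetical; the right metrics and tolerances will depend on your models and regulatory context.

```python
import numpy as np

# Hypothetical monthly snapshots of the disparate impact ratio computed from
# production decisions (values are illustrative only).
monthly_di_ratio = {
    "2024-01": 0.92, "2024-02": 0.90, "2024-03": 0.88,
    "2024-04": 0.84, "2024-05": 0.79, "2024-06": 0.74,
}

THRESHOLD = 0.80       # four-fifths rule used as a simple tripwire
DRIFT_WINDOW = 3       # compare the latest value against a trailing average

values = list(monthly_di_ratio.values())
latest = values[-1]
baseline = np.mean(values[-(DRIFT_WINDOW + 1):-1])

if latest < THRESHOLD:
    print(f"ALERT: disparate impact ratio {latest:.2f} is below {THRESHOLD:.2f}")
if latest < baseline - 0.05:
    print(f"DRIFT: latest ratio {latest:.2f} fell more than 0.05 below "
          f"the trailing average {baseline:.2f}; schedule a model review")
```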

As AI is increasingly embedded in business operations, the stakes for ensuring fairness and mitigating bias have never been higher. By adopting a comprehensive, multifaceted approach to AI governance, your organization can harness this transformative technology’s power while upholding ethical and unbiased decision-making principles. The path to responsible AI may be complex, but the benefits – trust, compliance, and long-term sustainability – are worth the effort.


Innovation in Compliance: Lori Darley on Conscious Leadership

Innovation comes in many forms, and compliance professionals need to not only be ready for it but also embrace it.

In this episode, Tom Fox interviews Lori Darley, a former professional dancer and current leadership coach.

Lori shares her career evolution from dance to founding Conscious Leaders, a coaching firm specializing in leadership development. She discusses the principles of self-awareness, personal responsibility, and the clearing process, which are central to her coaching philosophy.

Lori also emphasizes the importance of intentional leadership in fostering a positive corporate culture and touches on her experience in the compliance arena. Additionally, she talks about her book, ‘Dancing Naked,’ which explores her journey and insights as a conscious leader.

Key Highlights:

  • Lori Darley’s Professional Journey
  • What is Conscious Leaders?
  • The Clearing Process Explained
  • Conscious Leaders Wisdom Circle
  • Impact on Corporate Culture
  • Generational Tensions and Coaching Benefits

Resources:

Lori Darley on LinkedIn

Conscious Leaders

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn


AI in Compliance Week: Part 2 – A Comprehensive Governance Approach

We continue our weeklong exploration of using Generative AI in compliance by examining AI governance. In the rapidly evolving landscape of AI, the importance of robust governance frameworks cannot be overstated. As AI systems become increasingly integrated into compliance, the need for comprehensive governance structures to ensure compliance, ethical alignment, and trustworthiness has become paramount. Today, we will consider the critical areas of compliance governance and ethics governance and present a holistic approach to mitigating the associated risks.

MIA AI Governance: The Problems

Missing compliance governance can have far-reaching consequences, undermining the integrity of an entire AI-driven initiative. Businesses must ensure alignment with enterprise-wide governance, risk, and compliance (GRC) frameworks. This includes aligning with model risk management practices and embedding robust compliance checks throughout the AI model lifecycle. By promoting awareness of how the AI model works at your organization, you can minimize information asymmetries between development teams, users, and target audiences, fostering a culture of transparency and accountability.

The lack of ethical governance can lead to misalignment with an organization’s values, brand identity, or social responsibility. To address this, companies should develop comprehensive AI ethics governance methods, including defining ethical principles, establishing an AI ethics review board, and creating a compliance program that addresses ethical concerns. Adopting frameworks like Ethically Aligned AI Design (EAAID) can help integrate ethical considerations into the design process, while AI governance benchmarks that go beyond traditional measurements can encompass social and moral accountability.

The lack of trustworthy or responsible AI governance can also result in unintentional and significant damage. To address this, compliance professionals should help develop accountable and trustworthy AI governance methods that augment enterprise-wide GRC structures. This can include establishing a committee such as an AI Advancement Council or similar structure in your company to oversee mission priorities and strategic AI advancement planning, collaborating with service line leaders and program offices to align with ethical AI guidelines and practices, and developing compliance programs to guide conformance with ethical AI principles and relevant legislation. Finally, implementing independent verification and validation processes for AI can help identify and manage unintended outcomes.

The Solution

By addressing the critical areas of compliance governance and ethics governance through a more holistic approach, businesses can create a comprehensive framework that mitigates the risks associated with the absence of these crucial elements. This approach ensures that AI systems comply with relevant regulations and standards and align with your company’s values, ethical principles, and the pursuit of trustworthy and responsible AI. As the AI landscape evolves, this comprehensive governance framework will be essential in navigating the complexities and safeguarding the integrity of AI-driven initiatives.

Here are some key steps compliance professionals and businesses can think through to facilitate AI governance in your company:

  1. Establish a Centralized AI Governance Body:
    • Create an AI Governance Council that oversees your organization’s AI strategy, policies, and practices.
    • Ensure the council includes representatives from various stakeholder groups, such as legal, compliance, ethics, risk management, IT, and other subject matter experts.
    • Empower the council to develop and enforce AI governance frameworks, guidelines, and processes.
  2. Conduct AI Risk Assessments:
    • Identify and assess the risks associated with the organization’s AI initiatives, including compliance, ethical, and other related risks.
    • Prioritize the risks based on their potential impact and likelihood of occurrence.
    • Develop mitigation strategies and action plans to address the identified risks.
  3. Align AI Governance with Enterprise-wide Frameworks:
    • Ensure the AI governance framework is integrated with the organization’s existing GRC and Risk Management processes.
    • Establish clear lines of accountability and responsibility for AI-related activities across the organization.
    • Integrate AI governance into the organization’s broader risk management and compliance programs.
  4. Implement Compliance Governance Processes:
    • Develop and enforce AI-specific compliance controls, policies, and procedures.
    • Embed compliance checks throughout the AI model lifecycle, from development to deployment and monitoring.
    • Provide training and awareness programs to educate employees on AI compliance requirements.
  5. Establish Ethics Governance Mechanisms:
    • Define the organization’s AI ethics principles, values, and code of conduct.
    • Create an AI Ethics Review Board to assess and monitor the ethical implications of AI initiatives.
    • Implement processes for ethical AI design, such as the Ethically Aligned AI Design methodology.
    • Incorporate ethical AI benchmarks and accountability measures into the organization’s performance management and reporting processes.
  6. Implement Reliance-Related Governance:
    • Develop responsible and trustworthy AI governance practices that align with the organization’s enterprise-wide GRC frameworks.
    • Establish an AI Advancement Council to oversee strategic AI planning and alignment with ethical guidelines.
    • Implement independent verification and validation processes for AI to identify and manage unintended outcomes.
    • Provide comprehensive training and awareness programs on AI risk management for employees, contractors, and other stakeholders.
  7. Foster a Culture of AI Governance:
    • Promote a culture of accountability, transparency, and continuous improvement around AI governance.
    • Encourage cross-functional collaboration and communication to address AI-related challenges and opportunities.
    • Review and update the AI governance framework regularly to adapt to evolving regulatory requirements, technological advancements, and organizational needs.

By following these steps, organizations can implement a comprehensive governance framework that addresses compliance, ethics, and reliance-related governance. This framework enables organizations to harness the power of AI while mitigating the associated risks. 

AI Governance Resources

There are several notable resources the compliance professional can tap into around AI governance practices. The Partnership on AI is a multi-stakeholder coalition of leading technology companies, academic institutions, and nonprofit organizations. It has been at the forefront of developing best practices and guidelines for the responsible development and deployment of AI systems. It has published influential reports and frameworks, such as the Tenets of Responsible AI and the Model Cards for Model Reporting, which have been widely adopted across the industry.

The Algorithmic Justice League (AJL) is a nonprofit organization dedicated to raising awareness about AI’s social implications and advocating for algorithmic justice. It has developed initiatives such as the Algorithmic Bias Bounty Program, encouraging researchers and developers to identify and report biases in AI systems. The AJL has highlighted the importance of addressing algorithmic bias and discrimination in AI.

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems is a multidisciplinary effort to develop standards, guidelines, and best practices for the ethical design, development, and deployment of autonomous and intelligent systems. It has produced key documents and reports, such as the Ethically Aligned Design framework, which guides the incorporation of ethical considerations into AI development.

The AI Ethics & Governance Roundtable is an initiative led by the University of Cambridge’s Leverhulme Centre for the Future of Intelligence. It brings together industry, academia, and policymaking experts to discuss emerging issues, share best practices, and develop collaborative solutions for AI governance. The roundtable’s insights and recommendations have influenced AI governance frameworks and policies at the organizational and regulatory levels.

These examples demonstrate the power of industry collaboration in advancing AI governance practices. By pooling resources, expertise, and diverse perspectives, these initiatives have developed comprehensive frameworks, guidelines, and standards being adopted across the AI ecosystem. Compliance professionals should avail themselves of these resources to prepare your company to take the next brave steps in the intersection of compliance, governance, and AI.


The Intersection of Creativity and Compliance: Lessons from Improv

In the most recent episode of the Creativity and Compliance podcast, Tom Fox and Ronnie Feldman delved into the fascinating intersection of improvisation and compliance with our special guest, Marla Caceres, an expert in applied improvisation. We explored how the skills and philosophies of improv can significantly enhance communication and leadership within the ethics and compliance community.

Marla introduced improvisation as the theatrical art of making it up on the spot. While it may seem spontaneous, successful improvisation relies heavily on technique, training, and practice. Like a basketball team practices fundamentals to be ready for any game, improvisers hone their skills to perform seamlessly as a team. This ensemble-based approach fosters a collaborative environment where each member supports the other, creating a space where innovation and quick thinking thrive.

Improvisation is not confined to the theater; its principles apply to various business practices, particularly in ethics and compliance. Marla explains that many students are drawn to improv not to pursue comedy but to improve their communication and leadership skills. Improv teaches others-focused communication, essential for building effective teams and fostering a positive organizational culture.

Communication that is others-focused is at the heart of improvisation. This concept involves shifting your focus from your own agenda to genuinely listening and responding to others. In an improv scene, success depends on fully accepting and building on your partner’s input. This level of active listening and validation creates a supportive environment where creativity and collaboration flourish. Marla highlighted that this approach can transform everyday interactions, making them more productive and meaningful. It also plays directly into the skills needed by a compliance professional.

Psychological safety is paramount for ethics and compliance professionals. Psychological safety refers to an environment where individuals feel safe speaking up without fear of retribution. Improv provides a low-stakes, fun way to practice the skills necessary to foster this environment. By focusing on deep listening and the “Yes” principle, compliance professionals can build trust and encourage open communication.

The “Yes, and” principle is fundamental in improv. It involves accepting your partner’s idea (Yes) and building on it (and). This technique fosters creativity and promotes a nonjudgmental and inclusive atmosphere. For compliance professionals, applying “Yes, and” can shift their perception of their role from rule enforcers to supportive advisors. This change in approach can make employees more willing to engage with compliance, seeing it as a collaborative effort rather than a hindrance.

Marla and Ronnie discussed several practical techniques derived from improv that can benefit compliance professionals. One such exercise is the “Should vs. Could” activity. Participants pair up and share a problem, with one offering advice using “You should” statements and then “You could” statements. The difference in reception is profound, with “You could” fostering a more collaborative and empowering dialogue. This simple shift in language can significantly impact how compliance professionals communicate, making their advice feel more supportive and less authoritative.

Improvisation also teaches the importance of building trust and reducing fear in communication. By practicing techniques emphasizing validation and support, compliance professionals can create an environment where employees feel safe to raise concerns and seek guidance. This trust is crucial for effective compliance, as it encourages proactive problem-solving and early reporting of potential issues.

The principles of improv can be applied in various settings within the compliance field. For instance, compliance training sessions can incorporate improv exercises to make learning more engaging and memorable. Additionally, compliance professionals can use these techniques in their day-to-day interactions to build stronger relationships with employees and leadership.

Marla emphasized that organizational culture and communication nuances trickle down from the top. Leaders play a critical role in modeling the behavior and communication styles they want to see throughout the organization. By incorporating improv techniques, leaders can demonstrate openness, active listening, and collaborative problem-solving, setting a positive example for their teams.

Improvisation offers a unique and practical approach to enhancing communication and leadership within the ethics and compliance community. By practicing others-focused communication, fostering psychological safety, and embracing the “Yes, and” principle, compliance professionals can transform their interactions and build a more supportive and proactive organizational culture. If you want to explore how improv can benefit your compliance efforts, consider incorporating these techniques into your training and daily practices. As Marla and Ronnie have shown, a little creativity can go a long way in making compliance a collaborative and engaging endeavor.


Great Women in Compliance: Beth Colling – Common Sense and Compliance

Welcome to the Great Women in Compliance podcast on the Compliance Podcast Network, sponsored by Corporate Compliance Insights.

In this episode, Lisa speaks with Beth Colling, Senior Vice President and Chief Compliance Officer at CDM Smith. Beth joined organizations after they had to address a significant regulatory change or investigation, and she worked to operationalize and then maintain a compliance program. Lisa and Beth specifically talk about how, as issues inevitably arise, compliance officers will get the resources they need to make and implement changes, but over time, memories fade, and the attention and resources may diminish. Beth provides her insight on this.

Beth uniquely evaluates her work and program by “firing herself” on Friday and re-hiring herself on Monday to examine it with new eyes. After the past several years, with the pandemic and hybrid work, this review became even more relevant. This leads to a discussion of “common sense,” not just within a compliance program but also in terms of personal responsibility and how employees rationalize bad behavior.

One of Beth’s (and Lisa’s) childhood heroes was “Wonder Woman,” and Beth may be Wonder Woman. Outside her work, she coaches young adults to enjoy running, and by the end of 2024, she will have completed 5 of the 6 “World Marathon Majors.”

You can join the LinkedIn podcast community.
Join the Great Women in Compliance podcast community here.


Compliance into the Weeds: Analyzing The Trump Conviction: Compliance Lessons from an Unprecedented Case

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to more fully explore a subject.

Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds!

In this episode of ‘Compliance Into the Weeds’, Tom and Matt take a deep dive into last week’s trial verdict against Donald Trump in NYC and lessons for the compliance professional.

They explore the importance of internal controls, consistent consequence management, and effective leadership. They also delve into how compliance officers can learn from the storytelling strategies used in the trial and emphasize the application of the rule of law.

Key Highlights:

  • Overview of Trump’s Criminal Conviction
  • Internal Controls and Compliance Lessons
  • Consequences Management and Consistent Enforcement
  • Ethical Leadership and Communication
  • Who is your audience? Storytelling in Compliance
  • Final Thoughts and Rule of Law

Resources:

Matt on Radical Compliance

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn