One lesson of the digital age is that staying ahead requires being informed and proactive about how new technologies affect corporate governance, ethics, and operational compliance. In this context, generative AI (Gen AI) is no longer a futuristic concept; it is embedded deeply in our everyday activities. Marc Zao-Sanders’ article in Harvard Business Review (HBR), “How People Are Really Using Gen AI in 2025,” presents an excellent opportunity to reflect on how these developments impact compliance, governance, and risk management.
Zao-Sanders highlights a critical shift in how generative AI is utilized: from purely technical assistance towards significantly more personal and emotive applications. With “Therapy/Companionship,” “Organizing my life,” and “Finding purpose” emerging as the top three use cases, it’s clear that users seek emotional and organizational support, demonstrating Gen AI’s versatility beyond traditional technological roles.
Compliance professionals must recognize that as AI increasingly becomes integral to both professional services and personal well-being, the accompanying risk and compliance implications magnify exponentially. The nature of these interactions, often intimate or deeply personal, demands robust data privacy protections and stringent ethical governance frameworks. Businesses integrating these technologies need precise, transparent policies and effective oversight mechanisms to mitigate new compliance risks.
Implications for Compliance Professionals
Enhanced Data Privacy and Ethical Considerations
Zao-Sanders emphasizes the rising prominence of personal and professional support through Gen AI, especially in areas such as AI-based therapy, emotional companionship, and life organization. As users entrust AI with highly sensitive personal data, compliance professionals face increased responsibilities regarding data privacy, security, and the ethical use of data. This scenario elevates the stakes considerably. He notes, “data safety is not a concern when your health is deteriorating,” highlighting users’ willingness to sacrifice privacy for crucial emotional or medical support. Such conditions can quickly lead to ethical and compliance vulnerabilities if businesses fail to manage and protect sensitive user data rigorously.
Organizations must reinforce their compliance strategies to manage the ethical risks inherent in AI-human interactions. As Zao-Sanders indicates, professional services, including medical, legal, and financial advisory work, increasingly rely on generative AI, pushing regulatory boundaries. Notably, EY’s deployment of 150 AI agents specifically for tax-related tasks highlights the profound impact of generative AI on professional services, adding layers of complexity to compliance strategies.
Regulatory Response and Enforcement Trends
The article briefly touches on the growing regulatory scrutiny that Gen AI is attracting globally, noting explicitly that governments are “taking more emphatic and explicit positions” due to heightened stakes surrounding AI technology. For compliance professionals, this should serve as a clarion call: regulatory oversight is intensifying. Preparing for audits, demonstrating compliance, and actively engaging with regulatory developments will be essential. The rapid pace of AI adoption necessitates an agile and proactive approach to compliance management that anticipates, rather than merely reacts to, regulatory shifts.
Balancing AI Dependence with Human Oversight
A striking tension highlighted in the article is the debate over the impact of generative AI on human cognitive abilities, decision-making, and ethical judgment. Users express genuine concern about becoming overly reliant on AI, which could erode their ability to think critically and make independent, ethical decisions.
This reliance poses significant implications for compliance officers charged with safeguarding ethical decision-making. Effective compliance programs must emphasize human oversight, cultivating a culture where AI supports rather than supplants human judgment. Investing in AI literacy among employees can mitigate potential over-reliance, fostering an environment where staff understand both the capabilities and limitations of AI.
Compliance in AI-Driven Professional Services
Zao-Sanders illustrates how AI integration into professional tasks is increasingly sophisticated. For instance, the transformation underway at EY, training employees extensively in generative AI, reflects broader industry trends. Compliance officers must respond to these developments by establishing clear standards and compliance checkpoints. It is crucial to determine whether AI outputs meet professional standards, remain unbiased, and do not inadvertently violate regulatory obligations.
Given AI’s pervasive integration into professional judgments (such as tax preparation, legal advice, and medical diagnosis), the accuracy and regulatory compliance of AI-driven outputs become paramount. Compliance programs must integrate AI auditability, accountability, and transparency deeply into corporate governance frameworks.
Practical Compliance Steps in the Gen AI Era
1. Proactive Policy Development and Training
Develop clear policies that outline the acceptable use of generative AI, including specific guidelines on data handling, ethical considerations, and regulatory obligations. Embed these policies into your organization’s culture through rigorous training and communication strategies.
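One way to make such a policy enforceable rather than aspirational is to encode it as data that tools and training materials can check against. The sketch below is a hypothetical illustration, assuming illustrative use-case categories and prohibited data types of my own invention, not drawn from any specific regulation or framework.

```python
from dataclasses import dataclass, field

@dataclass
class GenAIUsePolicy:
    # Illustrative, assumed categories; a real policy would map these
    # to the organization's own data classification scheme.
    allowed_use_cases: set = field(default_factory=lambda: {
        "drafting", "summarization", "research"})
    prohibited_data: set = field(default_factory=lambda: {
        "client_pii", "health_records", "trade_secrets"})

    def review(self, use_case: str, data_types: set) -> list:
        """Return a list of policy violations for a proposed Gen AI use."""
        violations = []
        if use_case not in self.allowed_use_cases:
            violations.append(f"use case '{use_case}' is not pre-approved")
        for d in sorted(data_types & self.prohibited_data):
            violations.append(
                f"data type '{d}' may not be sent to Gen AI tools")
        return violations

policy = GenAIUsePolicy()
print(policy.review("drafting", {"client_pii"}))
```

Treating the policy as code keeps the written guidance, employee training, and any automated checks working from the same single source of truth.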
2. Rigorous Risk Assessment and Ongoing Monitoring
Compliance programs for Gen AI must adopt continuous monitoring. Regular risk assessments and periodic audits of AI systems help detect and rectify issues promptly. Compliance officers should remain actively involved in assessing new AI technologies for ethical, privacy, and regulatory considerations before full-scale implementation.
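In practice, part of that continuous monitoring can be automated. The following is a minimal sketch, assuming prompt logs are available as plain strings and using two simple, assumed PII patterns (email addresses and US SSN-like numbers); a production monitor would use a vetted PII-detection service and cover far more data types.

```python
import re

# Assumed, illustrative PII patterns for demonstration only.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt_log(prompts):
    """Yield (prompt_index, pii_type) for each prompt that appears
    to contain personal data, so it can be routed to compliance review."""
    for i, prompt in enumerate(prompts):
        for pii_type, pattern in PII_PATTERNS.items():
            if pattern.search(prompt):
                yield (i, pii_type)

log = [
    "Summarize Q3 revenue drivers",
    "Draft a reply to jane.doe@example.com about her claim",
]
flags = list(scan_prompt_log(log))
print(flags)  # the second prompt is flagged for an email address
```

A scan like this would run on a schedule, with flagged entries feeding the periodic audits described above rather than blocking users outright.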
3. Transparent Data Practices
Given heightened public sensitivity to data privacy, reflected in Zao-Sanders’ observations about users’ privacy concerns and their cynicism toward Big Tech, companies must prioritize transparent data practices. Clear communication about data usage, consent, and protection measures will foster trust and reduce compliance risk.
4. Ethical AI Governance Frameworks
Design and deploy ethical AI governance frameworks that address algorithmic fairness, transparency, and accountability. Such frameworks ensure generative AI tools are used responsibly and ethically, in line with stakeholder expectations and regulatory standards.
5. Encourage Human-AI Collaboration
Foster a balanced approach between AI-driven solutions and human judgment. Reinforce the importance of human oversight to ensure compliance, accuracy, and ethical decision-making, thus minimizing over-dependence on AI.
Looking Ahead: The Compliance Imperative in the Gen AI Landscape
As we approach a future increasingly defined by AI integration, compliance professionals have a unique opportunity to lead their organizations proactively. Understanding and managing the compliance and ethical dimensions of Gen AI is now critical, not optional. The risks and opportunities outlined in Zao-Sanders’ article underscore the urgent need for a strategic, well-informed approach to integrating generative AI into corporate compliance frameworks.
Compliance professionals should view this moment as an opportunity to demonstrate thought leadership, to guide ethical AI adoption, and to establish robust frameworks that enable businesses to thrive responsibly. By proactively addressing the compliance and moral challenges presented by generative AI, we not only fulfill our professional obligations but also position our organizations as ethical, forward-thinking leaders in the digital age. The compliance journey ahead is demanding, but equally, it offers profound opportunities to influence and shape a responsible, compliant, and ethically robust AI-driven future.