Innovation occurs across many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom visits with George Tziahanas, VP of Compliance and Associate General Counsel at Archive360.
Tom interviews George Tziahanas on why organizations must move beyond simply storing data to demonstrating data integrity, lineage, and accountability as the foundation for AI readiness. George defines "data defensibility" as the ability to defend how AI systems were trained and how they operate, particularly when AI decisions are not as easily explainable as rules-based automation, emphasizing upstream data provenance, monitoring, and audit trails. They discuss the growing regulator and stakeholder focus on authority and accountability, and how litigation shapes compliance practice, citing the early e-discovery practices that grew out of the Zubulake v. UBS Warburg decision and the enforcement era of former New York AG Eliot Spitzer. George points to the Mercor breach to illustrate supply-chain and confidentiality risks in AI training data, noting that regulators and plaintiffs may rely on existing laws rather than wait for new ones. He highlights the risks posed by weak data governance, dark data, and legacy archives, and recommends building asset and data inventories, migrating data off insecure legacy systems, risk-tiering AI use cases, extending ISO/NIST frameworks to cover AI, and building observability so organizations can adopt AI faster and more responsibly.
Key highlights:
- What Data Defensibility Means
- Litigation Shapes Compliance
- Weak Data Governance Risks
- Managing Legacy Archive Data
- Governance Accelerates AI
- Dark Data Explained
- What Success Looks Like
Resources:
George Tziahanas on LinkedIn
Articles by George Tziahanas
Beyond Retention: Why AI Governance in 2026 is a Defensibility Problem