Categories
Innovation in Compliance

Innovation in Compliance: From Banking to AI: Tim Khamzin on Transforming Compliance

Innovation comes in many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox welcomes Tim Khamzin, Founder & CEO of Vivox AI, to discuss building explainable, trusted AI agents for financial crime compliance teams.

Tim describes his background in banking operations automation, including large-scale digital transformation and the development of compliance products, and explains how, since 2023–2024, large language models have enabled the automation of unstructured compliance work without extensive model training. He outlines key challenges in AML/KYC operations: 15% of bank headcount tied to compliance, heavy manual repetitive investigations across multiple systems, and cultural resistance to adopting technology.

Tim emphasizes “explainability” through consistent, repeatable investigations with audit logs and screenshots that mirror human workflows, and “trust” through transparency, compliant vendor choices, and clear communication of limitations. Tim introduces the Vivox compliance analyst, “Rachel,” a platform of collaborating agents that supports onboarding, customer due diligence, and false-positive reduction, improved via structured human feedback (thumbs up/down) to learn firm-specific standards.
He explains how Vivox stays aligned with evolving regulations by engaging with bodies such as the UK FCA and tracking frameworks such as the EU AI Act and Singapore guidance, with a focus on auditability and explainability. Tim predicts most compliance work will shift to AI agents, with humans handling complex cases and a new role of “compliance engineer” emerging to configure and evaluate agents, alongside industry consolidation and operating-system-style vendor platforms.

Key highlights:

  • From Banking Automation to Founding Vivox AI: The Opportunity in LLMs
  • What’s Broken Today: Manual Investigations, Backlogs, and Culture Gaps
  • Explainable + Trusted AI: Audit Trails, Screenshots, and Transparency
  • Regulators’ Top AI Concerns: Black Box, Bias, and 99% Accuracy
  • Inside ‘Rachel’: The AI Compliance Analyst & Human-in-the-Loop Feedback
  • The Future: Compliance Engineers, Agent “Operating Systems,” and Consolidation

Resources:

Tim Khamzin on LinkedIn

Vivox AI

Innovation in Compliance was recently honored as the Number 4 podcast in Risk Management by 1,000,000 Podcasts.

Categories
AI Today in 5

AI Today in 5: February 23, 2026, The Bold But Balanced Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. How AI is transforming compliance in 2026. (FinTechGlobal)
  2. Asian banks are struggling to integrate AI into their compliance systems. (AsianBanking&Finance)
  3. A bold but balanced AI revolution. (CIO)
  4. Safely navigating chatbots and healthcare PII. (News-Medical)
  5. What is shaping AI governance? (ISEAS)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Blog

5 Strategic Board Playbooks for AI Risk (and a Bootcamp)

Artificial intelligence is no longer a future-state technology risk. It is a current-state governance issue. If AI is being deployed inside governance, risk, and compliance functions, then it is already shaping how your company detects misconduct, prioritizes investigations, manages regulatory obligations, and measures program effectiveness. That makes AI risk a board agenda item, not a management footnote.

In an innovation-forward organization, the goal is not to slow AI adoption. The goal is to professionalize it. Boards of Directors and Chief Compliance Officers (CCOs) should approach AI the way they approached cybersecurity a decade ago: move it from “interesting updates” to a structured reporting cadence with measurable controls, clear accountability, and director education that raises the collective literacy of the room.

Today, we consider 5 strategic playbooks designed for a Board of Directors and a CCO operating in an industry-agnostic environment, building AI in-house, without a model registry yet, and with a cross-functional AI governance committee chaired and owned by Compliance. The program must also work across multiple regulatory regimes, including the DOJ Evaluation of Corporate Compliance Programs (ECCP), the EU AI Act, and a growing patchwork of state laws. We end with a proposal for a Board of Directors Boot Camp on their responsibilities to oversee AI in their organization.

Playbook 1: Put AI Risk on the Calendar, Not on the Wish List

If AI risk is always “important,” it becomes perpetually postponed. The first play is procedural: create a standing quarterly agenda item with a consistent structure.

Quarterly board agenda structure (20–30 minutes):

  1. What changed since last quarter? Items such as new use cases, material model changes, new regulations, and major control exceptions.
  2. Full AI Risk Dashboard, with 8–10 board KPIs, trends, and thresholds.
  3. Top risks and mitigations, including three headline risks with actions, owners, and dates.
  4. Assurance and testing, which would include internal audit coverage, red-teaming results, and remediation progress.
  5. Decisions required include policy approvals, risk appetite adjustments, and resourcing.

This cadence does two things. First, it forces repeatability. Second, it creates institutional memory. Boards govern better when they can compare quarter-over-quarter progress, not when they receive one-off deep dives that cannot be benchmarked.

Playbook 2: Build the AI Governance Operating Model Around Compliance Ownership

In this design, Compliance owns AI governance and its use throughout the organization, supported by a cross-functional AI governance committee. That is a strong model, but only if it is explicit about responsibilities.

Three lines of accountability:

  • Compliance (Owner): policy, risk framework, controls, training, and board reporting.
  • AI Governance Committee (Integrator): cross-functional prioritization, approvals, escalation, and issue resolution.
  • Build Teams (Operators): documentation, testing, change control, and implementation evidence.

Boards should ask one simple question each quarter: Who is accountable for AI governance, and how do we know it is working? If the answer is “everyone,” then the real answer is “no one.” Your model makes the answer clear: Compliance owns it, and the committee operationalizes it.

Playbook 3: Create the AI Registry Before You Argue About Controls

You have no model registry yet. That is the first operational gap to close, because you cannot govern what you cannot inventory. In a GRC context, this is not a “nice to have.” Without an inventory, you cannot prove coverage, you cannot scope an audit, you cannot define reporting, and you cannot explain to regulators how you know where AI is influencing decisions.

Minimum viable AI registry fields (start simple):

  • Use case name and business owner;
  • Purpose and decision impact (advisory vs. automated);
  • Data sources and data sensitivity classification;
  • Model type and version, with change log;
  • Key risks (bias, privacy, explainability, security, reliability);
  • Controls mapped to the risk (testing, monitoring, approvals);
  • Deployment status (pilot, production, retired); and
  • Incident history and open issues.

Boards do not need the registry details. They need the coverage metric and the assurance that the registry is complete enough to support governance.
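The registry fields listed above can be sketched as a simple data structure. This is a minimal, illustrative sketch in Python; the class and field names are assumptions, not a standard schema, and a real program would likely live in a GRC platform rather than code:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIRegistryEntry:
    """One row in a minimum viable AI registry (illustrative fields only)."""
    use_case: str
    business_owner: str
    decision_impact: str              # "advisory" or "automated"
    data_sensitivity: str             # e.g. "public", "internal", "restricted"
    model_version: str
    key_risks: List[str] = field(default_factory=list)
    controls: List[str] = field(default_factory=list)
    status: str = "pilot"             # "pilot", "production", or "retired"
    open_issues: int = 0

# Example: a hypothetical registered use case for alert triage
entry = AIRegistryEntry(
    use_case="AML alert triage assistant",
    business_owner="Financial Crime Ops",
    decision_impact="advisory",
    data_sensitivity="restricted",
    model_version="v1.2",
    key_risks=["explainability", "reliability"],
    controls=["pre-deployment testing", "drift monitoring"],
)
```

Even this small a record is enough to support the coverage metric the board actually sees: count the entries, compare against the estimated footprint, and report the gap.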

Playbook 4: Align to the ECCP, EU AI Act, and State Laws Without Creating a Paper Program

Many organizations make a predictable mistake: they respond to multiple frameworks by producing multiple binders. That creates activity, not effectiveness. A better approach is to use a single control architecture to map to multiple requirements. The board should see one integrated story:

  • DOJ ECCP lens: effectiveness, testing, continuous improvement, accountability, and resourcing;
  • EU AI Act lens: risk classification, transparency, human oversight, quality management, and post-market monitoring; and
  • State law lens: privacy, consumer protection concepts, discrimination prohibitions, and notice requirements where applicable.

This mapping becomes powerful when it ties back to the board dashboard. The board is not there to read statutes. The board is there to govern outcomes.

Playbook 5: Use a Board Dashboard That Measures Coverage, Control Health, and Outcomes

A board dashboard should pair 8–10 KPIs with a short narrative. Here is a board-level set designed for AI in governance, risk, and compliance functions, with an in-house build, internal audit, and red teaming for assurance.

Board AI Governance KPIs (8–10)

1. AI Inventory Coverage Rate

Percentage of AI use cases captured in the registry versus estimated footprint.

2. Risk Classification Completion Rate

Percentage of registered use cases risk-classified (EU AI Act style tiers or internal tiers).

3. Pre-Deployment Review Pass Rate

Percentage of deployments that cleared required testing and approvals on first submission.

4. Model Change Control Compliance

Percentage of model changes executed with documented approvals, testing evidence, and rollback plans.

5. Explainability and Documentation Score

Percentage of in-scope use cases with complete documentation, rationale, and user guidance.

6. Monitoring Coverage

Percentage of production use cases with active monitoring for drift, anomalies, and performance degradation.

7. Issue Closure Velocity

Median days to close AI governance issues, by severity.

8. Internal Audit Coverage and Findings Trend

Number of audits completed, rating distribution, repeat findings, and remediation status.

9. Red Team Findings and Remediation Rate

Number of material vulnerabilities identified and percentage remediated within the target time.

10. Escalations and Incident Rate

Number of AI-related incidents or escalations (including near-misses), with severity and lessons learned.

These KPIs do not require vendor controls and align with an in-house build model. They also support both board oversight and compliance management.
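The ratio-style KPIs above (coverage, pass rates) reduce to simple percentages once the underlying counts are available. A minimal sketch, assuming the registry and review logs expose these counts; the function names are illustrative:

```python
def coverage_rate(registered: int, estimated_footprint: int) -> float:
    """KPI 1: AI Inventory Coverage Rate, as a percentage of estimated footprint."""
    if estimated_footprint == 0:
        return 0.0
    return 100.0 * registered / estimated_footprint

def first_pass_rate(passed_first_submission: int, total_submissions: int) -> float:
    """KPI 3: Pre-Deployment Review Pass Rate, as a percentage."""
    if total_submissions == 0:
        return 0.0
    return 100.0 * passed_first_submission / total_submissions

# Example: 34 of an estimated 40 use cases registered;
# 18 of 24 deployments cleared review on first submission
print(round(coverage_rate(34, 40), 1))    # 85.0
print(round(first_pass_rate(18, 24), 1))  # 75.0
```

The value to the board is the trend, not the single number: a coverage rate stuck at 85% quarter after quarter is a different governance story than one climbing from 60% toward completeness.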

AI Director Boot Camp

Most boards have a medium level of AI literacy and need a boot camp. Directors do not need to become engineers. They need a common vocabulary and a governance frame. The recommended boot camp design is a half day, keeping it highly practical. It should include the following.

  1. AI in the company’s operating model. This means where it touches decisions, risk, and compliance outcomes.
  2. AI risk taxonomy: bias, privacy, security, explainability, reliability, and third-party risk.
  3. Regulatory landscape overview, covering the DOJ ECCP approach to effectiveness, the EU AI Act risk framing, and recurring state-law themes.
  4. Governance model walkthrough to ensure the BOD understands the registry, risk classification, controls, monitoring, and escalation.
  5. Tabletop exercises, such as an AI incident in a GRC context with false negatives in monitoring or biased triage.
  6. Board oversight duties. Teach the BOD how they can meet their obligations, including which questions to ask quarterly, which thresholds trigger escalation, and similar insights.

The deliverable from the boot camp should be a one-page “Director AI Oversight Guide” with the KPIs, escalation triggers, and the quarterly agenda structure.

The Bottom Line for Boards and CCOs

This is the moment to treat AI risk like a board-governed discipline. The organizations that get it right will not be the ones with the longest AI policy. They will be the ones with the clearest operating model, the most reliable reporting cadence, and the strongest evidence of control effectiveness.

If Compliance owns AI governance, then Compliance must also own the proof. That proof is delivered through a registry, a quarterly board agenda item, a balanced KPI dashboard, and assurance through internal audit and red teaming. Add a director boot camp to create shared understanding, and you have the beginnings of a program that is innovation-forward and regulator-ready.

That is the strategic playbook: not fear, not hype, but governance.

Categories
Sunday Book Review

Sunday Book Review: February 22, 2026, The Top Books on Catastrophic Failure Edition

In the Sunday Book Review, Tom Fox considers books that would interest compliance professionals, business executives, or anyone curious. It could be books about business, compliance, history, leadership, current events, or anything else that might interest Tom. In this episode, we look at 4 top books on catastrophic failures and the lessons they teach.

  1. Catastrophe by Richard Posner
  2. Catastrophic Thinking by Ben Shapiro
  3. The Wisdom of Failure by Laurence G. Weinzimmer and Jim McConoughey
  4. Averting Catastrophe by Cass Sunstein

Categories
AI Today in 5

AI Today in 5: February 20, 2026, The Spinx Raises Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. AI compliance demands grow. (PlanAdviser)
  2. Compliance Monitoring: what works, what backfires. (UCToday)
  3. New AI governance tool. (PRNewsWire)
  4. The Spinx raises funds for new AI compliance agents. (FinTechGlobal)
  5. Boys will always be…just boys. (CNBC)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Daily Compliance News

Daily Compliance News: February 19, 2026, The Gambler Takes the Stand Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Connected workflows for compliance. (Spark)
  • Commit $600MM in fraud, no worries, just donate to Trump. (NYT)
  • Write the facts, get fired by Trump. (FT)
  • Tom Goldstein takes the stand in his criminal tax trial. (Reuters)

Categories
AI Today in 5

AI Today in 5: February 18, 2026, The AI for Rural Healthcare Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. AI to transform fraud investigations. (PRNewswire)
  2. Better defensible AI oversight. (PRNewswire)
  3. What’s in your compliance gap? (Forbes)
  4. Is the AI moment here? (FRSF)
  5. Oz wants AI avatars for rural healthcare. (NPR)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Compliance Into the Weeds

Compliance into the Weeds: Truth Stranger Than Fiction: Binance, Iran, Crypto and Compliance

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore it more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly look at recent reporting on Binance that raises questions about the effectiveness of its compliance program, monitorships, and executive attitudes toward compliance.

They recap Binance’s 2023 resolution of U.S. criminal and civil matters involving money laundering and sanctions evasion. They discuss the Fortune article, which reported that Binance continued to route funds through its platform to the Iranian government in 2024 and into 2025. They highlight Mr. Zhao’s public response on X, suggesting that if investigators found misconduct, they had failed to prevent it. The hosts criticize this as a misunderstanding: business units own risk, and compliance’s role is to provide systems, channels, oversight, and escalation rather than to “prevent” all misconduct.

Key highlights:

  • Truth Stranger Than Fiction in Compliance
  • Binance’s 2023 Guilty Plea, $4.3B Penalty & Two Monitorships
  • Compliance Team Fallout: Investigators Fired & CCO on the Move
  • ‘If You Found It, You Failed’: Why CEOs Misunderstand Compliance
  • Iran as the Red Line: Plea Agreement Breach, Politics, and Corruption Risk
  • Will Anyone Enforce This? Rule of Law Questions and What Comes Next

Resources:

Matt in Radical Compliance

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred a Davey, a Communicator Award, and a W3 Award, all for podcast excellence.

Categories
Great Women in Compliance

Great Women in Compliance: The New Architecture of Legal and Compliance with AI

In this episode of Great Women in Compliance, Dr. Hemma R. Lomax speaks with Sam Flynn, co-founder of Josef, about the transformation of legal and compliance functions through technology. They discuss the importance of human-centered design, the role of AI in legal architecture, and the need for trust in AI tools. Sam shares his journey from creating Myki Fines to building self-service legal solutions that bridge the access-to-justice gap. The conversation emphasizes the importance of user experience, governance practices, and the need to rethink traditional professional roles in the legal field.

Takeaways:

  • Legal and compliance functions must evolve to be more human-centered.
  • AI can significantly enhance legal decision-making processes.
  • Trust in technology is crucial for successful implementation.
  • User experience should be prioritized in legal tech solutions.
  • Automation can free up valuable time for legal professionals.
  • Access to justice is a critical issue that can be addressed with technology.
  • Rethinking traditional roles in law can lead to better outcomes.
  • Data-driven insights can improve compliance practices.
  • Collaboration between experts and end-users is essential for success.
  • Legal technology should focus on delivering real value to users.

Sound Bites:

  • “AI should unleash human potential.”
  • “Trust is the key to unlocking value.”
  • “We need to build trust in our technology.”

Chapters:

00:00 Introduction to Legal Transformation

02:32 The Journey of Sam Flynn and Myki Fines

05:30 Rethinking Legal Systems and Design

08:10 Substance Over Form in Legal Processes

10:56 The Role of AI in Legal Architecture

13:39 Building a Legal Front Door

16:24 User Experience in Compliance

18:54 Engagement and Data Utilization

21:56 The Future of Legal Workflows

24:29 Deciding Between Automation and Human Input

26:56 Navigating High-Risk Inquiries

27:50 Strategic Automation for Stakeholder Engagement

28:58 The Importance of Human Expertise in AI

30:57 Transforming Fear into Opportunity with AI

32:59 Building Trustworthy AI in Legal Settings

36:56 Governance Practices for AI Deployment

43:51 Access to Justice: Bridging Gaps with Technology

Guest Biography:

Sam Flynn is the Co-Founder and Chief Operating Officer of Josef, a legal automation platform that empowers legal and compliance teams to create reliable, self-serve tools — no coding required. In his role, Sam leads Josef’s business operations, governance, marketing, and customer success functions, scaling both product impact and organizational trust.

An ex-BigLaw litigator and experienced legal technologist, Sam has long been passionate about using technology to bridge the access-to-justice gap and elevate the delivery of legal services. In 2016, he built Myki Fines, a public-facing legal tech solution that attracted more than 60,000 users in its first month and helped catalyze reforms to unfair laws.

At Josef, Sam combines legal expertise with product and operational leadership to help teams rethink how legal and compliance work gets done — shifting from inbox-driven bottlenecks to strategic architectures that deliver decision-useful guidance at scale. He is a frequent speaker on generative AI in legal, a board member of the Center for Legal Innovation, and an advocate for human-centered legal design.

Categories
Blog

AI and Work Intensification – The Compliance Response

There is a comforting myth circulating in corporate hallways and boardrooms: if we deploy AI across governance, risk, and compliance, the work will shrink. Investigations will move faster. Monitoring will get smarter. Policies will draft themselves. Third-party diligence will become push-button. The compliance function will finally “do more with less.” That myth was challenged in a recent Harvard Business Review article, “AI Doesn’t Reduce Work—It Intensifies It,” by Aruna Ranganathan and Xingqi Maggie Ye.

The authors believe that what happens is work intensification. AI expands throughput, increases expectations, and generates more outputs that still require human judgment, verification, and accountability. Instead of fewer tasks, you get more tasks. Instead of simpler work, you get faster cycles, more iterations, and new forms of quality risk. For the Chief Compliance Officer (CCO) leading AI governance, this is not a side effect. It is a core operating model issue.

If compliance owns AI governance across the enterprise, compliance must also own the discipline of how humans and AI work together. I call that discipline an AI practice standard: management guidance that sets expectations for pace, quality, verification, escalation, and sustainable workload.

Today, we consider this issue as a compliance operating model challenge across all GRC workflows: policy management, investigations, hotline intake, monitoring and surveillance, third-party due diligence, regulatory change management, audit planning, training, and reporting. The tone is cautionary because the risk is real: a compliance function that mistakes AI output volume for compliance effectiveness.

The Compliance Operating Model Problem: More Output, More Review, More Risk

Compliance work is not manufacturing. It is judgment work. It requires discretion, context, and defensible decisions. AI can accelerate inputs and draft outputs, but it does not accept responsibility. The CCO does. The business does. The board does. When AI enters GRC workflows, it tends to create four pressure points:

1. Compression of timelines. If a draft can be produced in five minutes, someone will ask why it cannot be finalized in five more.

2. Explosion of options. AI generates multiple versions, scenarios, and recommendations, which expands decision load and review cycles.

3. Higher volume of “signals.” AI-enabled monitoring produces more alerts, more pattern matches, and more anomalies. Much will be noise. All require triage.

4. Illusion of completion. Teams begin to treat a plausible AI answer as a finished work product. That is how quality defects are born.

The result is a compliance function that looks “faster” while becoming more fragile. Burnout rises. Rework increases. Errors creep into documentation. Controls become less reliable because the humans operating them are overwhelmed by the sheer volume AI makes possible.

All this means the question for the CCO is not, “How do we roll out AI?” The question is, “How do we govern the human work that AI intensifies?”

Five KPIs for Work Intensification Risk

Next, we consider five KPIs specifically designed to measure work intensification. These are board-credible, compliance-owned, and operationally measurable.

1. After-Hours Compliance Work Index

Percentage of compliance work activity occurring outside standard business hours (for example, 6 p.m. to 7 a.m.), measured across key systems (case management, GRC platform activity logs, email metadata, collaboration tool usage). This matters because AI compresses timelines and pushes work into nights and weekends. This index serves as an early warning for burnout and quality failures.

2. AI Rework Rate

Percentage of AI-assisted work products requiring material revision after human review (policies, investigation summaries, risk narratives, diligence reports). This matters because if AI increases speed but doubles rework, you are not gaining productivity. You are shifting effort downstream.

3. Cycle Time Compression vs. Quality Defect Ratio

Track cycle time reductions alongside quality defects (corrections, escalations, documentation gaps, audit findings). You can express this KPI as Cycle Time Improvement / Defect Increase.

This matters because faster is not better if defects rise. This ratio keeps leadership honest.
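One way to operationalize the ratio above. This is a sketch under stated assumptions (percentage-change formulas, an arbitrary zero-defect convention); the function name and thresholds are illustrative, not a standard metric definition:

```python
def compression_vs_defect_ratio(cycle_before: float, cycle_after: float,
                                defects_before: int, defects_after: int) -> float:
    """Cycle Time Improvement / Defect Increase.
    Above 1.0, speed gains outpace quality losses; at or below 1.0 is a warning sign."""
    improvement = (cycle_before - cycle_after) / cycle_before  # fractional speedup
    if defects_before == 0:
        defects_before = 1  # assumption: avoid division by zero on a clean baseline
    defect_increase = (defects_after - defects_before) / defects_before
    if defect_increase <= 0:
        return float("inf")  # faster AND no more defects: unambiguously good
    return improvement / defect_increase

# Example: cycle time fell from 10 to 6 days (40% faster),
# but defects rose from 20 to 30 per quarter (50% more)
print(round(compression_vs_defect_ratio(10, 6, 20, 30), 2))  # 0.8
```

In the example, a 40% speedup bought a 50% defect increase, so the ratio lands below 1.0: exactly the case where leadership should question whether the speed is worth it.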

4. Alert-to-Action Conversion Rate

Percentage of AI-generated alerts that result in a confirmed issue, investigation, remediation, or control enhancement. This matters because AI intensifies monitoring. This KPI exposes whether you are drowning in noise or generating actionable intelligence.

5. Burnout Signal Composite

A quarterly composite score built from pulse surveys such as fatigue, workload, autonomy, attrition in compliance roles, sick leave usage trends, and employee assistance program utilization patterns. This matters because compliance effectiveness depends on people. Burnout is a control failure risk.

These five metrics give the CCO and board a shared view of whether AI is improving the compliance function or simply accelerating it toward exhaustion.

How to Measure the Leading Indicators

Measuring after-hours work, cycle time, quality defects, and burnout indicators does not require exotic tooling. Here is a measurement approach that is realistic and defensible.

After-Hours Work

  • Use system log data from the case management, GRC, and document management platforms to track timestamped activity.
  • Supplement with email and collaboration metadata to measure volume outside standard hours.
  • Report trends by team and workflow, not individuals. This is about operating model health, not surveillance.
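From timestamped activity logs, the after-hours index is a straightforward percentage. A minimal sketch, assuming a 7 a.m.–6 p.m. standard window as in the KPI definition above; weekend and holiday handling is omitted for brevity, and the function name is illustrative:

```python
from datetime import datetime

def after_hours_index(timestamps, start_hour=7, end_hour=18):
    """Percentage of activity events falling outside standard hours
    (default window 7 a.m. to 6 p.m.)."""
    if not timestamps:
        return 0.0
    outside = sum(1 for ts in timestamps
                  if ts.hour < start_hour or ts.hour >= end_hour)
    return 100.0 * outside / len(timestamps)

# Example: four case-management events, one logged at 9:30 p.m.
events = [datetime(2026, 2, 19, 9, 15), datetime(2026, 2, 19, 14, 0),
          datetime(2026, 2, 19, 21, 30), datetime(2026, 2, 20, 11, 45)]
print(after_hours_index(events))  # 25.0
```

Run at the team or workflow level, never the individual level, consistent with the point above: this instruments operating model health, not employee surveillance.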

Cycle Time

  • Establish “start” and “stop” definitions for each workflow:
    • Investigations: intake date to closure date
    • Due diligence: request date to clearance date
    • Policy updates: drafting start to published version
    • Regulatory change: trigger identification to implementation
  • Track AI-assisted versus non-AI-assisted cycle times to isolate the impact.

Quality Defects

  • Define defects as “items requiring material correction after initial completion,” including:
    • Incomplete documentation
    • Wrong risk rating or missing rationale
    • Incorrect regulatory mapping
    • Reopened cases due to insufficient analysis
    • Audit findings tied to workflow execution
  • Capture defects through QA sampling, supervisor review logs, audit results, and post-incident reviews.

Burnout Indicators

  • Run a quarterly pulse survey with 5–7 questions on workload, pace, clarity, and ability to disconnect.
  • Track voluntary attrition and vacancy duration for compliance roles.
  • Include aggregate HR indicators such as overtime trends or sick leave usage, where available.
  • Use a composite score and trend it. The trend line is what matters.
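The composite score itself can be a simple weighted average of pre-normalized indicators. A sketch under stated assumptions: each indicator is already scaled to 0–100 (higher is worse), and the weights shown are placeholders to be tuned to your program:

```python
def burnout_composite(indicators: dict, weights: dict) -> float:
    """Weighted composite of burnout signals, each pre-normalized to 0-100
    (higher = worse). Integer weights keep the arithmetic exact."""
    total_weight = sum(weights[k] for k in indicators)
    return sum(indicators[k] * weights[k] for k in indicators) / total_weight

# Illustrative weights: pulse survey counts most, attrition and sick leave equally
weights = {"pulse_fatigue": 4, "attrition": 3, "sick_leave": 3}

q1 = burnout_composite({"pulse_fatigue": 40, "attrition": 20, "sick_leave": 30}, weights)
q2 = burnout_composite({"pulse_fatigue": 55, "attrition": 35, "sick_leave": 30}, weights)
print(q1, q2)  # 31.0 41.5 (the rising trend, not the level, is the signal)
```

As the bullet above notes, the trend line is what matters: a single quarter's score is noise, while two or three quarters of steady increase is a control environment warning.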

The key is to build instrumentation without creating a culture of monitoring employees. Your goal is not to watch people. Your goal is to protect the control environment.

Adopt an Enterprise AI Practice Standard Now

For an innovation-forward company, the right move is not to slow down. The right move is to govern how you speed up. The call to action is simple and strong: adopt an enterprise AI practice standard as management guidance, owned by Compliance, implemented across all GRC workflows, measured by five work-intensification KPIs, and tested by internal audit and red teaming.

If you do that, you gain three things immediately:

1. A sustainable operating model

2. Defensible governance for regulators and boards

3. A compliance function that remains credible under pressure

AI can make compliance better. But only if the humans who run compliance can still breathe.