Paris-based cybersecurity and digital identity firm Thales on Friday released the complete findings of its 2026 Digital Trust Index, expanding on summary data published earlier this week. The company called on governments and industry bodies to accelerate binding AI governance standards before critical infrastructure gaps widen further.

The full report, distributed to security professionals and policymakers across Europe, North America, and Asia-Pacific, reveals that 67 percent of surveyed enterprises have deployed generative AI tools in at least one production environment, yet fewer than a third say they have formal AI risk assessment processes in place. The gap is sharpest in financial services and healthcare, where regulatory obligations already strain IT security teams.

Thales Chief Technology Officer Todd Moore presented the findings at a briefing in London, warning that accelerated AI adoption without commensurate investment in identity verification and data sovereignty controls is creating systemic exposure. 'The trust infrastructure that underpinned the cloud transition simply does not exist yet for AI workloads,' Moore said. 'Organisations are deploying first and auditing later — that sequencing is dangerous.'

The report arrives as AI platform providers including Microsoft, Google, and a growing cohort of enterprise software vendors face mounting scrutiny over data handling in large language model deployments. Thales flagged that cross-border data flows embedded in AI inference pipelines are increasingly conflicting with GDPR in Europe and emerging data localisation rules in India and Brazil, creating compliance blind spots that legal teams are only beginning to map.

Industry analysts noted that the timing aligns with ongoing EU AI Act implementation deadlines, and several EU member state data protection authorities indicated they would reference the Thales data in upcoming guidance documents. Thales said it plans to brief the European Data Protection Board and select US Congressional staff next week, as legislative interest in AI accountability standards accelerates on both sides of the Atlantic.