According to a 2025 report from the global data security firm Varonis, AI is accelerating risk faster than it’s driving progress as adoption outpaces security measures.1 The company’s study of 1,000 organizations around the globe and across industries yielded some alarming statistics about data security in the era of AI.
The results underscore the need for greater protections for individuals and organizations alike, particularly as the use of AI by both good and bad actors becomes increasingly mainstream.
Here’s a look at some of the vulnerabilities identified by Varonis, as well as some ways to mitigate the new threats.
99%
of the organizations studied had sensitive data dangerously exposed to AI tools.
98%
had unverified apps installed. Among these were unsanctioned GenAI apps (aka “Shadow AI”)—tools that can bypass corporate governance and IT oversight.
- 1,200: Average number of unofficial apps per company.
- 52% of employees used high-risk open authorization (OAuth) apps.
- Even after users log out, OAuth apps retain permission to access sensitive data.
90%
of organizations had sensitive files exposed to all employees via M365 Copilot.
- On average, 25,000+ sensitive folders were exposed to all employees.
- 6% of organizations had sensitive files open to the internet.
- Only 10% of companies had accurately categorized, managed, and protected files from AI misuse, even though this process, known as "labelling," is key to data protection.
90%
of organizations had sensitive data exposed.
66%
had cloud data exposed to anonymous users.
- More organizations are training their own AI processes and products, which means more data—including sensitive data—is at risk.
- If training data sits in multiple clouds, it can be difficult to manage permissions.
- Bad actors can tamper with training data to poison AI models, making breaches harder to detect.
88%
had stale but enabled “ghost users” (accounts of former employees or contractors). Averages per organization:
- 15,000 ghost users;
- 176,000 inactive external identities;
- 10 stale users with admin roles; and
- >31,000 stale permissions.
>87%
of organizations had sensitive data exposed to every user.
>14%
of organizations did not use or enforce multifactor authentication (MFA) across their software platforms and multi-cloud environments. On average, organizations had:
- 1,800 users with non-expiring passwords; and
- 5 global admins with non-expiring passwords.
How to protect your organization
Some tips from Varonis to help prevent AI-driven data breaches and exposures:
1 Be proactive in detecting and addressing threats.
2 Continuously monitor your data and your AI copilots, chatbots, and agents.
3 Lock down permissions and access to prevent identity-based attacks. Varonis found that many organizations were lagging in this area, especially in securing non-human identities such as APIs (application programming interfaces) and service accounts.
4 Use AI and automation tools to accurately label your data. Automated and continuous labelling helps prevent data loss, enforce encryption, and support compliance with regulatory standards.
5 Enable and enforce MFA and data encryption. While MFA isn’t foolproof, it can make it significantly harder for hackers to gain access, and encryption helps protect any data used in a company’s AI model training processes.
6 Use AI tools to catch abnormal and malicious behaviour.
For more on this study, visit Varonis’ website.
This article was originally published in the January/February 2026 issue of CPABC in Focus.
Footnote
1 Varonis, 2025 State of Data Security Report: Quantifying AI’s Impact on Data Risk, varonis.com/blog/state-of-data-security-report.