4/23/2025

AI Usage at Work Is Exploding — But 71% of Tools Put Your Data at Risk

Cameron Coles
Guest Contributor
VP of Marketing

As AI becomes deeply integrated into critical business operations and adopted by increasing numbers of departments and employees, the volume and sensitivity of data flowing into these systems have grown exponentially. Companies now face a dual challenge: harnessing AI's potential while managing the substantial data risks it introduces.

The majority of current AI usage falls under what’s called "shadow AI" – the use of AI tools unsanctioned by corporate IT departments. For forward-thinking organizations, a significant opportunity exists in understanding and leveraging this grassroots AI usage. By identifying how employees are successfully using AI, companies can strategically implement these tools and methodologies on a broader scale, capturing their benefits enterprise-wide.

However, the risks to corporate data cannot be overlooked. Many AI tools incorporate user-provided data into their training models, potentially exposing sensitive information. This characteristic, among other risk factors, indicates that the majority of AI tools currently used in workplaces present significant data security risks. As organizations enable AI adoption, they must also implement robust guardrails to protect their most sensitive information assets.

This comprehensive analysis from Cyberhaven Labs draws on actual AI usage patterns of 7 million workers, providing an unprecedented view into the adoption patterns and security implications of AI in the corporate environment.

AI usage in the workplace is growing exponentially

AI usage at work continues its remarkable growth trajectory. In the past 12 months alone, usage has increased 4.6x, and over the past 24 months, AI usage has grown an astounding 61x. This represents one of the fastest adoption rates for any workplace technology, substantially outpacing even SaaS adoption, which took years to achieve similar penetration levels.
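As a sanity check on how these two figures relate, the 24-month and 12-month multiples imply the growth rate in the first of those two years (an illustrative calculation, not from the report):

```python
# Illustrative arithmetic relating the reported growth figures (not from the report).
# If usage grew 61x over 24 months and 4.6x over the most recent 12 months,
# the implied growth over the first 12 months is 61 / 4.6.

growth_24mo = 61.0   # total growth over the past 24 months
growth_12mo = 4.6    # growth over the most recent 12 months

implied_first_year = growth_24mo / growth_12mo
print(f"Implied first-year growth: {implied_first_year:.1f}x")  # ~13.3x
```

In other words, the reported numbers imply growth was even steeper in the first year of the window than in the most recent one, consistent with the initial wave of adoption following ChatGPT's release.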

The competitive landscape has shifted significantly since last year. While ChatGPT maintains its dominant position as the most-used AI tool in the workplace, Claude has made a dramatic climb from eighth position to second place, and Microsoft Copilot has risen from seventh to third. These shifts reflect the market's consolidation around tools offering enterprise-grade capabilities and security features.

Specialized tools are gaining ground as the market matures. Cursor's appearance at fourth position – notably ahead of GitHub Copilot at eighth – signals a significant shift in developer preferences for AI-assisted coding platforms. The rapid adoption of these specialized tools suggests the AI market is entering a new phase of specialization after the initial dominance of general-purpose platforms.

AI adoption increased in every industry

The past year has witnessed AI adoption extending deeper across all industry sectors, with notable growth in previously underrepresented verticals. Manufacturing and retail organizations, which had been AI adoption laggards, experienced the most dramatic growth rates, with manufacturing companies seeing 20x growth in employee AI adoption and retail firms achieving an even more impressive 24x increase.

Technology companies still lead with 38.9% of employees using AI tools, but retail organizations have surged to second place with 26.4% of employees now regularly using AI tools. Financial services (26.2%), professional services (17.2%), and healthcare (11.8%) round out the top five industries by adoption.

The data suggests AI adoption is following a pattern similar to previous technology waves – beginning in tech-forward sectors before rapidly diffusing to more traditional industries once clear use cases and value propositions emerge. What distinguishes the AI revolution, however, is the unprecedented speed of this cross-sector adoption.


Most AI tools in use in the workplace are high risk

Cyberhaven's comprehensive risk assessment of over 700 AI tools reveals substantial concerns about the current AI ecosystem. A troubling 71.7% of tools fall into high or critical risk categories, with just 11% qualifying for low or very low risk classifications.

Key risk factors include inadvertent exposure of user interactions and training data (present in 39.5% of AI tools) and user data being accessible to third parties without adequate controls (found in 34.4% of tools). These vulnerabilities create substantial data exfiltration risks that organizations must address as AI adoption accelerates.

Most concerning is that 83.8% of enterprise data input into AI tools flows to platforms classified as medium, high, or critical risk – with just 16.2% destined for enterprise-ready, low-risk alternatives. This imbalance highlights the urgent need for organizations to implement more robust AI governance controls.

DeepSeek usage surged and then plateaued

The January 2025 release of DeepSeek's R1 model generated significant attention in the AI community. End user engagement with DeepSeek through its web interface surged dramatically in the initial three weeks post-release, but this growth wasn't sustained. Usage plateaued by the end of the first seven weeks, settling at 672.8% growth relative to pre-release baselines.

For context, the growth of DeepSeek following the R1 release substantially outpaced other major AI model releases. Gemini usage increased 171.9% in the seven weeks following its 2.0 release, while Claude usage rose 136.1% after version 3.5 launched.

Within software development projects, open-source AI models have gained significant traction over the past year. Llama has established a dominant position, consistently accounting for at least 50% of local model development over the past twelve months as developers build custom AI applications and services.

However, the January 2025 release of DeepSeek R1 disrupted the market. Developer adoption of DeepSeek surged rapidly, reaching 17.7% of AI development activity by February – firmly establishing it as the second-most utilized model behind Llama. That initial enthusiasm partially subsided by March 2025, with usage settling at 11.0% of developer activity.

An increasing percentage of corporate data going to AI is sensitive

As AI moves from experimental to operational status within organizations, we're witnessing a concerning trend in the sensitivity of data being processed by these systems. Currently, 34.8% of all corporate data that employees input into AI tools is classified as sensitive – a substantial increase from 27.4% a year ago and more than triple the 10.7% observed two years ago.
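The trend in the share of sensitive data can be summarized directly from the three reported figures (a quick illustrative check, not from the report):

```python
# Trend in the share of corporate data input into AI that is classified as
# sensitive (percentages are from the report; the arithmetic is illustrative).

share_now = 34.8      # % sensitive today
share_1yr_ago = 27.4  # % sensitive one year ago
share_2yr_ago = 10.7  # % sensitive two years ago

print(f"Change vs. last year: +{share_now - share_1yr_ago:.1f} points")
print(f"Ratio vs. two years ago: {share_now / share_2yr_ago:.2f}x")  # > 3x
```

The two-year ratio works out to roughly 3.25x, consistent with the "more than triple" characterization above.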

Examining the specific categories of sensitive data reveals concerning patterns. The most common types of sensitive data employees put into AI are source code (18.7% of sensitive data) and R&D materials (17.1%), highlighting AI's growing role in product development processes. Sales and marketing data constitutes another 10.7%, including marketing plans and confidential data about customers.

Perhaps most alarming are the healthcare-related findings. Health records comprise 7.4% of sensitive data going into AI, such as when medical professionals use AI to draft communications with insurers or summarize patient visits. Similarly, HR and employee records account for 4.8% of sensitive data, with AI increasingly used to draft performance reviews and handle confidential personnel matters.

How AI-generated content is used at work

Understanding how employees use AI-generated content represents an opportunity for organizations seeking to scale AI's benefits. By identifying successful usage patterns in early adopters, companies can strategically extend these approaches across the organization.

Our analysis shows that 35.9% of AI-generated content flows into email and messaging platforms, making communication the dominant use case. Cloud documents receive 18.0% of AI-generated content, spanning everything from summarizing strategic planning documents to formulas used in spreadsheet calculations.

Technical use cases show promising adoption, with 10.8% of AI material entering source code management systems. IT and security functions are embracing AI-generated content as well, with 5.5% of outputs appearing in infrastructure and security tools – typically as automation scripts and configuration templates.

AI adoption is highest among younger, mid-level employees

AI adoption follows distinct patterns across organizational hierarchies, with mid-level employees emerging as the most enthusiastic adopters. Analysts, specialists, and similar mid-tier roles use AI tools 3.5 times more frequently than the next-highest cohort (manager-level employees), suggesting a sweet spot where employees have both the autonomy to adopt new tools and the practical knowledge needed to apply these tools to increase their own productivity.

This pattern holds true within technical teams as well. Among software engineers, mid-level professionals (typically Senior Software Engineers) demonstrate the highest AI usage rates, outpacing both entry-level developers and higher-seniority Staff Software Engineering leaders by a significant margin. Mid-level software engineers use AI 189% more than their more junior counterparts.

Software engineers are leveraging AI coding tools more than ever

Software development represents one of the most transformative areas of AI adoption in the enterprise. While developers typically begin their AI journey through grassroots experimentation, the impact becomes more significant once organizations formally support these tools.

When companies officially deploy specialized AI development environments like Cursor or Cline, usage grows by 400% in the first four months after rollout, quickly becoming integral to development processes. This formal adoption reshapes established development workflows, with traditional integrated development environments (IDEs) such as VS Code, Xcode, and PyCharm experiencing a 23.7% decline in usage when AI alternatives become available.

Take Action: Discover Your Organization's AI Usage and Risk

If your organization is embracing AI's productivity benefits while wrestling with the associated security challenges, understanding your actual usage patterns is the first step toward effective governance.

To discover how your employee base is adopting AI and how your data flows to and from AI tools, request a personalized AI usage audit from Cyberhaven.

Download the complete 2025 AI Adoption and Risk Report for detailed findings, industry benchmarks, and strategic recommendations for securing your organization in the age of AI.

Cyberhaven Labs report
Download the full 2025 AI Adoption & Risk Report
Download now
Web page
Read our Cyberhaven for Gen AI overview
Learn more