Security best practices
3/28/2025
Managing shadow AI: best practices for enterprise security
The rush to work faster with artificial intelligence (AI) can lead employees to accidentally put sensitive data at risk. Take this scenario: someone on the procurement team has a tight deadline, so they upload a confidential contract into an AI tool to review a few redlines. It's unclear whether the AI system is storing the contract's data, how long that data will be retained, or whether it will resurface in a future prompt to someone else. There was no malicious intent here, but there's no visibility into what has happened or will happen to the data, and no controls over compliant use of AI tools. This isn't an issue with just one department; it's happening throughout organizations. Employees are using AI tools in the shadows, leaving companies with little control over their data. In this blog, we'll explore how to manage data exfiltration risks when dealing with unsanctioned AI tools.
What is shadow AI?
Shadow AI refers to the unsanctioned use of AI tools by employees within an organization. These tools often fall outside the purview of security and IT teams, meaning they're not vetted for compliance, security, or data privacy standards. It could be anything from a ChatGPT-powered email assistant to an AI-driven task management app, tools that security teams hope are being used with caution but can't verify without proper monitoring and controls in place.
CIOs are increasingly acknowledging the risks of shadow AI. When asked in Gartner's peer community, “What are you doing to prevent shadow AI practices?” one common response was: “When we do find an instance of shadow AI use, we try to take appropriate action. Still, it's fair to say that we don't have a good handle on the shadow use of AI within the organization. It's a real concern, but I don't know that we have a way to detect, stop, or control it.”
What’s driving the rise of shadow AI?
It all ties back to the mentality that "if you want to keep up, you have to use AI." And the workforce has certainly indicated that it wants to keep up. Adoption of AI tools has grown at a remarkable pace. According to McKinsey, 72% of organizations had adopted AI tools by 2024, up from around 50% in previous years.
How are different departments using generative AI technologies?
The mass adoption of AI tools suggests that teams across the company have found value in leveraging them. In a Forbes business survey, 73% of respondents use or plan to use AI-powered chatbots, while 61% use AI for emails. Even within just the financial services industry, there are many use cases for shadow AI:
Software development
Developers use AI tools to spin up boilerplate APIs in minutes instead of hours. They feed these tools error logs to pinpoint bugs faster, cutting debugging time in half. AI tools are also useful for writing documentation, an exercise some teams prefer to avoid.
Marketing
Marketing teams use AI to quickly write content such as social media blurbs and descriptions of financial products. It can also help rephrase sentences to align with the customer's language, saving teams time and improving personalization.
Customer service
AI-powered chatbots can be the first line of contact between customers and a company. These AI solutions can answer questions customers may have about bank fees, transfer limits, and minimum account balances, shortening the time it takes for customers to get an answer and reducing the number of inquiries that reach customer support.
Loan underwriting
AI tools can help evaluate loan applications and credit approvals by assessing an applicant's creditworthiness. They can quickly scan financial indicators, credit scoring models, and market trends to produce a summary that lets banks make decisions with greater confidence and speed.
Fraud prevention
Bank security teams can use AI to spot suspicious activity, block unauthorized transactions, and prevent fraud. Ever get a call asking, “Did you just spend $1,352 at the Apple Store?” Increasingly, that’s AI working behind the scenes to verify if the purchase was really yours.
What are the security risks of shadow AI?
As more and more teams adopt AI tools to improve their output, security leaders must stay ahead to minimize security risks. Some of these risks include:
Sensitive data loss
It’s not uncommon to hear of employees using personal AI accounts to analyze company data, completely bypassing corporate safeguards. In fact, 11% of data employees paste into ChatGPT is considered confidential. Whether it’s intentional or not, this opens the door for data leaks that could have catastrophic consequences.
Lack of visibility and enforcement
Another issue stems from the lack of visibility into which data is sensitive and which data is flowing out of the organization into AI tools. Without accurate data classification, organizations can't recognize when sensitive data is leaving. A second issue, even assuming accurate classification has been applied, is the lack of monitoring and enforcement. If "customer account information" in a spreadsheet is labeled confidential, does the organization have monitoring in place to catch when the contents of that file are copied into an AI tool? And if it does catch the action, is it able to block it or trigger a warning to the end user? For many security teams, the answer is "probably not."
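To make the enforcement gap concrete, here is a minimal sketch of the kind of check a data loss prevention (DLP) control might run on text before it reaches an AI prompt. The patterns and names here are illustrative assumptions; real DLP products use much richer classifiers (exact-data matching, document fingerprinting, ML models), not a handful of regexes.

```python
import re

# Hypothetical detection patterns for illustration only.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_label": re.compile(r"\bconfidential\b", re.IGNORECASE),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def should_block(text: str) -> bool:
    """Block (or warn on) the paste/upload if any pattern matches."""
    return bool(scan_prompt(text))
```

In practice, a control like this would sit in a browser extension, endpoint agent, or inline proxy, and could either block the action outright or show the user a warning, the two enforcement options discussed above.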
Backdoor vulnerabilities
Developers turn to AI tools for quick code generation, but the outputs aren't always safe. AI coding assistants like GitHub Copilot don't understand code; they mimic patterns they've seen before. That means if there's a security flaw in the training data, or even in an organization's own codebase, the assistant can reproduce the vulnerability without the developers ever knowing. AI-generated code can contain subtle vulnerabilities, or worse, intentional backdoors, that compromise security.
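A classic example of the kind of flaw an assistant can reproduce from its training data is SQL built by string interpolation. The sketch below (a contrived in-memory database, not taken from any real assistant's output) shows the vulnerable pattern next to the parameterized version a code review should require:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, owner TEXT)")
conn.execute("INSERT INTO accounts VALUES (1, 'alice'), (2, 'bob')")

# Vulnerable pattern an assistant may reproduce: string interpolation
# lets an input like "' OR '1'='1" dump every row in the table.
def get_account_unsafe(owner: str):
    query = f"SELECT id FROM accounts WHERE owner = '{owner}'"
    return conn.execute(query).fetchall()

# Safe version: a parameterized query treats the input as data, not SQL.
def get_account_safe(owner: str):
    return conn.execute(
        "SELECT id FROM accounts WHERE owner = ?", (owner,)
    ).fetchall()
```

Both functions look equally plausible in a diff, which is exactly why AI-generated code needs the same (or stricter) review as human-written code.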
How can security teams mitigate shadow AI risks?
Monitor AI usage
Given the increase in AI adoption, security teams should develop capabilities to monitor AI usage. For example, some companies sanction the use of ChatGPT, but most are probably not aware of instances where confidential data is copied into prompts. This is a visibility gap that security teams must close.
Qualify AI tools
The business will continue to adopt AI tools to improve productivity. Therefore, security teams need to align with the business's need to keep innovating. Instead of blocking new AI services outright, security teams can work with the business to vet, onboard, and add AI tools to the list of sanctioned and managed technologies.
Govern and manage AI usage
Security teams should govern AI use with controls such as role-based access control (RBAC) and establish well-defined guidelines for using those applications. Access to each tool can depend on role: developers, for example, may use tools like GitHub Copilot but must ensure a code review process is in place for all AI-generated code.
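A role-to-tool policy like this is simple to express. The sketch below hard-codes a hypothetical mapping for illustration; in a real deployment the policy would live in an identity provider or policy engine (and the role and tool names here are made up):

```python
# Hypothetical policy: which roles are sanctioned to use which AI tools.
SANCTIONED_TOOLS = {
    "developer": {"github-copilot"},
    "marketing": {"chatgpt-enterprise"},
    "underwriting": {"chatgpt-enterprise", "credit-analysis-ai"},
}

def can_use(role: str, tool: str) -> bool:
    """Return True if the given role is sanctioned to use the AI tool."""
    return tool in SANCTIONED_TOOLS.get(role, set())
```

Default-deny is the important design choice here: a role that isn't in the policy, or a tool that hasn't been vetted, gets no access until it goes through the onboarding process described above.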
Let’s use AI, responsibly
Eliminating shadow AI just isn’t feasible. Employees will always find ways to use the tools they believe make them more productive, whether they’re sanctioned or not. As security leaders, it’s our responsibility to ensure that our organizations can adopt AI safely and responsibly. When security teams work with the business to enable AI rather than block it, we create an environment where teams can move faster while minimizing security risks.