As organisations race to deploy AI tools, a quiet revolution is happening behind the scenes – and it’s catching many employers off guard. Welcome to the world of Shadow AI.
What Is Shadow AI?
Shadow AI refers to the unauthorised or unmonitored use of AI tools such as ChatGPT, Claude, or Gemini by employees in the workplace, without the oversight, approval, or governance of their employer.
What’s Happening on the Ground?
In customer service, complaints handling, HR, and other client-facing teams, it’s increasingly common for employees to:
- Paste entire customer emails or complaint letters into AI tools,
- Ask the model to rewrite or improve the content,
- Then send the result back to the customer – faster and more polished than before.
The appeal is obvious. AI improves speed, clarity, and tone – and for employees working in a second language, it’s a productivity lifeline.
But here’s the catch:
In their rush to get results, many employees are unknowingly exposing large amounts of personal data – including special category data – to third-party AI providers.
Names, health information, contact details, account history – all copied and pasted directly into external systems that sit outside the employer’s secure network and privacy safeguards.
The Legal Reality: Are These Data Breaches?
Yes – in many cases, this constitutes a personal data breach under the GDPR, because it’s an unauthorised disclosure of personal data to a third party (often outside the EEA), usually with no lawful basis, data processing agreement, or technical safeguards in place.
This is not a hypothetical issue. It’s happening hundreds of times a day in some organisations – and many don’t realise it.
And things may be getting worse.
Recent Developments: Data Retention by AI Providers
Following a recent court order in New York, OpenAI confirmed that it will retain logs of user interactions, including data users may believe has been deleted. Users should be aware that once data has been entered, it cannot truly be “taken back”. This is particularly concerning where employees have input sensitive personal information or confidential company information into the system.
What Can Employers Do?
Blocking AI tools outright is an option, but it may simply drive employees to less secure alternatives, or to sending data to a non-work email address and accessing an AI tool from a personal computer – creating further data breaches and increased risk.
Instead, consider a structured AI governance strategy that includes the following:
AI Acceptable Use Policy
Establish clear, written guidelines that set out:
- When and how AI tools can be used,
- Which tools are permitted (and which are not),
- What data may never be input into third-party systems (a rule that can be backed by a technical control, sketched after this list),
- That use of any AI tool is conditional on the employee checking 100% of the output before it is relied on or sent.

Alongside the policy, ensure staff are trained on the risks of hallucinations, to avoid a culture of AI exceptionalism in which AI output is trusted uncritically.
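By way of illustration, the “what data may never be input” rule can be reinforced with a simple technical control: screening text for obvious personal data before it ever reaches an external AI tool. The sketch below is a minimal, hypothetical example in Python; the regular expressions, categories, and the screen_for_prompt function are assumptions made for illustration, and a production control would rely on a dedicated PII-detection or data loss prevention (DLP) service rather than hand-rolled patterns.

```python
import re

# Hypothetical patterns for obvious personal data. These regexes only
# illustrate the idea; a real control would use a dedicated PII/DLP service.
BLOCKED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\b(?:\+?\d[\s-]?){9,14}\d\b"),
    "account number": re.compile(r"\b\d{8,12}\b"),
}

def screen_for_prompt(text: str) -> list[str]:
    """Return the categories of personal data detected in the text.

    An empty list means the draft passed this (very rough) screen and may
    be sent to an approved AI tool; any findings mean the draft must be
    redacted or escalated under the acceptable use policy.
    """
    return [label for label, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    draft = "Customer John Smith (john.smith@example.com) rang about account 12345678."
    findings = screen_for_prompt(draft)
    if findings:
        print("Blocked - personal data detected:", ", ".join(findings))
    else:
        print("No obvious personal data found; OK to submit to the approved tool.")
```

In practice, a check like this might sit in a gateway or browser extension between staff and the AI tool, so the policy is enforced automatically rather than resting entirely on each employee's judgment.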
Use Work-Sanctioned AI Tools
If employees are going to use AI, give them a secure, approved way to do it. Tools like Microsoft Copilot, integrated into Microsoft Teams and Microsoft 365, offer AI functionality within your enterprise environment – keeping data inside your ecosystem and under your control.
Final Thought
Shadow AI isn’t going away. Employees are embracing AI tools because they offer real, tangible benefits. But without proper governance, these tools can create systemic legal and security risks – quietly, and at scale.
By combining education, clear policies, approved tools, and data governance, businesses can gain the benefits of AI – without exposing themselves to avoidable regulatory danger.