Shadow IT and Shadow AI: The Invisible Risks Threatening Your SME
Introduction: When Your Everyday Tools Become an Invisible Risk
Your employees use ChatGPT to write emails, DeepL to translate documents, or their personal Dropbox to share files. These practices seem harmless, but they represent one of the most underestimated cybersecurity risks for SMEs: Shadow IT and its new variant, Shadow AI.
Key definition: Shadow IT refers to all software, applications, and cloud services used by employees without the authorization or supervision of the IT department. Shadow AI is its extension: the unregulated use of generative artificial intelligence tools (ChatGPT, Gemini, DeepL…) in a professional context.
The reality is simple: your company data circulates on tools that you do not control, whose existence you are sometimes unaware of, and which escape any security policy. For a Swiss SME subject to the nLPD and concerned about its reputation, this situation represents a major vulnerability.
In this article, you will discover what Shadow IT and Shadow AI really are, what concrete risks they pose to your business, and above all how to regain control without hindering the productivity of your teams.
What is Shadow IT and How Has It Evolved?
Shadow IT has grown considerably with the democratization of the cloud and the explosion of SaaS applications. WhatsApp for communicating with customers, a personal Google Drive for storing documents, management applications found on the Internet: these are all common examples in Swiss SMEs.
Your employees are not trying to circumvent the rules out of malice. They simply want to be more efficient. If the official tool is slow or unsuitable, they will naturally look for a more practical solution elsewhere.
In 2025, this phenomenon took on a new dimension with the massive arrival of generative artificial intelligence: we now speak of Shadow AI. ChatGPT, Claude, Gemini, DeepL and dozens of other AI tools are used daily to write reports, summarize meetings, translate documents or analyze data. These assistants boost productivity, but their unregulated use exposes data to leaks on an unprecedented scale.
The Hidden Risks of Generative AI for Your SME
Your Data Feeds Third-Party AI Models
Most free AI services operate on a simple model: you use the tool for free, but your data is used to train and improve artificial intelligence models.
When your CFO copy-pastes your projected balance sheet into ChatGPT to get a summary, this confidential data leaves your company's secure environment. It is transmitted to servers often located in the United States and may then be used to train future versions of the model. There is a risk that this sensitive information could later be returned by the AI in response to a question from another user, including a competitor.
nLPD and Cloud Act: A Twofold Legal Risk
Beyond the loss of intellectual property, this practice poses concrete legal problems for Swiss SMEs:
- The nLPD (revised Federal Act on Data Protection, in force since September 2023) imposes strict rules on the transfer of personal data abroad. By using public AI tools without contractual guarantees, your company is potentially violating the law.
- The American Cloud Act (2018) allows American judicial authorities, upon presentation of a warrant, to demand access to data held by American companies such as OpenAI (ChatGPT), Google or Microsoft, even when this data is physically stored in Europe. This law applies extraterritorially and the persons concerned are not necessarily informed of this access.
Even if the Swiss-U.S. Data Privacy Framework (which came into force in September 2024) better regulates these transfers for certified American companies, the Cloud Act remains applicable and represents a real risk for sensitive data.
| Risk | Source | Potential impact for your SME |
|---|---|---|
| Leak of confidential data | Public generative AI (ChatGPT, Gemini…) | Loss of intellectual property, violation of customer contracts |
| nLPD non-compliance | Transfer of personal data outside Switzerland | Sanctions from the Federal Data Protection and Information Commissioner (FDPIC), fines, damage to reputation |
| Access by foreign authorities | American Cloud Act | Exposure of strategic data to unauthorized third parties |
| Loss of file control | Personal Dropbox/Google Drive | Data inaccessible when an employee leaves |
Concrete Scenario: A Swiss HR SME Exposed
Consider a Swiss SME active in HR consulting. A consultant uses ChatGPT to summarize evaluation interviews, copying notes that contain names, performance comments and salary data. This information can end up in the hands of an unauthorized third party, with all the consequences that entails: damage to reputation, lawsuits, loss of the client, sanctions from the Federal Data Protection and Information Commissioner. Applying the principle of least privilege could have considerably limited the exposure.
Why Banning is Not the Solution
Faced with these risks, the temptation to ban these tools outright is understandable, but a ban is rarely effective. A total ban pushes practices underground: your employees will continue to use these tools while hiding it, preventing you from measuring and managing the risks.
If your teams use ChatGPT, it is because they derive a real benefit from it: saving time, producing better-quality content, focusing on higher-value tasks. The recommended approach is support and governance.
Simple Steps Your Employees Can Take Today
If your teams use public AI tools and you cannot immediately provide them with secure alternatives, here are the practices that considerably reduce the risks:
- Systematic anonymization — Before copying text into ChatGPT, delete or replace all sensitive information: people's names, phone numbers, emails, addresses, names of client companies, project names, precise financial amounts.
- Use generic aliases — Instead of “Dupont SA Company”, write “Client A”. Instead of a confidential project name, “Project X”. This habit requires minimal effort but considerably protects your data.
- Never submit trade secrets, critical source codes, passwords, API keys or any information giving access to your systems. This data must never leave your secure environment.
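The aliasing habit described above can even be partly automated. As a minimal sketch (the alias map, regex patterns and example names below are illustrative assumptions, not an exhaustive redaction tool), a small script can replace known client names and common identifiers before anything is pasted into a public AI tool:

```python
import re

# Illustrative alias map: real client/project names -> generic placeholders.
# In practice, this list would come from your own CRM or project registry.
ALIASES = {
    "Dupont SA": "Client A",
    "Project Helvetia": "Project X",
}

# Simple patterns for common identifiers (deliberately not exhaustive).
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\+?\d[\d .-]{8,}\d"), "[PHONE]"),        # phone numbers
    (re.compile(r"CHF\s?[\d']+"), "[AMOUNT]"),             # CHF amounts
]

def anonymize(text: str) -> str:
    """Replace known names and common identifiers with generic placeholders."""
    for name, alias in ALIASES.items():
        text = text.replace(name, alias)
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

note = "Meeting with Dupont SA: contact jean@dupont.ch, +41 22 345 67 89, budget CHF 120'000."
print(anonymize(note))
# → Meeting with Client A: contact [EMAIL], [PHONE], budget [AMOUNT].
```

A script like this cannot catch everything (free-text performance comments, for instance), so it complements human review rather than replacing it.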
How to Regain Control as a Leader
Your role is not only to raise awareness, but also to provide the tools to work effectively while respecting security rules.
Secure AI Solutions to Replace Public ChatGPT
The first step is to offer Swiss or European alternatives to public AI tools:
- Euria from Infomaniak — Sovereign AI assistant entirely hosted in Switzerland. Your data never leaves Infomaniak's Swiss datacenters and is never used to train AI models. Offers an ephemeral mode for ultra-sensitive data (no trace kept, even by Infomaniak). Free in its basic version, compliant with the nLPD, with functionalities similar to ChatGPT (writing, translation, document analysis, audio transcription).
- ChatGPT Enterprise / Microsoft Copilot for Microsoft 365 — For companies already in the Microsoft ecosystem or preferring American solutions with contractual guarantees: your data is not used to train the models, remains confidential and is processed in accordance with regulations. Centralized access management included.
- Mistral AI — French company present in Switzerland since December 2024. Offers “Le Chat Enterprise”, deployable on public or private cloud, with a sovereign approach compliant with the GDPR.
These solutions have a cost, but it must be balanced against the risks of a data leak. Losing the trust of a major client can have much more serious consequences than investing in secure tools.
Regain Control Over File Storage and Sharing
Shadow IT also concerns WeTransfer, personal Dropbox and personal Google Drive. These solutions escape your control: no visibility on access, no centralized backup, often no multi-factor authentication. If an employee leaves the company, the files leave with them.
Clear rule: company data must reside exclusively on company tools.
| Need | Public solution (to avoid) | Recommended sovereign alternative | Key advantage |
|---|---|---|---|
| Storage and collaboration | Personal Dropbox / Google Drive | kDrive (Infomaniak) | Swiss hosting, up to 106 TB, nLPD compliant |
| Ultra-confidential data | — | Proton Drive | Zero-access encryption, Swiss jurisdiction, outside Cloud Act |
| Large file transfer | WeTransfer | SwissTransfer | Free, up to 50 GB, AES-256 encryption, Swiss hosting |
| Professional backup | — | Swiss Backup (Infomaniak) | Anti-ransomware, replication on 2 Swiss datacenters, AES-256 |
| Generative AI | Free ChatGPT / Gemini | Euria (Infomaniak) | Data never used for training, ephemeral mode available |
Implement a Clear Policy and Support Change
Formalize your usage policy in a document accessible to everyone. This charter must explain which tools are authorized, which are prohibited, and why. It specifies best practices and the consequences of non-compliance.
But a written policy is not enough. Organize training sessions explaining the risks with real examples adapted to your sector. If your employees understand why these rules exist, they will be more inclined to respect them. A security audit adapted to your SME can help you identify risk areas and prioritize your actions.
Establish a continuous dialogue. Encourage your teams to report the tools they would like to use. Evaluate these requests and find secure alternatives. This collaborative approach transforms security from a constraint into a shared approach.
Frequently Asked Questions About Shadow IT and Shadow AI
What exactly is Shadow IT?
Shadow IT refers to all software, applications and cloud services used by employees in a professional context without the IT department being aware of it or having given its authorization. Common examples: WhatsApp for communicating with customers, personal Dropbox for storing company files, or ChatGPT for writing professional documents.
What is Shadow AI?
Shadow AI is an extension of Shadow IT specific to generative artificial intelligence tools. It refers to the unregulated use of public AI assistants (ChatGPT, Gemini, Claude, DeepL…) in a professional context, without a security policy or contractual guarantees on the processing of submitted data.
Why is Shadow AI particularly risky for a Swiss SME?
Because the data entered into free AI tools can be used to train the models, is stored on servers often located in the United States (exposing your company to the Cloud Act), and its transfer may constitute a violation of the nLPD if it contains personal data. A leak of customer data can lead to legal sanctions, loss of contracts and lasting damage to your reputation.
What is the best Swiss alternative to ChatGPT for an SME?
Euria from Infomaniak is the most complete sovereign alternative for Swiss SMEs: exclusive hosting in Switzerland, data never used to train the models, ephemeral mode for ultra-sensitive data, and functionalities equivalent to ChatGPT (writing, translation, document analysis). The basic version is free.
Should ChatGPT be banned for employees?
A total ban is rarely effective: it pushes practices underground without reducing the risks. The recommended approach is governance: raising awareness among teams about the risks, providing secure alternatives adapted to their needs, and formalizing a clear usage policy with anonymization rules for cases where no immediate alternative exists.
Conclusion: Security and Productivity are Not Incompatible
Shadow IT and Shadow AI will not disappear. They reflect a reality: your employees are looking to be efficient. Your role is not to curb this dynamic, but to channel it in a secure manner.
By understanding the risks, raising awareness among your teams, providing adapted professional tools and establishing clear governance, you transform an invisible threat into a competitive advantage.
Two concrete actions right now:
- Evaluate which AI and storage tools are actually used in your company, then identify the professional versions or Swiss alternatives that can replace them.
- Communicate clearly with your teams and support them in this transition. To go further in awareness, PhishTrainer can help you train your employees on cyber risks in an interactive way.
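The first action above, inventorying which tools are actually in use, can start from data you likely already have. As a sketch under stated assumptions (the domain watchlist is illustrative, and the function assumes a CSV log export with a `domain` column, which you would adapt to your firewall or DNS resolver's real format), a few lines can count hits on known SaaS and AI services:

```python
import csv
from collections import Counter

# Illustrative watchlist mapping SaaS/AI domains to services; extend with your own.
WATCHLIST = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "gemini.google.com": "Gemini",
    "www.deepl.com": "DeepL",
    "www.dropbox.com": "Dropbox",
    "wetransfer.com": "WeTransfer",
}

def inventory(dns_log_path: str) -> Counter:
    """Count hits per shadow-IT service in a DNS log.

    Assumes a CSV export with a 'domain' column; adapt the parsing
    to whatever format your firewall or DNS resolver actually emits.
    """
    hits = Counter()
    with open(dns_log_path, newline="") as f:
        for row in csv.DictReader(f):
            service = WATCHLIST.get(row["domain"])
            if service:
                hits[service] += 1
    return hits
```

The resulting counts tell you which services to prioritize when choosing professional versions or Swiss alternatives, and give you concrete numbers to share with your teams instead of abstract warnings.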
The cybersecurity of your Swiss SME also depends on this. Bexxo can support you in this governance and security approach.
