Shadow AI Discovery
Shadow AI refers to AI tools adopted by employees without the knowledge or approval of the organization. Research suggests that around 60% of employees use unapproved AI tools at work — from ChatGPT and Copilot to image generators and AI-powered analytics. Organizations cannot govern AI systems they do not know about.
What is Shadow AI?
Shadow AI is the AI equivalent of Shadow IT — the use of AI-powered tools, services, and applications by employees without explicit organizational authorization. Unlike formally procured vendors that go through risk assessment and contract review, shadow AI tools enter the organization through individual adoption: an engineer installs a code assistant, a lawyer uses ChatGPT for contract review, a marketing team signs up for an AI copywriting tool.
Under the EU AI Act, organizations that deploy AI systems have specific obligations as deployers, whether or not the deployment was sanctioned. Shadow AI creates blind spots in compliance, data protection, and risk management programs.
Why it matters
Compliance risk
Unregistered AI systems may fall under high-risk categories without appropriate conformity assessments, human oversight, or risk documentation.
Data protection
Employees may input personal data, trade secrets, or client information into AI tools that train on user inputs or store data in non-compliant jurisdictions.
Regulatory exposure
Under the EU AI Act, deployers are responsible for AI systems used within their organization — even those adopted informally by staff.
Governance gap
You cannot classify risk, run assessments, or set oversight gates for AI systems you do not know exist. Shadow AI is the missing first step.
Shadow AI vs Vendor Risk
Shadow AI Discovery and Vendor Risk Management solve fundamentally different problems in opposite directions. They are complementary, not overlapping.
Vendor Risk
- Organization procures a vendor through formal channels
- Known from day one — the org chose the vendor
- Core question: “Is this vendor we selected risky?”
- Lifecycle: Procurement → Assessment → Monitoring
Shadow AI
- Employees adopt AI tools on their own, outside procurement
- Unknown until reported or discovered
- Core question: “What AI tools are people actually using?”
- Lifecycle: Discovery → Triage → Approve or Prohibit
How it works
Employees self-report
Staff search a pre-loaded catalog of 36 known AI tools across 8 categories (LLM Chat, Code Assistants, Image Generation, Video/Audio, Writing, Business Tools, Data Analytics, Search) or enter a custom tool. They specify their department and describe how they use the tool.
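A self-report like the one described above might be captured as a simple record. This is an illustrative sketch only; the field names and schema are assumptions, not the product's actual data model.

```python
from dataclasses import dataclass

@dataclass
class ShadowToolReport:
    """Hypothetical shape of an employee's shadow AI self-report."""
    tool_name: str    # from the pre-loaded catalog, or a custom entry
    category: str     # e.g. "LLM Chat", "Code Assistants"
    department: str   # the reporting employee's department
    usage: str        # free-text description of how the tool is used
    status: str = "DISCOVERED"  # every new report enters the queue here

# Example: a lawyer reporting their use of ChatGPT
report = ShadowToolReport(
    tool_name="ChatGPT",
    category="LLM Chat",
    department="Legal",
    usage="Summarizing contract clauses",
)
```

Defaulting `status` to `DISCOVERED` reflects that every report lands in the review queue before any governance decision is made.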
Triage and review
Reported tools enter the review queue with status DISCOVERED. The AI Officer or governance team reviews each tool, assessing risk indicators: Does it process personal data? Does it train on user input? Is it cloud-hosted? SOC 2 certified? GDPR compliant?
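The risk indicators listed above can be thought of as a checklist the reviewer runs per tool. The sketch below is an assumption about how such a check might be expressed in code; the flag names and the input dict are hypothetical, not the product's API.

```python
def risk_indicators(tool: dict) -> list[str]:
    """Return the list of risk indicators raised by a reported tool."""
    flags = []
    if tool.get("processes_personal_data"):
        flags.append("processes personal data")
    if tool.get("trains_on_input"):
        flags.append("trains on user input")
    if tool.get("cloud_hosted"):
        flags.append("cloud-hosted")
    if not tool.get("soc2_certified"):
        flags.append("no SOC 2 certification")
    if not tool.get("gdpr_compliant"):
        flags.append("no GDPR compliance claim")
    return flags

# A tool that trains on personal data in the cloud, with no attestations,
# raises every indicator on the checklist.
flags = risk_indicators({
    "processes_personal_data": True,
    "trains_on_input": True,
    "cloud_hosted": True,
    "soc2_certified": False,
    "gdpr_compliant": False,
})
```

Note the asymmetry: the first three flags fire when a property is present, the last two when an attestation is absent, mirroring how the questions are posed in the review.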
Approve or prohibit
Each tool is either APPROVED for continued use (with conditions) or PROHIBITED and flagged for removal. Prohibited tools trigger a clear organizational signal that the tool is not sanctioned.
Promote to formal governance
Approved shadow tools can be promoted into the AI Registry as a formal AI System — and optionally into a Vendor record — in a single step. This bridges informal discovery into your full governance program.
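The one-step promotion described above might look like the following. This is a hypothetical sketch: the function name, record shapes, and the `source` provenance field are assumptions for illustration, not the product's actual interface.

```python
def promote(tool: dict, create_vendor: bool = False) -> dict:
    """Promote an APPROVED shadow tool into a formal AI System record,
    optionally creating a linked Vendor record in the same step."""
    if tool["status"] != "APPROVED":
        raise ValueError("only APPROVED tools can be promoted")
    system = {
        "name": tool["name"],
        "source": "shadow_ai_discovery",  # provenance of the record
        "vendor": {"name": tool["provider"]} if create_vendor else None,
    }
    tool["status"] = "PROMOTED"  # the shadow record exits the triage queue
    return system
```

Guarding on APPROVED enforces the workflow: a tool cannot skip triage and land in the AI Registry directly.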
Status Workflow
- DISCOVERED: reported by an employee
- Under review: being assessed by the governance team
- APPROVED: sanctioned for use
- PROHIBITED: banned and must be removed
- Promoted: added to the AI Registry
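The status workflow can be sketched as a small state machine. DISCOVERED, APPROVED, and PROHIBITED appear as statuses elsewhere on this page; UNDER_REVIEW and PROMOTED are assumed labels for the remaining steps, and the transition table itself is an inference, not a documented specification.

```python
# Allowed transitions between shadow AI statuses (inferred sketch).
TRANSITIONS = {
    "DISCOVERED":   {"UNDER_REVIEW"},
    "UNDER_REVIEW": {"APPROVED", "PROHIBITED"},
    "APPROVED":     {"PROMOTED"},  # promotion into the AI Registry
    "PROHIBITED":   set(),         # terminal: the tool must be removed
    "PROMOTED":     set(),         # terminal: now formally governed
}

def can_move(frm: str, to: str) -> bool:
    """Check whether a status transition is allowed."""
    return to in TRANSITIONS.get(frm, set())
```

Modeling PROHIBITED as terminal matches the page's framing: a banned tool is flagged for removal, not re-queued for approval.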