Shadow IT Policy Template
Copy-Ready Policy Language for Your Organization
Nine sections of professional policy language, ready to adapt. Includes AI-specific provisions, amnesty mechanisms, and industry addenda for HIPAA, GDPR, and SOC 2.
1. Purpose and Scope
This policy establishes the requirements for software, cloud services, and artificial intelligence tools used within [COMPANY NAME]. It applies to all employees, contractors, and third-party personnel who access [COMPANY NAME] systems, data, or networks. The purpose of this policy is to ensure that all technology tools used for business purposes are assessed for security, compliance, and data protection risks before adoption. This policy does not prohibit the use of new tools. It requires that tools are evaluated through a risk-appropriate process before use with [COMPANY NAME] data.
Customization: Replace [COMPANY NAME] with your organization name. Adjust scope to include or exclude specific groups (e.g., acquired entities, subsidiary companies).
2. Definitions
Shadow IT: Any software, cloud service, hardware device, or AI tool used for business purposes that has not been approved through [COMPANY NAME]'s software procurement process.
Authorized Software: Applications listed in the [COMPANY NAME] Approved Software Catalog, accessible at [CATALOG URL].
Shadow AI: Any artificial intelligence tool, including generative AI chatbots, coding assistants, image generators, and AI-powered browser extensions, used for business purposes without IT department approval.
Sensitive Data: Data classified as Confidential or Regulated under [COMPANY NAME]'s Data Classification Policy, including but not limited to: personally identifiable information (PII), electronic protected health information (ePHI), payment card data, intellectual property, and trade secrets.
Customization: Add your data classification levels. Link to your existing data classification policy. Add your approved software catalog URL.
3. Acceptable Use Requirements
3.1 Employees must use Authorized Software for all business activities involving [COMPANY NAME] data.
3.2 Employees must not enter Sensitive Data into any tool that is not listed in the Approved Software Catalog.
3.3 Employees must not create accounts on SaaS platforms using their [COMPANY NAME] email address without prior approval.
3.4 Employees must not install browser extensions, desktop applications, or mobile applications that access [COMPANY NAME] data without prior approval.
3.5 Employees must not use personal accounts (e.g., personal ChatGPT, personal Google Drive) for [COMPANY NAME] business activities.
3.6 Free-tier SaaS tools are subject to the same approval requirements as paid tools if they process [COMPANY NAME] data.
Customization: Adjust acceptable use clauses based on your risk tolerance. Some organizations allow free-tier tools for non-sensitive data with notification only.
4. Risk-Tiered Approval Process
All new software requests are classified into three risk tiers with corresponding approval processes:
Tier 1 (Low Risk): Tools in the pre-approved catalog that do not process Sensitive Data. Approval: automatic. Timeline: immediate.
Tier 2 (Medium Risk): New tools that process internal (non-sensitive) data or require SaaS account creation. Approval: IT review, including DPA assessment and SSO compatibility check. Timeline: 5 business days.
Tier 3 (High Risk): Tools that process Sensitive Data or Regulated Data, or that require elevated access privileges. Approval: full security assessment, legal review, and DPA execution. Timeline: 15 business days.
Requests are submitted via [REQUEST FORM URL]. IT commits to the stated timelines for each tier.
Customization: Adjust tier definitions and timelines to match your procurement process. The key principle: Tier 1 must be instant, Tier 2 must be under a week.
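For teams automating intake triage, the tier rules above reduce to a short decision procedure. The sketch below is illustrative only; the `ToolRequest` field names are assumptions, not terms from the policy, and should be mapped to your own request form.

```python
from dataclasses import dataclass

# Hypothetical intake record; field names are illustrative assumptions,
# mapped from your request form at [REQUEST FORM URL].
@dataclass
class ToolRequest:
    name: str
    in_preapproved_catalog: bool
    processes_sensitive_data: bool    # Confidential/Regulated per the Data Classification Policy
    requires_elevated_access: bool
    requires_saas_account: bool

def classify_tier(req: ToolRequest) -> int:
    """Map a software request to an approval tier per Section 4."""
    # Tier 3: Sensitive/Regulated data or elevated privileges ->
    # full security assessment + legal review (15 business days).
    if req.processes_sensitive_data or req.requires_elevated_access:
        return 3
    # Tier 1: already in the pre-approved catalog, no sensitive data -> automatic.
    if req.in_preapproved_catalog:
        return 1
    # Tier 2: everything else (internal data, new SaaS account) -> IT review (5 business days).
    return 2
```

Note the ordering: sensitive-data checks run first, so a catalog tool repurposed for Regulated Data still lands in Tier 3.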
5. AI-Specific Provisions
5.1 All AI tools, including generative AI chatbots, coding assistants, image generators, and AI-powered productivity tools, are subject to this policy regardless of whether they are free, browser-based, or require no installation.
5.2 Employees must not enter Sensitive Data, proprietary code, trade secrets, customer data, or internal strategy documents into any AI tool that is not in the Approved Software Catalog.
5.3 AI-generated content used in customer-facing communications, legal documents, financial reports, or regulatory filings must be reviewed by a qualified human before publication or submission.
5.4 [COMPANY NAME] maintains an approved AI tool list as part of the Approved Software Catalog. Enterprise-licensed AI tools with data processing agreements, SSO integration, and audit logging are available for the following categories: [LIST CATEGORIES].
5.5 To support EU AI Act compliance (effective August 2, 2026), all AI tool usage must be documented. Employees must not deploy AI tools for decision-making that affects individuals without documented human oversight.
Customization: List your approved AI tools by category. Adjust EU AI Act provisions based on your EU presence and AI risk exposure.
6. Amnesty and Self-Reporting
6.1 [COMPANY NAME] operates a rolling amnesty program for shadow IT self-reporting. Employees who voluntarily disclose unauthorized tools through the self-reporting form at [FORM URL] will not face disciplinary action.
6.2 Self-reported tools will be evaluated through the standard risk-tiered approval process. Tools that pass assessment will be added to the Approved Software Catalog. Tools that fail assessment will be migrated to approved alternatives with IT support.
6.3 The amnesty program is permanent and ongoing. It is not limited to a specific period.
6.4 Amnesty does not apply to tools that have already caused a data breach or compliance violation at the time of reporting.
Customization: Create a simple self-reporting form. Consider quarterly amnesty campaigns with reminders. Some organizations add incentives (recognition, swag) for self-reporting.
7. Exceptions Process
7.1 Exceptions to this policy may be granted for legitimate business needs where no approved alternative exists.
7.2 Exception requests require: business justification, risk assessment, data classification of the information to be processed, proposed compensating controls, and a defined review date.
7.3 Exceptions are approved by the IT Security team and reviewed at the next quarterly governance review.
7.4 Exceptions are time-limited (maximum 90 days) and must be renewed if the need persists.
7.5 All active exceptions are documented in the shadow IT governance dashboard.
Customization: Adjust exception authority (IT Security, CISO, VP of IT). Adjust time limits based on your risk tolerance.
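The 90-day limit in 7.4 is easy to operationalize as a scheduled check against the governance dashboard. A minimal sketch, assuming only a grant date per exception (the status strings and 14-day warning window are illustrative choices, not policy requirements):

```python
from datetime import date, timedelta

MAX_EXCEPTION_DAYS = 90   # Section 7.4 time limit
WARNING_WINDOW_DAYS = 14  # illustrative lead time for scheduling a renewal review

def exception_status(granted_on: date, today: date) -> str:
    """Flag exceptions approaching or past the 90-day limit (Section 7.4)."""
    expiry = granted_on + timedelta(days=MAX_EXCEPTION_DAYS)
    if today > expiry:
        return "expired - renewal required"
    if (expiry - today).days <= WARNING_WINDOW_DAYS:
        return "expiring soon - schedule renewal review"
    return "active"
```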
8. Enforcement and Consequences
Enforcement follows a graduated approach:
First occurrence: Education. The employee is informed of the policy and the specific risk, and is directed to approved alternatives. No disciplinary action.
Second occurrence: Formal warning. The manager is notified. The employee acknowledges the policy in writing. IT monitors for continued usage.
Third occurrence: Escalation to HR for disciplinary review consistent with [COMPANY NAME]'s disciplinary policy.
Immediate escalation: Unauthorized tools processing Regulated Data (ePHI, PCI data) or creating an active compliance violation are blocked immediately, regardless of occurrence count.
Customization: Align enforcement with your existing HR disciplinary framework. Adjust the escalation ladder based on your organization's culture.
9. Review Cadence
9.1 This policy is reviewed and updated annually by the IT Governance team.
9.2 The Approved Software Catalog is reviewed quarterly to add new approved tools and retire deprecated ones.
9.3 Shadow IT governance metrics are reported to leadership quarterly, including: total shadow apps discovered, shadow IT spend, compliance exposure, and governance maturity score.
9.4 Policy amendments are communicated to all employees within 30 days of approval.
9.5 Next scheduled review: [DATE].
Customization: Set a specific annual review date. Quarterly catalog reviews should align with your compliance calendar.
Industry-Specific Addenda
HIPAA Addendum
- a) All software processing ePHI must have an executed Business Associate Agreement (BAA) on file before any patient data is entered.
- b) Shadow apps discovered processing ePHI without a BAA trigger an immediate breach risk assessment per the HIPAA Breach Notification Rule (45 CFR 164.400-414).
- c) AI tools must not be used to process, summarize, or generate content containing ePHI unless the tool has an executed BAA and meets HIPAA security requirements.
- d) Mobile device shadow apps are subject to the same ePHI controls as desktop applications.
GDPR Addendum
- a) All software processing EU personal data must have a signed Data Processing Agreement (DPA) compliant with GDPR Article 28.
- b) Shadow apps processing EU personal data are unauthorized data processors and must be reported to the Data Protection Officer (DPO) immediately upon discovery.
- c) Data residency for all approved software must be documented. EU personal data may only be processed in jurisdictions with adequate data protection (GDPR Chapter V).
- d) AI tools processing EU personal data require a Data Protection Impact Assessment (DPIA) per GDPR Article 35.
SOC 2 Addendum
- a) All software with access to customer data must be documented in the system inventory used for SOC 2 audit evidence.
- b) Shadow apps discovered during SOC 2 audit preparation must be remediated (approved or removed) before the audit window opens.
- c) OAuth grants from the organizational IdP to unauthorized applications must be reviewed and revoked as part of quarterly access reviews.
- d) Shadow IT discovery is included as a detective control in the SOC 2 Trust Services Criteria (CC6.1, CC6.6).
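The quarterly OAuth review in item c) can be partially automated by diffing IdP grants against the Approved Software Catalog. The sketch below is a generic outline under stated assumptions: the grant records and the `app_name` field are hypothetical stand-ins for whatever your IdP's admin API returns, and actual revocation calls are left to that API.

```python
# Hypothetical quarterly OAuth access review (SOC 2 CC6.1, CC6.6).
# The grant dictionaries model records exported from your IdP's admin API;
# the "app_name" and "user" keys are illustrative assumptions.

def flag_unauthorized_grants(grants: list[dict], approved_catalog: set[str]) -> list[dict]:
    """Return OAuth grants to apps not in the Approved Software Catalog."""
    return [g for g in grants if g["app_name"] not in approved_catalog]

# Example review run with sample data.
catalog = {"Slack", "Zoom", "GitHub"}
grants = [
    {"user": "alice@example.com", "app_name": "Slack"},
    {"user": "bob@example.com", "app_name": "RandomPDFTool"},
    {"user": "carol@example.com", "app_name": "GitHub"},
]
flagged = flag_unauthorized_grants(grants, catalog)
# Each flagged grant then goes to the access review queue for revocation
# (or to the Section 6 self-reporting/approval path if there is a business need).
```

Flagged grants feed the review step, not automatic revocation: some will become Tier 2/3 approval requests rather than removals.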
Frequently Asked Questions
What should a shadow IT policy include?
A comprehensive shadow IT policy includes nine sections: purpose and scope, definitions, acceptable use requirements, a risk-tiered approval process, AI-specific provisions, amnesty and self-reporting, an exceptions process, enforcement and consequences, and review cadence. Industry-specific addenda should be added for HIPAA, GDPR, or SOC 2 as applicable.
Who owns the shadow IT policy?
The shadow IT policy is typically owned by the CISO or VP of IT, with input from IT Security, Compliance/Legal, HR, and Finance. The IT Governance team manages day-to-day policy operations including the approved catalog and approval process.
How often should a shadow IT policy be reviewed?
The policy itself should be reviewed annually. The Approved Software Catalog should be reviewed quarterly. Shadow IT governance metrics should be reported to leadership quarterly. Policy amendments should be communicated to all employees within 30 days.
Should a shadow IT policy include an amnesty clause?
Yes. Amnesty is critical for honest disclosure. Without amnesty, shadow IT goes further underground. A permanent, rolling amnesty program for self-reported tools maximizes the data available for governance decisions while maintaining employee trust.
How should shadow AI be addressed in the policy?
AI-specific provisions should cover: all AI tools are subject to the policy regardless of whether they are free or browser-based, sensitive data must not be entered into unapproved AI tools, AI-generated content for external use requires human review, approved enterprise AI alternatives must be listed, and EU AI Act documentation requirements must be met.
What enforcement approach works best for shadow IT?
Graduated enforcement: first occurrence is education (no penalty), second is formal warning with manager notification, third is HR escalation. Immediate blocking is reserved for tools processing regulated data. This approach balances security with employee trust and encourages self-reporting.