Asher Security
Artificial Intelligence Policy
When we talk about AI adoption inside organizations, it’s tempting to think only about opportunity — faster insights, automation, and innovation. However, with this growth comes heightened risk.
From unreliable outputs to escalating data privacy risks, the stakes are high. For this reason, many CISOs and security teams have found themselves racing to catch up with the rapid adoption of AI. A comprehensive AI security policy is no longer optional; it is essential. Without clear guidelines and controls, organizations face vulnerabilities that could compromise data integrity, erode customer trust, and cause lasting financial and reputational damage.
Looking to define artificial intelligence and its subcategories?
Download Our Artificial Intelligence Governance Playbook
Why You Need an AI Policy
More than 330 million organizations worldwide are using AI, and the global AI market is expected to reach $4.8 trillion by 2033.
The average cost of a data breach in the US is also expected to rise to $10.22 million, according to IBM research.
Clearly, as adoption of AI models increases, so does risk, and those risks are expensive.
While 49% of these organizations plan to invest in security, a concerning 13% have already reported breaches tied to AI models or applications. Even more alarming, IBM reports that 63% of companies don’t have an AI governance policy, leaving employees to experiment with powerful tools without clear guardrails.
This lack of structure creates risk on multiple levels:
- Data Privacy & Ownership: Employees may unknowingly upload sensitive or restricted information into public AI tools. Once data is shared, it may be used for training, stored indefinitely, or even exposed in ways that compromise confidentiality.
- Regulatory & Compliance Gaps: Industries regulated by GDPR, HIPAA, or PCI DSS can face serious violations if personal or financial data is shared with unvetted AI platforms. Fines and penalties for non-compliance can be devastating.
- Reputation & Trust: A single incident — such as customer data leaked via an open-source AI tool — can erode trust among clients, investors, and partners, undoing years of brand credibility.
- Shadow IT Expansion: Without clear guidance, employees often adopt their own AI tools (“shadow AI”), leading to inconsistent practices and creating blind spots for security teams (see the detection sketch after this list).
- Operational Risk: Different teams using different AI tools without standardization leads to inefficiency, inconsistent outputs, and confusion about which data sources are reliable.
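For security teams, one practical way to surface shadow AI is to scan outbound proxy or DNS logs for traffic to known AI services that are not on the approved list. The sketch below is a minimal illustration in Python; the domain names, log path, and log format are assumptions for the example, not a reference to any specific product.

```python
# Minimal sketch: flag outbound requests to known AI services that are
# not on the approved list. Domain names, the log path, and the log
# format ("<timestamp> <user> <domain> <url>") are illustrative assumptions.
from collections import Counter

APPROVED_AI_DOMAINS = {"gemini.google.com"}  # the licensed platform
KNOWN_AI_DOMAINS = {                         # watchlist (example entries)
    "gemini.google.com",
    "chat.openai.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def find_shadow_ai(log_lines):
    """Count hits to known AI domains that are not approved."""
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        domain = parts[2]
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            hits[domain] += 1
    return hits

with open("proxy.log") as f:  # assumed log location
    for domain, count in find_shadow_ai(f).items():
        print(f"shadow AI candidate: {domain} ({count} requests)")
```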
In short: AI without policy is AI without accountability.
An AI policy closes this gap by doing three things:
- Sets Clear Boundaries: Employees know what platforms are approved, what data can be used, and where the red lines are.
- Protects Sensitive Data: By enforcing licensed AI use, the organization maintains ownership and control of its data.
- Balances Innovation with Security: Instead of stifling AI use, policies enable safe adoption by aligning technology with risk appetite and compliance obligations.
Without these guardrails, organizations may find that the very tools meant to accelerate growth become the source of their biggest vulnerabilities.
Asher Security — Artificial Intelligence Policy
Below is an example of an AI policy. It is designed to guide companies in developing their own policies tailored to their specific needs.
1. Purpose
The purpose of the Asher Security AI Policy is to enable safe, productive use of artificial intelligence while protecting company and client data. The policy achieves this by requiring a formal vetting process for AI use that confirms a clear business case, ensures alignment with company AI principles and values, and treats AI vendors as third parties subject to security review and contractual protections.
The policy’s ultimate goals are to prevent unauthorized ingestion of company data into AI platforms, preserve the ability to remove company data from third-party services, and ensure licenses preserve the company’s data ownership.
2. Scope (who this applies to)
This policy applies to all employees and contractors of the company who view, collect, edit, communicate, or present data on behalf of the company or as part of client engagements. In short: any person using information in their duties must follow these rules.
3. Core Policy Statements (what is required)
- Approved Platform(s)
The policy governs the use of any third-party AI tools, such as ChatGPT, DALL-E, or Google Gemini. For example, Asher Security’s designated standard AI platform is Google Gemini. The company holds license agreements to ensure Asher Security’s data ownership, confidentiality, and stewardship requirements are contractually protected.
Employees may use the approved platform to ask questions and seek efficiency gains, but only under the terms of the license agreement.
What this means in practice: use the designated, licensed platform for business work; licensed terms preserve the company’s rights over data and prevent the platform from absorbing that data into its public training sets.
- Personal vs Business Use
AI may be used for personal purposes as permitted by the company’s Acceptable Use Policy, but any business-related usage must occur only on approved platforms under this AI Policy.
What this means in practice: personal experimentation on public tools is separate from business work; do not mix or upload company/client data in non-approved environments.
- Restricted Data and CEO Review
No data classified as “Restricted” may be uploaded into the approved AI platform (Google Gemini) without explicit CEO review and approval. Because the company handles sensitive client data, any restricted information with a legitimate business case for AI use must be escalated to the CEO for evaluation.
What this means in practice: tie AI permissions to your data classification. If data is sensitive, you cannot drop it into AI without a documented exception approved at the CEO level.
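As an illustration only, a pre-upload gate tied to data classification might look like the sketch below. The classification label and the exception register are assumptions for the example; the policy itself only requires that Restricted data never reach the platform without a documented, CEO-approved exception.

```python
# Illustrative sketch: block Restricted data from AI upload unless a
# documented CEO-approved exception exists. The label and the exception
# register are assumptions for the example, not prescribed by the policy.
RESTRICTED = "Restricted"

# Hypothetical register of documented exceptions, keyed by dataset ID.
approved_exceptions = {"client-report-2025-07": "CEO approval, 2025-07-15"}

def may_upload_to_ai(dataset_id: str, classification: str) -> bool:
    """Return True only if the dataset is cleared for the approved AI platform."""
    if classification != RESTRICTED:
        return True  # non-Restricted data follows the normal license terms
    # Restricted data requires an explicit, documented CEO exception.
    return dataset_id in approved_exceptions

assert may_upload_to_ai("marketing-faq", "Public")
assert not may_upload_to_ai("client-ssn-export", RESTRICTED)
assert may_upload_to_ai("client-report-2025-07", RESTRICTED)
```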
- Prohibited Platforms & Generative AI Approvals
No other AI platforms may be used for business purposes or with the company’s data.
Any proposed generative AI business case (i.e., requests to use a non-standard or additional AI platform) must be reviewed by the CEO so the organization can assign proper validation, responsibilities, and risk controls.
What this means in practice: the company maintains a whitelist-only posture; new platforms are allowed only through a documented approval and vendor risk review.
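A deny-by-default posture can also be expressed in tooling, not just prose. The sketch below assumes a hypothetical egress-control hook; the point is simply that anything not on the approved list is refused until a documented approval adds it.

```python
# Sketch of a deny-by-default egress decision for AI platforms. The
# allowlist contents and the decision hook are illustrative assumptions.
APPROVED_AI_PLATFORMS = {"gemini.google.com"}  # grows only via documented approvals

def egress_decision(domain: str) -> str:
    """Allow approved AI platforms; deny everything else by default."""
    return "ALLOW" if domain in APPROVED_AI_PLATFORMS else "DENY"

print(egress_decision("gemini.google.com"))  # ALLOW
print(egress_decision("claude.ai"))          # DENY until formally approved
```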
- Request & Approval Process
Requests to use non-standard AI tools must be submitted to the CEO and must include a detailed business case: who needs access, why an approved AI doesn’t meet the need, and what the platform will accomplish.
If the CEO approves, the company will conduct a vendor risk review and contract evaluation before granting access. All approved AI vendors must meet these conditions:
- protect company data
- not learn from the company’s intellectual property for the benefit of the platform
- allow the company to remain the data owner
- allow the company to remove and delete data upon request
What this means in practice: the requester must provide justification and acceptance of responsibility; vendor reviews and contractual terms are gatekeepers to safe usage.
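To keep requests reviewable and auditable, it can help to capture each one as a structured record. The shape below is hypothetical; the field names are assumptions, and the boolean gates mirror the vendor conditions listed above.

```python
# Hypothetical record for a non-standard AI tool request. Field names are
# illustrative; the gates mirror the policy's vendor conditions.
from dataclasses import dataclass, field

@dataclass
class AIToolRequest:
    requester: str
    platform: str
    business_case: str                      # why an approved AI doesn't meet the need
    users_needing_access: list[str] = field(default_factory=list)
    ceo_approved: bool = False              # CEO review outcome
    vendor_protects_data: bool = False      # vendor risk review
    no_training_on_company_ip: bool = False
    company_remains_data_owner: bool = False
    data_deletable_on_request: bool = False

    def may_grant_access(self) -> bool:
        """Access only after CEO approval and all vendor/contract gates pass."""
        return all([
            self.ceo_approved,
            self.vendor_protects_data,
            self.no_training_on_company_ip,
            self.company_remains_data_owner,
            self.data_deletable_on_request,
        ])
```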
- Training Requirement
Employees who will use AI as a business enablement tool must complete assigned AI training. This ensures users understand acceptable use, data limits, and how to operate within the policy guardrails.
Looking for the right security training for your organization?
Schedule a meeting with our experts to discuss effective training
4. Discipline
The policy notes that enforcement is sensitive and should be coordinated with HR and other risk stakeholders. It recommends annual review of the discipline section by business risk stakeholders and HR to ensure alignment with other company policies. The emphasis is on transferring accountability for misuse back to the user while recognizing enforcement should be fair, transparent, and consistent.
If you’re in the position of many leaders we work with, it might be too much for you to edit this and make it uniquely yours. It might be too much for you to even read it. If that’s you, stretched already by the responsibilities of your role, we invite you to reach out. That’s what we do – we help technology leaders be successful by supporting your clarity, focus, and objectives to reduce cybersecurity risks.
5. Approvals & Revision History
The policy template includes designated spaces for CEO approval and a revision history. Keep an auditable trail of approvals, requests, vendor reviews, and training completion records so that the company can demonstrate consistent enforcement and governance.
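One lightweight way to keep that trail is an append-only log of approvals, requests, vendor reviews, and training completions. The sketch below uses a JSON-lines file with assumed field names; any tamper-evident store would serve the same purpose.

```python
# Sketch: append-only audit trail as JSON lines. The file name and field
# names are assumptions; any tamper-evident store would work as well.
import json
from datetime import datetime, timezone

def record_event(event_type: str, actor: str, details: str,
                 path: str = "ai_policy_audit.jsonl") -> None:
    """Append one governance event (approval, request, vendor review, training)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        "actor": actor,
        "details": details,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_event("approval", "CEO", "Policy v1.1 approved: added training requirement")
record_event("training", "j.doe", "Completed AI acceptable-use training")
```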
Approvals example:
Chief Executive Officer: _________________________________________________________________
Date: ___________________
Revision History example:
| Effective Date | Version No. | Revised By | Reason |
| --- | --- | --- | --- |
| July 1, 2025 | 1.0 | Tony Asher | Created policy |
| July 30, 2025 | 1.1 | Tony Asher | Added training requirement |
Asher Security: AI Security Policy Template
Email us to get our free AI Policy template.
By leaning into emerging trends and leveraging the capabilities of generative AI, organizations can guide its growth in ways that are responsible, transparent, and value-driven. Taking a proactive stance on AI policy and governance not only mitigates risks but also ensures that AI evolves as a force for good—unlocking its full potential while helping us navigate the complexities of an AI-powered future.
Stay Ahead of the Curve
Enjoyed our blog?
Join our Cyber Collective to receive executive-ready insights, real-world case studies, and strategies designed to strengthen governance, compliance, and resilience in your organization.