AI governance and policy: what at minimum belongs in an AI policy?
An AI policy isn't a disclaimer; it's a working document that sets scope, roles and escalation paths. Here is what belongs in one at minimum, and the questions you can't afford to forget.
Policy exists to make decisions lighter
A good AI policy makes every individual decision simpler, not more complex. It describes which AI use is allowed by default, which isn't, and who decides when there's doubt. If every question still has to go through an AI steering committee, you don't have a policy; you have a meeting.
Law (GDPR, the AI Act, sector regulations) covers part of the ground, but most of the work lies in organisation-specific choices: which data you share with which vendor, who signs the contract, who reports incidents, and how you account for it all to the auditor or the board.
What at minimum belongs in an AI policy
A workable policy document doesn't need more than ten to fifteen pages. It has to be clear to any employee who picks it up, not an unreadable compromise between legal, the CIO and the CFO. What it needs to cover:
- Scope: which AI applications are covered (generative tools, agentic platforms, embedded copilots in SaaS).
- Permitted and forbidden data categories, with explicit statements on personal identifiers, medical data, salary and commercially sensitive info.
- Roles: who owns the AI policy, and what the CISO, the DPO and the process owners are each responsible for.
- Procurement: which checks upfront, which contract clauses are mandatory (DPA, data residency, no-training), who signs.
- Acceptance criteria for production: audit trail, observability, human-in-the-loop rules.
- Incident and escalation process, including who's informed when.
- Accountability: what sits in the annual review, who receives reporting.
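The checklist above can be turned into a quick completeness check on a draft. A minimal sketch, assuming a draft policy kept as a simple mapping of section names to text; the section names are illustrative, not a standard:

```python
# Hypothetical sketch: the minimum policy sections as a machine-checkable
# structure, so a draft can be tested for completeness before review.
REQUIRED_SECTIONS = {
    "scope",            # which AI applications are covered
    "data_categories",  # permitted and forbidden data
    "roles",            # policy owner, CISO, DPO, process owners
    "procurement",      # mandatory checks and contract clauses
    "acceptance",       # audit trail, observability, human-in-the-loop
    "incidents",        # reporting and escalation process
    "accountability",   # annual review, board reporting
}

def missing_sections(policy: dict) -> set:
    """Return the required sections a policy draft does not yet cover."""
    return REQUIRED_SECTIONS - policy.keys()

# A half-day writing session rarely produces all seven at once:
draft = {"scope": "...", "roles": "...", "incidents": "..."}
gaps = missing_sections(draft)
```

Running the check on a partial draft surfaces the sections still owed, which maps directly onto the agenda for the next writing session.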
Roles: not one executive, and not six
A common mistake is parking AI with one executive (the CIO or CFO) or splitting it across a steering committee of six. Better is a triangle: the CISO for security and data, the DPO for privacy and compliance, and an operational AI owner (often a board member with the digital portfolio) who can actually decide. Process owners stay accountable for their specific application, not for the policy itself.
Alignment with GDPR and the AI Act
For GDPR you align with your existing DPIA process: every new AI application gets a short impact assessment covering retention periods, processing purposes and legal bases. For the AI Act you determine per application which risk class it falls into (minimal, limited, high or prohibited) and what that means for documentation, transparency and human oversight.
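The per-application triage can be kept as a simple lookup from risk class to follow-up work. A hypothetical sketch: the obligations listed are a rough summary for illustration, not legal advice, and the function name is an assumption:

```python
# Hypothetical sketch: map an AI Act risk class to the follow-up work it
# triggers inside the organisation. Rough summary, not legal advice.
OBLIGATIONS = {
    "prohibited": ["do not deploy"],
    "high":       ["technical documentation", "human oversight",
                   "logging and audit trail", "conformity assessment"],
    "limited":    ["transparency notice to users"],
    "minimal":    ["no extra obligations beyond the general policy"],
}

def obligations_for(risk_class: str) -> list:
    """Look up the follow-up work for a risk class; fail loudly on typos."""
    if risk_class not in OBLIGATIONS:
        raise ValueError(f"unknown risk class: {risk_class}")
    return OBLIGATIONS[risk_class]
```

Failing loudly on an unknown class matters here: a typo that silently returns "no obligations" is exactly the kind of gap a policy exists to prevent.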
Incidents and learning
A policy without a clear incident process is a document without users. Define what counts as an incident (a hallucination that went out, a data leak through a prompt, an unexpected model output that had financial impact), who reports it, how quickly, and how the lessons feed back into policy and training. A quarterly report to the board is enough for most organisations.
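The incident definition above implies a minimal record format: what happened, who reported it, when, and what lesson feeds back. A sketch under assumed, illustrative field names:

```python
# Hypothetical sketch: a minimal incident record matching the process
# described above. Field names and the example are illustrative only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AIIncident:
    description: str            # e.g. a hallucination that went out
    reported_by: str            # role, not name, for the quarterly report
    occurred_at: datetime
    financial_impact: bool = False
    lesson: str = ""            # what changes in policy or training

incident = AIIncident(
    description="personal data pasted into an external prompt",
    reported_by="process owner finance",
    occurred_at=datetime(2024, 3, 12),
    lesson="add a pre-prompt data check to the training module",
)
```

Even a structure this small is enough to aggregate into the quarterly board report and to spot recurring lesson categories.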
How to start without drowning in meetings
Begin with a five-page draft: a half-day writing session with the DPO, the CISO and the AI owner. Stress-test that draft against three concrete cases that are live in your organisation; the gaps that surface lead to the real version. Plan a Quick Scan if you'd like to spar about which frameworks take the most work off your plate in your situation.
Curious what an AI coworker can do for your process?
Book a no-strings Quick Scan and explore the options.