How to build an AI governance policy your team will actually follow

By Roger Sartain

A founder I know woke up to a Slack message from her head of product: “Just so you know, half the engineering team has been using Claude to write code reviews for the past three months.” She wasn’t upset about the AI use — she was upset that nobody had told her, that there were no guidelines about what data could be fed into these tools, and that the company’s client NDA language almost certainly hadn’t anticipated employees pasting proprietary code into third-party AI systems. The gap between AI adoption and AI governance had grown silently, and by the time leadership noticed, the risk was already baked in.

This playbook walks through how to build an AI governance policy that your team will actually follow — one that channels AI use into productive, safe patterns rather than trying to lock it down entirely. The goal is a living document that keeps pace with how your team actually works, not a legal artifact that sits in a shared drive collecting dust.

We drew on the Partnership on AI’s six governance priorities for 2026, SHRM’s generative AI policy guidance, and the practical frameworks emerging from companies that have navigated this transition successfully. The approach here is designed for teams of 10-200 people — large enough to need structure, small enough that you can implement it without a dedicated compliance department.

Why most AI policies fail before anyone reads them

The typical AI policy gets written by legal counsel, approved by the C-suite, distributed via email, and immediately forgotten. This happens because the policy addresses the company’s liability concerns rather than the employee’s daily workflow. When someone on your team is trying to draft a client proposal and knows that AI could cut the work from four hours to forty-five minutes, a twelve-page policy document is an obstacle, not a guide.

The failure pattern is predictable. Legal writes a policy that’s comprehensive but impractical. Employees find it too restrictive or too vague to apply to their actual work. They start using AI tools anyway — quietly, on personal accounts, outside the company’s visibility. This is what researchers now call “shadow AI,” and it’s far more dangerous than sanctioned AI use because it happens without guardrails, without data protections, and without anyone tracking what’s being fed into these systems.

According to AIHR’s analysis of AI policy implementation, the policies that actually stick share three characteristics: they’re short enough to read in one sitting, they provide clear yes/no guidance for the most common use cases, and they’re updated at least quarterly as tools and capabilities change.

The three layers of an effective AI governance policy

Rather than building one monolithic document, the most effective approach structures AI governance into three layers that serve different audiences and update at different speeds.

Layer 1: The principles (changes annually)

This is the foundation — five to seven statements that express your organization’s values around AI use. These should be broad enough to survive rapid tool changes but specific enough to guide decision-making. For example: “We use AI to augment human judgment, not replace it in decisions that affect people’s employment, compensation, or career trajectory.” A principle like that immediately clarifies a huge category of use cases without needing to list every specific tool or scenario.

Keep this layer to one page. It should be something a new hire can internalize in their first week, and something a manager can reference when making a judgment call about a novel situation. If your principles document requires a table of contents, it’s too long.

Layer 2: The use-case matrix (changes quarterly)

This is where the practical value lives. Build a simple matrix that categorizes AI use into three buckets: encouraged, permitted with guidelines, and prohibited. Map your team’s actual workflows onto this matrix so people can find their specific situation quickly.

Encouraged uses might include: drafting initial versions of internal documents, brainstorming and ideation, summarizing meeting notes, researching publicly available information, generating code for internal tools with human review.

Permitted with guidelines might include: drafting client-facing communications (requires human review and editing before sending), analyzing anonymized data sets, creating marketing copy (requires brand voice review), using AI to prepare feedback or coaching conversations.

Prohibited uses should be concrete and non-negotiable: inputting client proprietary data or PII into any AI tool, using AI to make final hiring or firing decisions, generating content and representing it as original research, using personal AI accounts for company work where data retention policies can’t be enforced.

The matrix works because it respects how people actually make decisions under time pressure. Nobody is going to parse a twelve-page document when they’re trying to finish a deliverable. A three-column matrix gives them an answer in seconds.
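
If you want the matrix to live somewhere your tooling can read it rather than only on a slide, one option is to encode it as a simple lookup table. The sketch below is a minimal Python illustration: the bucket names mirror the lists above, the specific entries are examples to adapt, and the lookup helper is hypothetical rather than part of any particular tool.

# Minimal sketch: the three-bucket use-case matrix as a lookup table.
# Bucket names and entries mirror the examples above; adapt them to your own workflows.
USE_CASE_MATRIX = {
    "encouraged": [
        "drafting internal documents",
        "brainstorming and ideation",
        "summarizing meeting notes",
        "researching publicly available information",
        "generating code for internal tools (human review required)",
    ],
    "permitted_with_guidelines": [
        "drafting client-facing communications (human review before sending)",
        "analyzing anonymized data sets",
        "creating marketing copy (brand voice review)",
        "preparing feedback or coaching conversations",
    ],
    "prohibited": [
        "inputting client proprietary data or PII",
        "making final hiring or firing decisions",
        "presenting AI output as original research",
        "using personal AI accounts for company work",
    ],
}

def classify_use_case(use_case: str) -> str:
    """Return the bucket for a known use case; anything unmapped goes to the council."""
    for bucket, examples in USE_CASE_MATRIX.items():
        if use_case in examples:
            return bucket
    return "needs_council_review"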

Layer 3: The operational playbook (changes monthly)

This layer covers the tactical details — which specific tools are approved, how to configure them securely, what the data handling procedures are, and who to contact when something doesn’t fit neatly into the matrix. This is the document that your IT team maintains and updates as the tool landscape shifts.

Separating this from the principles and the matrix means you can update tool-specific guidance without reopening the entire policy discussion. When a new AI tool launches or an existing one changes its data practices, you update layer three without touching layers one and two.
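
One way to keep layer three cheap to update is to treat the approved-tool list as a small registry that IT owns. The snippet below is a hypothetical sketch, not a prescribed format; the tool names, fields, and dates are placeholders.

# Hypothetical layer-three tool registry. IT updates this entry when a tool is added,
# removed, or changes its data practices; layers one and two stay untouched.
APPROVED_TOOLS = {
    "enterprise-chat-assistant": {         # placeholder name, not a specific product
        "max_data_class": "internal",       # ties into the data classification below
        "data_agreement": "zero retention, no training on inputs",
        "owner": "IT",
        "last_reviewed": "2026-01",
    },
    "code-review-assistant": {
        "max_data_class": "internal",
        "data_agreement": "enterprise plan with retention controls",
        "owner": "Engineering",
        "last_reviewed": "2026-02",
    },
}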

Building the governance team without adding headcount

For teams under 200 people, a dedicated AI governance role is usually overkill. Instead, assemble a lightweight governance council — three to five people from different functions who meet monthly for 30 minutes. The composition should include someone from leadership (decision authority), someone from IT or engineering (technical reality check), someone from legal or compliance (risk awareness), and one or two people who are heavy AI users on the team (practical grounding).

The council’s job is narrow and specific: review the use-case matrix quarterly, address any edge cases that came up in the previous month, update the operational playbook as needed, and surface any emerging risks. This is not a committee that needs to approve every AI use — that approach guarantees shadow AI adoption. It’s a steering function that keeps the accountability structure current without becoming a bottleneck.

The monthly meeting should follow a standing agenda: what new AI uses have we observed on the team, do any of them fall outside our current matrix, what tool changes or updates do we need to address, and are there any incidents or near-misses to learn from. Keep minutes and share them with the full team — transparency about the governance process builds trust in the governance outcomes.

The data classification problem most policies ignore

The single biggest risk in workplace AI use is data leakage — feeding sensitive information into AI systems that store, learn from, or surface that data in ways you can’t control. Most policies address this by saying “don’t put sensitive data into AI tools,” which is about as useful as telling someone “don’t make mistakes.”

The practical solution is a simple data classification system that maps directly to your AI use-case matrix. Every type of data your team handles falls into one of three categories: open (can be used freely with AI tools), internal (can be used with approved enterprise AI tools that have appropriate data agreements), and restricted (never enters any AI system under any circumstances).

Open data includes publicly available information, general industry knowledge, and internal documents that contain no proprietary or personal information. Internal data includes strategy documents, financial projections, internal communications, and anonymized performance data. Restricted data includes client PII, proprietary algorithms or IP, employee health or compensation data, and anything covered by NDA or regulatory requirements.

Train your team on this classification system in a single 30-minute session. Give them a one-page reference card they can keep visible at their workstation. When someone is about to paste something into an AI tool, the question they should ask is simple: “What classification is this data?” If the answer is clear, the decision is clear.
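
If you want the classification to be checkable rather than purely honor-system, the same rules can be expressed as a tiny pre-flight check. This is a minimal sketch under the three tiers described above; the tool-tier names and the pairing rules are assumptions about how a team might wire it up.

# Minimal sketch of the three-tier classification check.
# "any_ai_tool" and "approved_enterprise_tool" are assumed tier names, not a standard.
ALLOWED_TOOL_TIERS = {
    "open": {"any_ai_tool", "approved_enterprise_tool"},
    "internal": {"approved_enterprise_tool"},
    "restricted": set(),  # never enters any AI system, under any circumstances
}

def can_use_with_ai(data_class: str, tool_tier: str) -> bool:
    """Return True if data of this classification may be fed into this tool tier."""
    return tool_tier in ALLOWED_TOOL_TIERS.get(data_class, set())

# Examples: anonymized internal data into an approved enterprise tool is fine;
# restricted data is blocked everywhere.
assert can_use_with_ai("internal", "approved_enterprise_tool")
assert not can_use_with_ai("restricted", "approved_enterprise_tool")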

Making compliance easy and violation recovery painless

The policies that survive contact with reality are the ones that make the right behavior easier than the wrong behavior. This means providing approved AI tools with enterprise-grade data agreements already in place, pre-configuring those tools with appropriate guardrails, and making the process of accessing sanctioned tools simpler than the process of signing up for a personal account.

On the violation side, build a culture of reporting rather than a culture of punishment. The first time someone inadvertently feeds restricted data into an AI tool, the response should be: document the incident, assess the actual risk, take any necessary remediation steps, and update the training or the matrix to prevent recurrence. If people fear punishment for honest mistakes, they’ll hide those mistakes — and hidden data leakage is far more dangerous than reported data leakage.

Create a simple incident reporting channel — a dedicated Slack channel or email alias works fine. Make it clear that reporting an AI-related concern is always the right call, even if the person reporting is the one who made the mistake. The goal is a system where your team’s natural trust in the organization extends to AI governance, rather than a system where governance feels like surveillance.
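
If your team already works in Slack, the reporting channel can be backed by an incoming webhook so that filing a report is one form submission or function call away. The sketch below assumes you have created an incoming webhook for a dedicated channel; the URL, names, and message format are placeholders, and nothing beyond Slack's standard webhook payload is implied.

import requests  # Slack incoming webhooks accept a simple JSON payload over HTTPS

# Placeholder webhook URL for a dedicated governance channel; replace with your own.
WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def report_ai_incident(reporter: str, summary: str, data_involved: str) -> None:
    """Post an AI-related incident or near-miss to the governance channel."""
    message = (
        f"AI governance report from {reporter}\n"
        f"Summary: {summary}\n"
        f"Data involved: {data_involved}\n"
        "Reporting is always the right call, even for your own mistake."
    )
    requests.post(WEBHOOK_URL, json={"text": message}, timeout=10)

# Example report: the kind of honest near-miss the policy should make easy to surface.
report_ai_incident(
    reporter="a team member",
    summary="Pasted part of a client contract into a personal chatbot account",
    data_involved="restricted (client NDA material)",
)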

Rolling out the policy without creating a backlash

The rollout matters as much as the content. A policy that appears to restrict freedom will generate resistance, even if the actual restrictions are reasonable. Frame the policy as an enablement tool — “here’s how to use AI confidently and safely” rather than “here’s what you can’t do with AI.”

Start with a team-wide session that takes no more than 45 minutes. Walk through the principles (5 minutes), demonstrate the use-case matrix with real examples from your team’s workflow (20 minutes), cover the data classification system (10 minutes), and leave time for questions (10 minutes). Record the session for anyone who can’t attend. Follow up with the one-page reference card and a link to the full operational playbook.

Then — and this is the step most companies skip — schedule a 30-day check-in. Ask the team: What situations came up that the policy didn’t clearly address? What felt overly restrictive? What felt too loose? Use the answers to update the use-case matrix. This feedback loop signals that the policy is a living document shaped by the team’s actual experience, not a mandate imposed from above.

The organizations that get AI governance right in 2026 will be the ones that treat it as an ongoing operating system rather than a one-time compliance exercise. The tools are changing too fast for any static policy to remain relevant. Build the structure that adapts, staff it with people who understand both the technology and the workflow, and make the default path the safe path. Everything else is just documentation.

Roger Sartain is a senior executive, strategist, and contributor at Mindset with degrees in Electrical Engineering and Business Administration. He writes about leadership, organizational design, and the operational decisions that determine whether teams and businesses scale or stall.