Use this guide to introduce AI in a way that is practical, calm, and trust-aware.
Suggested Opening
AI is not one decision. It is a category of tools and workflows that can help our team with drafting, summarizing, research, and internal process support. We should approach it the way we approach any new operational capability: begin with low-risk uses, protect sensitive data, and keep human review where trust matters most.
What To Emphasize
- We are not proposing unrestricted use.
- We are starting with low-risk internal work.
- We will define what data cannot be entered.
- We will require human review for important outputs.
- We will learn from one pilot before expanding.
Questions The Board May Ask
What problem are we trying to solve?
Name one operational pain point, such as staff capacity, delayed follow-up, reporting burden, or repetitive communication tasks.
What are the biggest risks?
Privacy, accuracy, bias, over-reliance, and reputational risk if AI is used externally without review.
How will we control those risks?
Start small, prohibit sensitive data in public tools, require human review, and choose a pilot that does not directly affect donors, clients, or the public.
How will we know if it is working?
Track measurable gains: time saved, backlog reduction, faster turnaround, more consistent outputs, or increased staff capacity.
Recommended First Board Decision
Approve a 30-day low-risk pilot plus a short acceptable-use policy draft, then review results before any broader rollout.