Practical Kit Preview

Acceptable Use Starter

A plain-language internal starting point for staff guardrails, sensitive-data boundaries, and review expectations.


Review the content here, then download the editable Word version if you want to adapt it for your organization.

This is a simple starting point, not legal advice. Adapt it to your organization's context and risk level.

Purpose

AI tools may be used to support staff productivity (drafting, summarizing, brainstorming, and other low-risk internal work) when the use aligns with the organization's mission, privacy, and quality expectations.

Approved Early Uses

  • Drafting internal notes, outlines, and first-pass communications
  • Summarizing meetings or long documents that do not contain restricted data
  • Brainstorming options for planning, operations, and content
  • Creating first drafts that will be reviewed by staff before use

Prohibited Data

Do not enter the following into public AI tools unless the use is explicitly approved and the data is handled in an appropriately secured environment:

  • Donor personal information
  • Client or participant information
  • Protected health information
  • Personnel records
  • Legal or highly confidential organizational matters

Human Review

  • Staff remain responsible for checking the accuracy of AI output.
  • AI-generated content must be reviewed by a person before being shared externally.
  • High-stakes decisions must not be based solely on AI output.

Disclosure And Approval

  • Staff should tell a manager before testing a new tool for recurring workflow use.
  • Public-facing or donor-facing uses require manager approval before launch.
  • Any use involving sensitive data requires leadership review first.

Escalation

If a staff member is unsure whether a task or data type is appropriate, the answer is "pause and ask."