Use this checklist to decide whether your organization is ready for a broader AI rollout or should stay with limited experimentation.
Leadership
- Do we have one leader who owns AI decisions and can say yes, no, or not yet?
- Have we identified one business problem worth testing instead of chasing many ideas at once?
- Do we know what success would look like for a first pilot?
Staff and Workflow
- Which tasks are repetitive, intern-level, or consistently not getting done?
- Which staff members are already experimenting informally?
- Where would AI save time without removing necessary human judgment?
Data
- Have we named the kinds of data staff should never paste into public AI tools?
- Do we know which workflows involve donor, client, health, HR, or legal information?
- Do we know whether the tools we plan to use carry free-tier, paid, or enterprise-grade privacy protections?
Governance
- Do we have a short acceptable-use policy or draft?
- Is human review required before anything reaches donors, clients, or the public?
- Do staff know when they need permission before trying a new workflow?
Pilot Readiness
- Is the first use case low operational risk?
- Can we test it in a way that does not affect public trust if it fails?
- Can we measure time saved, quality improved, or backlog reduced?
Recommendation
If you answered "no" to several governance and data questions, stay with low-risk experimentation first. If most answers are "yes," choose one pilot and test it carefully for 30 days.