Frameworks
Three frameworks for better AI decisions
Use these frameworks in order: identify a worthwhile use case, judge the operational risk, and decide what data can safely be involved.
Recommended first stop
Choose one safe, worthwhile task before you try tools
When you finish this page, you should know which task is worth testing first and what data boundaries need to stay in place.
Step 1
Choose a use case
Use the Opportunity Filter to find work that is worth improving first.
Step 2
Check operational risk
Use the risk levels to judge how much the AI touches systems, staff, or external audiences.
Step 3
Check data sensitivity
Use the data framework and privacy reference to decide what information can safely be involved.
Opportunity Filter
Use this first to decide whether a workflow is a strong candidate for AI at all. It helps teams focus on work that is repetitive, delegable, or consistently being missed.
Look for work that follows the same pattern every day or every week.
What tasks are repeated so often that they should be systematized?
If you could hand a task to a capable intern, it is often a great AI candidate.
What process could be delegated if someone had clear instructions?
Some valuable tasks never happen because there is no time or bandwidth.
What important work keeps slipping through because your team is overloaded?
Operational Risk Levels
This framework answers one question: what does the AI do, and who could be affected by it? Move up the levels as systems become more autonomous or more exposed to staff, donors, clients, or the public.
Operational risk is separate from data sensitivity. A Level 1 use case can still be a bad idea if people paste confidential or regulated information into the tool.
Use the Data Sensitivity Check below to vet the information involved, even when the operational risk is low.
Level 1: AI assists personal productivity without direct system access. Operational risk is very low, but data risk still exists if sensitive information is entered.
Examples: Drafting emails, Brainstorming ideas, Analyzing data you paste in
Level 2: AI processes incoming information and returns condensed output.
Examples: Summarizing inbound requests, Prioritizing messages, Flagging important items
Level 3: AI communicates with or automates tasks for your internal team.
Examples: Internal chatbots, Compliance reminders to staff, Automated internal task routing
Level 4: AI directly interacts with donors, clients, families, or the public.
Examples: AI phone calls, Automated donor emails, Public-facing community chatbots
- Start at Level 1 and build confidence before moving up.
- Aim for the lowest risk level that still delivers real operational value.
- Treat data sensitivity as a separate risk lens at every level.
- Very low operational risk does not mean sensitive data is safe to paste.
- Release slowly when systems touch end users.
- Protect your most critical relationships with extra caution.
Data Sensitivity Check
This framework answers a different question: what information is entering the tool? The more sensitive the data, the more careful you should be about both the tool and the deployment setup.
Public: Information that is already published or intended for broad external sharing.
Use with AI: Lowest data sensitivity. You still need normal review for accuracy and tone, but privacy concerns are limited.
Examples: Published website copy, Public reports, Approved marketing language
Internal: Routine internal information that is not public but would not cause major harm if disclosed.
Use with AI: Use approved tools and avoid unnecessary detail. A good fit for meeting notes, draft plans, and general operations work.
Examples: Internal notes, Draft agendas, Process documentation
Confidential: Sensitive organizational information that could damage trust, finances, or operations if exposed.
Use with AI: Use stronger privacy settings or approved plans only, and minimize what you share. Include names or details only when truly necessary.
Examples: Donor records, Financial data, HR issues, Board materials
Regulated: Protected client, case, legal, health, or other regulated information that requires the highest caution.
Use with AI: Do not use general-purpose tools by default. Proceed only with explicit approval, the right controls, and a clear compliance basis.
Examples: Case files, Protected health data, Student records, Legal matters
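If your team wants this check in front of people at the exact moment they are about to paste something into a tool, even a tiny script can surface the rules. Here is a minimal sketch in Python: the level names and handling rules come straight from the framework above, but the helper itself (the `HANDLING_RULES` dictionary and `preflight` function) is illustrative, not an official tool.

```python
# Pre-flight check: look up the handling rule for a data sensitivity level
# before anything is pasted into an AI tool. Levels and rules mirror the
# framework above; the code itself is an illustrative sketch, not policy.

HANDLING_RULES = {
    "public": "Lowest sensitivity. Review output for accuracy and tone.",
    "internal": "Use approved tools and avoid unnecessary detail.",
    "confidential": "Approved plans with stronger privacy settings only; "
                    "minimize what you share.",
    "regulated": "No general-purpose tools by default. Requires explicit "
                 "approval, the right controls, and a compliance basis.",
}

def preflight(level: str) -> str:
    """Return the handling rule for a sensitivity level, or fail loudly."""
    try:
        return HANDLING_RULES[level.lower()]
    except KeyError:
        raise ValueError(
            f"Unknown sensitivity level: {level!r}. "
            f"Expected one of {sorted(HANDLING_RULES)}"
        )

if __name__ == "__main__":
    print(preflight("confidential"))
```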
Privacy & Deployment Options
This is a supporting reference, not a separate framework. Use it after the data sensitivity check to decide what type of tool plan or deployment environment fits the information involved.
| Option | What It Means | Examples |
|---|---|---|
| Free tier | Data may be used to improve models. Avoid entering sensitive information. | Claude free, ChatGPT free, Gemini free |
| Paid / Pro plans | Better privacy protections. You can opt out of data usage for model training. Data may still be used for other purposes like safety and moderation. | Claude Pro, ChatGPT Plus, Gemini Advanced |
| Enterprise plans | Zero data retention options, stronger compliance controls, and admin features. | Claude Enterprise, ChatGPT Enterprise |
| Self-hosted (cloud) | Models run in your cloud environment so data does not reach the model provider directly. | AWS Bedrock, Azure OpenAI, Google Cloud Vertex AI |
| Local / On-device | Models run on your own machine. Highest privacy but usually slower performance. | LM Studio, Ollama |
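To make the last row concrete, here is a minimal sketch of the local option in Python. It assumes Ollama is installed and running on its default port (11434) and that a model has already been pulled; the `llama3.2` name is an assumption, so substitute whatever model you have locally. Because everything runs on your own machine, the prompt never leaves it.

```python
import requests  # standard HTTP client; Ollama exposes a local REST API

# Assumption: Ollama is running locally on its default port and the
# "llama3.2" model has been pulled (ollama pull llama3.2).
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",  # assumed model name; use any pulled model
        "prompt": "Draft a two-sentence volunteer thank-you note.",
        "stream": False,      # return one complete JSON response
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])  # the generated text
```

The same pattern applies to the self-hosted cloud row, except the endpoint lives inside your own cloud account instead of on your laptop.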
Agentic AI: What's coming next
This is future-facing context so leaders can see where adoption is headed without confusing it with today's core decision frameworks.
- Agentic AI can handle multi-step workflows end to end, not just answer questions.
- AI phone systems that collect information and route calls are early examples.
- We will see coordinated teams of AI agents for complex operations and communications.
- You do not need to implement agentic systems right now; awareness is enough for better strategy.
Recommended next
Try one prompt for the task you chose here
Keep the first test simple and use a non-sensitive example so you can learn the workflow safely.
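For example, a first test for a Level 1, non-sensitive task might be a single prompt like the one below. The details are placeholders, not a required template.

```text
Draft a friendly follow-up email to a volunteer who attended our open house
last week. Keep it under 120 words, thank them for coming, and invite them
to the next orientation. Do not include any real names or contact details;
I will add those myself.
```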