AI for Grantmakers: Opportunities and Risks
Where AI accelerates grantmaking, and where careful human oversight remains essential.
AI speeds up grantmaking by summarising, checking and drafting, but judgement and accountability must remain with people.
- Best used for due diligence, triage and report drafting.
- Requires transparent policies and audit trails.
- Safest with a human‑in‑the‑loop approach, as in Plinth.
High‑value opportunities
Target repetitive, text‑heavy tasks for immediate gains.
- Eligibility screening and anomaly detection.
- Summaries of long answers to support reviewers.
- Draft feedback and impact reports for human editing.
Key takeaway: focus AI on speed and consistency, not final decisions.
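The division of labour above can be sketched in a few lines: automated checks flag issues and recommend a route, but the decision field always stays with a person. This is a minimal illustration only; the field names and the threshold are hypothetical, not Plinth's actual API or rules.

```python
# Minimal sketch of AI-assisted triage with a human in the loop.
# Field names and the amount threshold are illustrative assumptions.

def triage(application: dict) -> dict:
    """Screen an application and recommend a route; never auto-reject."""
    flags = []
    if application.get("requested_amount", 0) > 50_000:  # example ceiling
        flags.append("amount above programme ceiling")
    if not application.get("registered_charity", False):
        flags.append("registration status unconfirmed")
    # Automation only recommends; the final decision stays with a person.
    return {
        "recommendation": "review" if flags else "fast-track",
        "flags": flags,
        "decision": "pending human review",
    }

result = triage({"requested_amount": 80_000, "registered_charity": True})
```

Whatever the recommendation, every application ends in the same state, "pending human review", which keeps approvals and rejections with case officers.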
Managing risks responsibly
Address bias, privacy and explainability from the outset.
- Keep humans in control of approvals and rejections.
- Log prompts, outputs and edits for audit.
- Store data securely; do not train third‑party models on sensitive data.
Key takeaway: policy and controls matter as much as the model.
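The audit-trail control above can be made tamper-evident by chaining each log record to the previous one, so any later alteration breaks the chain. This is a sketch under stated assumptions, not a description of how any particular product stores its logs; the record fields are illustrative.

```python
# Sketch of a tamper-evident audit log: each record stores the hash
# of the previous record, so edits to history are detectable.
import datetime
import hashlib
import json

def append_audit_record(log: list, prompt: str, output: str,
                        editor: str, edit: str) -> dict:
    """Append one prompt/output/edit record, chained to the last entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
        "editor": editor,
        "edit": edit,
        "prev_hash": prev_hash,
    }
    # Hash the record itself (with a stable key order) to seal it.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

log: list = []
append_audit_record(log, "Summarise answer 3", "Draft summary text",
                    "case officer", "tightened wording")
append_audit_record(log, "Draft feedback letter", "Draft letter text",
                    "case officer", "corrected grant amount")
```

In practice the log would be written to append-only storage; the point here is simply that prompts, outputs and human edits are all recorded together, with ordering that can be verified.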
Governance and transparency
Tell applicants when AI is used and how decisions are made.
- Public statements on scope, safeguards and appeals.
- Regular review of outputs and error rates.
- Proportionate DPIAs (data protection impact assessments) for higher‑risk features.
Key takeaway: openness builds trust across the sector.
FAQs
Will AI introduce bias?
Bias is possible; mitigation includes diverse training data, monitoring and human review.
Is data safe?
Plinth encrypts data and does not use your information to train shared models.
Can AI replace case officers?
No. It reduces admin so officers focus on support and strategy.