AI for Grantmakers: Opportunities and Risks
Where AI accelerates grantmaking and where human oversight remains essential. A practical guide for UK funders navigating due diligence, bias and governance.
AI is already reshaping how grants are made in the UK, whether funders have chosen to adopt it or not. On the applicant side, the 2025 Charity Digital Skills Report found that 76% of charities are now using AI tools, up from 61% the previous year, with many using generative AI to draft funding applications. Some foundations have reported application volumes rising by 50-60%, and in extreme cases by 100-400%, partly driven by AI lowering the barrier to apply (ACF, Foundations in Focus 2025). On the funder side, the Technology Association of Grantmakers' 2024 survey of over 350 grantmaking organisations found that 81% report some degree of AI usage, though enterprise-wide adoption stands at just 4%.
This creates an urgent question for grantmakers: how do you capture the genuine efficiency gains that AI offers, from faster due diligence to automated report drafting, while managing real risks around bias, data protection and the erosion of relational grantmaking?
This guide sets out where AI delivers the most value for funders, where the risks are highest, and what governance and safeguards look like in practice. It draws on published guidance from the Charity Commission, IVAR, ACF and the ICO, and references real-world implementations from UK foundations.
Where does AI add the most value for grantmakers?
The strongest use cases for AI in grantmaking are repetitive, text-heavy tasks where consistency and speed matter more than nuanced judgement. Three areas stand out.
Due diligence and verification. AI can check charity registration status against the Charity Commission and Companies House, download and read annual accounts to extract financial metrics, scan uploaded policies for required clauses (safeguarding, conflicts of interest, whistleblowing) and flag documents that are missing or out of date. Tasks that previously took an assessor 2-3 hours per application can be reduced to 20-30 minutes of human review time. A minimal verification sketch follows the comparison table below.
Application triage and assessment support. Rather than reading every word of every application, assessors can use AI to summarise long narrative answers, extract evidence against specific assessment criteria and highlight where applications meet or fall short of eligibility requirements. This does not replace panel discussion, but it significantly reduces the cognitive load of reading hundreds of applications per funding round.
Report generation and monitoring. AI can compare progress reports against original application commitments, extract outcome data from narrative updates and draft impact reports for trustees, donors or stakeholders. According to ACF's Foundations in Focus 2025 report, UK foundations distributed a record £8.24 billion in grants in 2023-24, making efficient reporting across large portfolios increasingly important.
| Task | Without AI | With AI assistance |
|---|---|---|
| Due diligence per application | 2-3 hours | 20-30 minutes |
| Assessors needed per fund | 3-5 | 1 (with AI pre-screening) |
| Applicant feedback | Generic or none | Personalised, specific |
| Impact reporting | Annually, manual | On demand, automated drafts |
| Policy document review | Manual reading | Automated clause detection |
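To make the verification step concrete, here is a minimal Python sketch of the pattern described above. The Charity Commission does publish a register API, but the base URL, path, authentication header and response field names shown here are illustrative assumptions rather than documented calls; check the Commission's published API documentation before relying on any of them.

```python
import requests

# The Charity Commission publishes a register API, but the base URL, path,
# auth header and field names below are illustrative assumptions; check
# the published API documentation before relying on them.
API_BASE = "https://api.charitycommission.gov.uk/register/api"  # assumed
API_KEY = "your-subscription-key"  # assumed API-key auth scheme

REQUIRED_POLICIES = {"safeguarding", "conflicts of interest", "whistleblowing"}


def check_registration(charity_number: str) -> dict:
    """Fetch a charity's register entry and reduce it to simple flags."""
    resp = requests.get(
        f"{API_BASE}/charitydetails/{charity_number}",  # hypothetical path
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    record = resp.json()
    return {
        "registered": record.get("charity_status") == "Registered",  # assumed fields
        "accounts_overdue": record.get("accounts_overdue", False),
    }


def flag_missing_policies(uploaded_policy_names: set[str]) -> set[str]:
    """Return required policies the applicant has not uploaded."""
    normalised = {name.lower() for name in uploaded_policy_names}
    return {p for p in REQUIRED_POLICIES if p not in normalised}
```

The point is the shape of the workflow, not the specific calls: fetch authoritative records, reduce them to a handful of flags, and leave the judgement on those flags to a human assessor.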
What are the main risks of using AI in grantmaking?
The risks fall into four categories, each requiring different mitigation strategies.
Algorithmic bias. AI models learn from historical data, which may reflect existing patterns of who has traditionally received funding. If a model is trained on past successful applications, it may systematically favour organisations that match historical profiles, potentially disadvantaging grassroots groups, organisations led by people from marginalised communities, or those working in areas that have historically been underfunded. The CDEI's 2020 review into algorithmic decision-making found that bias in training data is the most common source of unfair outcomes in automated systems (GOV.UK, Review into Bias in Algorithmic Decision-Making).
Loss of relational judgement. IVAR has warned that AI-generated application text no longer necessarily represents what a person or organisation knows or can deliver, undermining funders' ability to assess values alignment, passion and delivery capacity. Over-reliance on AI summaries risks reducing grantmaking to a box-ticking exercise rather than the relational practice that trust-based philanthropy advocates.
Data protection. Under UK GDPR, Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. Grant funding decisions can fall within this scope. The ICO's guidance on AI and data protection requires organisations to conduct Data Protection Impact Assessments (DPIAs) for AI applications that pose a high risk to individuals' rights and freedoms.
Opacity and explainability. Many AI systems operate as black boxes. If an AI flags an application as high-risk or low-scoring, both assessors and applicants need to understand why. Without explainability, there is no meaningful accountability.
How should grantmakers approach AI bias?
Bias is not a reason to avoid AI altogether, but it does require active and ongoing mitigation. Practical steps include the following.
Audit training data and outputs regularly. Before deploying any AI tool in assessment, examine what data it was trained on and whether that data reflects the diversity of organisations you want to fund. After deployment, monitor outcomes by geography, organisation size, beneficiary demographics and protected characteristics to detect patterns of disadvantage.
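As an illustration, outcome monitoring can start as a simple disparity check over your decision data. The sketch below assumes decisions are exported with subgroup columns; the file name, column names and thresholds are hypothetical, and the 80% cut-off echoes the common "four-fifths" heuristic, which is a prompt for review, not a legal standard.

```python
import pandas as pd

# One row per funding decision; 'funded' is 1 if the grant was awarded.
# File name, column names and thresholds are hypothetical.
applications = pd.read_csv("decisions.csv")

overall = applications["funded"].mean()
for dimension in ["region", "income_band", "leadership"]:
    rates = applications.groupby(dimension)["funded"].agg(["mean", "count"])
    # Flag subgroups with a success rate well below the overall rate,
    # ignoring tiny subgroups where one decision swings the percentage.
    flagged = rates[(rates["mean"] < 0.8 * overall) & (rates["count"] >= 20)]
    if not flagged.empty:
        print(f"Review needed on '{dimension}':\n{flagged}\n")
```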
Use AI for evidence extraction, not scoring. The safest configuration is to use AI to surface relevant information from applications (for example, extracting where an applicant describes their safeguarding approach) rather than to assign quality scores. This keeps judgement with human assessors while still saving time.
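A minimal sketch of what extraction-only prompting can look like, where `call_llm` is a stand-in for whichever model API you use; the prompt wording is illustrative, not a tested template.

```python
import json

# Extraction-only prompt: asks for verbatim quotes per criterion, never a
# score or recommendation. Wording is illustrative, not a tested template.
EXTRACTION_PROMPT = """For each criterion below, quote the passages from the
application that address it, or write "NOT ADDRESSED". Do not score, rank
or recommend.

Criteria:
{criteria}

Application:
{application}

Respond as JSON, mapping each criterion to a list of quoted passages."""


def extract_evidence(application_text: str, criteria: list[str], call_llm) -> dict:
    """Return verbatim evidence per criterion for a human assessor to review.

    `call_llm` is a hypothetical stand-in for your model API; real use
    needs output validation before json.loads.
    """
    prompt = EXTRACTION_PROMPT.format(
        criteria="\n".join(f"- {c}" for c in criteria),
        application=application_text,
    )
    return json.loads(call_llm(prompt))
```

Because the output is verbatim quotation rather than a score, an assessor can check each extract against the source application in seconds, and the tool never ranks one applicant above another.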
Involve diverse reviewers. AI can reduce the number of human reviewers needed, but it does not remove the need for them. Panels should still include people with lived experience of the communities being served. The Technology Association of Grantmakers' responsible AI framework recommends that AI outputs are always reviewed by at least one person with contextual knowledge of the applicant community.
Be transparent about limitations. If AI is used at any stage of the assessment process, disclose this to applicants and explain what role it plays. Transparency builds trust even when the technology is imperfect.
What does the Charity Commission say about AI?
The Charity Commission set out its position on charities and AI in a blog post published in April 2024. Its position is clear: trustees remain responsible for decision-making, and this responsibility cannot be delegated to AI or based on AI-generated content alone.
The Commission does not prohibit AI use. Instead, it expects charities and funders to ensure that human oversight is in place to prevent material errors, and it recognises that the human touch is key to how many charities operate and interact with beneficiaries. The Commission has stated it does not currently anticipate producing standalone AI guidance, instead encouraging trustees to apply existing governance principles to new technologies.
For grantmakers, this means three things in practice:
Document your AI policy. Record what AI tools are used, where in the process they operate, what decisions they inform, and how human oversight is maintained. The Charity Digital Skills Report 2025 found that the proportion of charities with an AI policy has tripled in a year, from 16% to 48%.
Maintain audit trails. Every AI-generated output (a due diligence summary, an assessment recommendation, a draft feedback letter) should be logged alongside the human decision that followed. If a trustee board is challenged on a funding decision, the audit trail should show that a person reviewed the evidence and applied judgement.
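An audit record can be as simple as one JSON line per AI-assisted decision. The field names in this sketch are illustrative, and a production system would add access controls and retention rules on top.

```python
import datetime
import json

AUDIT_LOG = "ai_audit_log.jsonl"  # one JSON record per line; name is illustrative


def log_decision(application_id: str, ai_output: str, reviewer: str,
                 human_decision: str, rationale: str) -> None:
    """Append one auditable record pairing an AI output with the human decision."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "application_id": application_id,
        "ai_output": ai_output,            # what the tool produced
        "reviewer": reviewer,              # who exercised judgement
        "human_decision": human_decision,  # what was actually decided
        "rationale": rationale,            # why, in the reviewer's own words
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```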
Review regularly. AI tools change rapidly. Commit to reviewing your AI use at least annually, assessing accuracy, fairness and whether the tools still align with your charitable purposes.
What does UK GDPR require for AI-assisted grantmaking?
UK GDPR creates specific obligations when AI is used in decisions about individuals or organisations. The most relevant provisions are as follows.
Article 22: automated decision-making. Individuals have the right not to be subject to purely automated decisions that significantly affect them. In grantmaking, this means AI should inform but not make funding decisions. A human must review and approve or reject every application, not simply rubber-stamp an AI recommendation.
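One way to make this concrete in a system design is a hard gate that refuses to finalise any decision without a named human reviewer. The sketch below is a hypothetical pattern, not a compliance guarantee; the data model and field names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class FundingDecision:
    application_id: str
    ai_recommendation: str                # advisory input only
    human_reviewer: Optional[str] = None  # set when a person reviews
    human_outcome: Optional[str] = None   # "approve" or "decline"


def finalise(decision: FundingDecision) -> str:
    """Refuse to finalise any decision that lacks a recorded human review."""
    if decision.human_reviewer is None or decision.human_outcome is None:
        raise ValueError(
            "No funding decision without a named human reviewer (UK GDPR Art. 22)."
        )
    return decision.human_outcome
```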
Data Protection Impact Assessments. The ICO requires DPIAs for processing that is likely to result in high risk to individuals. If your AI system processes personal data from applications, for example beneficiary demographics, financial information about trustees, or safeguarding disclosures, a DPIA is advisable. This should assess the necessity and proportionality of the processing, the risks to data subjects and the measures in place to mitigate those risks.
Transparency and explainability. Applicants have the right to understand how their data is being processed. Your privacy notice should explain that AI tools are used, what they do and how human oversight is maintained. If an applicant asks why their application was unsuccessful, you should be able to provide reasons that go beyond "the AI scored it low."
Data minimisation and storage. Only process the data you need for the assessment, store it securely, and delete it when it is no longer required. If you use third-party AI services, ensure your data processing agreement prevents the provider from using applicant data to train its own models.
How should funders handle AI-generated applications?
The rise in AI-generated applications presents a distinct challenge. IVAR's research highlights that funders can no longer reliably distinguish between an organisation's authentic voice and machine-generated text. Well-resourced organisations may gain a disproportionate advantage through access to paid AI tools and specialist prompt-engineering skills, potentially widening rather than narrowing existing inequities.
Several approaches are emerging across the UK funding sector.
Simplify application forms. Rather than trying to detect AI-generated text (which is increasingly unreliable), reduce the emphasis on polished prose. Ask for specific, verifiable information: budget breakdowns, named delivery partners, measurable outcomes, referees. AI can polish language, but it cannot fabricate a partnership agreement or a track record that withstands independent verification.
Shift weight to due diligence. If applications become easier to write, the due diligence stage becomes more important. Cross-referencing application claims against Charity Commission data, published accounts and direct conversations with applicants provides a more reliable signal than narrative quality alone.
Use conversation-based assessment. Some funders are supplementing written applications with phone or video conversations, which are harder to script with AI and reveal more about an organisation's capacity and culture. This aligns with trust-based philanthropy principles that prioritise relationships over paperwork.
Be explicit about your position. Publish a clear statement on whether AI-generated applications are acceptable, and if so, whether any disclosure is expected. Ambiguity creates anxiety for applicants who are unsure whether using AI will count against them.
What does a responsible AI policy look like for funders?
A practical AI policy for a grantmaking organisation does not need to be lengthy, but it should cover five areas.
Scope. Which AI tools are approved for use, by whom, and at which stages of the grantmaking process? This prevents ad hoc adoption where individual staff members use consumer AI tools without oversight.
Human oversight. Define the decision points where human review is mandatory. At minimum, this should include eligibility decisions, funding recommendations to panels, applicant feedback and any communication that goes to applicants or grantees.
Data handling. Specify how applicant data flows through AI systems. Where is it processed? Is it sent to external APIs? Is it used to train models? What encryption and access controls are in place? Ensure compliance with UK GDPR and your own data retention policies.
Bias monitoring. Commit to reviewing AI-assisted outcomes at regular intervals (quarterly or per funding round) to check for patterns of disadvantage by geography, organisation size, sector or demographic characteristics.
Transparency. State publicly that AI is used in your grantmaking process, explain what it does and what it does not do, and provide an avenue for applicants to query or appeal decisions. The ACF blog on foundations and AI notes that openness about AI use builds trust across the sector, even when the technology is imperfect.
How are UK funders using AI in practice today?
Adoption varies significantly across the sector. Community foundations and larger trusts have been among the earliest adopters, while smaller family foundations tend to rely more on manual processes.
Due diligence automation. Several UK community foundations now use AI to automatically verify charity registration, download and analyse annual accounts, scan uploaded policy documents and flag reputational risks from news sources. This has reduced due diligence time from hours to minutes per application while improving consistency across assessors.
Assessment support. AI reads applications and extracts evidence against specific assessment criteria, presenting it in a structured format for human reviewers. This means assessors start with the relevant information already highlighted rather than reading every application from scratch. Some foundations report needing only one assessor per fund instead of three to five, freeing staff to focus on relationship-building and strategic work.
Feedback generation. One of the most impactful uses is generating personalised, constructive feedback for every applicant, including unsuccessful ones. Historically, many funders could only afford to send generic rejection letters. AI-drafted feedback, reviewed and edited by a grants officer, helps applicants improve future bids and strengthens the funder-applicant relationship. This aligns with established guidance on how to give better feedback to applicants.
Monitoring and reporting. Tools like Plinth compare progress reports against original application commitments, automatically highlighting where delivery is on track, ahead or behind. Impact reports can be generated on demand rather than compiled manually once a year. Plinth's AI reads narrative monitoring reports to extract outcomes, beneficiary data and delivery status, then generates tailored reports for different stakeholders: trustees, donors or policymakers.
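Setting Plinth's implementation aside, the underlying comparison is straightforward once outcomes have been extracted into structured form. A minimal generic sketch, assuming commitments and reported outcomes are simple target and actual counts (the structure is hypothetical, not Plinth's data model):

```python
def compare_progress(commitments: dict[str, int],
                     reported: dict[str, int]) -> dict[str, str]:
    """Label each committed outcome as on track, behind or missing."""
    status = {}
    for outcome, target in commitments.items():
        actual = reported.get(outcome)
        if actual is None:
            status[outcome] = "missing from report"
        elif actual >= target:
            status[outcome] = f"on track ({actual}/{target})"
        else:
            status[outcome] = f"behind ({actual}/{target})"
    return status


# Example:
# compare_progress({"workshops delivered": 12}, {"workshops delivered": 9})
# returns {"workshops delivered": "behind (9/12)"}
```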
What should funders consider before adopting AI?
Before investing in AI tools, grantmakers should work through a set of practical questions.
Start with the problem, not the technology. Identify where your team spends the most time on low-judgement, high-volume tasks. Due diligence, application summarisation and report drafting are common starting points. Avoid implementing AI in areas where relational judgement is the primary value, such as decisions about strategic priorities.
Assess your data readiness. AI tools work best when they have structured, consistent data to work with. If your current process relies on email attachments, Word documents and spreadsheets with inconsistent formats, you may need to address data infrastructure before AI can add value. Moving from spreadsheets to a dedicated grant management system is often a prerequisite.
Choose tools with audit trails. Any AI tool used in grantmaking should log its inputs, outputs and the human decisions that follow. This is essential for regulatory compliance, board accountability and continuous improvement. Black-box systems that provide recommendations without explanations are not suitable for funding decisions.
Plan for ongoing costs. AI tools have running costs beyond the initial licence fee, including staff training, policy development, bias monitoring and periodic review. Factor these into your budget alongside the time savings.
Start small and evaluate. Run AI-assisted assessment alongside your existing process for one funding round before switching entirely. Compare outcomes, check for bias and gather feedback from both staff and applicants. Tools like Plinth offer flexible deployment options, from a standalone assessment portal to a complete grant management system, allowing funders to start with due diligence and expand as confidence grows.
FAQs
Will AI introduce bias into our grantmaking?
AI can introduce or amplify bias if it learns from historically unrepresentative data. Mitigation requires auditing training data, monitoring outcomes across demographic and geographic dimensions, and keeping final decisions with diverse human panels. Using AI for information extraction rather than scoring reduces the risk significantly.
Is applicant data safe when using AI tools?
Data safety depends on the specific tool and configuration. Key questions to ask your provider: is data processed within the UK or EU? Is it used to train the provider's models? Is it encrypted at rest and in transit? Plinth, for example, hosts data on Google Cloud Platform in Europe, encrypts all data and does not use applicant information to train shared AI models.
Can AI replace grants officers?
No. AI reduces administrative burden so that grants officers can focus on relationship-building, strategic assessment and support. The Charity Commission is clear that decision-making responsibility must remain with people. AI is best understood as a tool that handles the repetitive work, not a replacement for professional judgement.
Do we need a DPIA to use AI in grantmaking?
If your AI system processes personal data and the processing is likely to result in high risk to individuals, the ICO recommends conducting a Data Protection Impact Assessment. In practice, most AI-assisted grantmaking systems that handle applicant personal data should have a DPIA in place. The assessment should cover data flows, risks and mitigation measures.
Should we tell applicants we use AI?
Yes. Transparency is both a legal expectation under UK GDPR and a trust-building practice. Your privacy notice and application guidance should explain that AI tools are used, what role they play, and that human reviewers make all funding decisions. Applicants are more likely to trust the process when they understand how it works.
How do we detect AI-generated applications?
Reliable detection of AI-generated text is increasingly difficult. Rather than investing in detection tools, focus on verification: cross-reference claims against public records, request specific evidence that cannot be easily fabricated, and supplement written applications with conversations where appropriate.
What AI governance framework should we follow?
The Technology Association of Grantmakers (TAG) published a responsible AI adoption framework in 2024 that covers governance, transparency and monitoring. In the UK, the ICO's guidance on AI and data protection provides the regulatory baseline. The Charity Commission expects trustees to apply existing governance principles to AI as they would to any new technology.
How much time can AI save in grantmaking?
Time savings vary by task and volume. Foundations using AI-assisted due diligence typically report reducing assessment time from 2-3 hours to 20-30 minutes per application. Feedback generation, which previously was not done at all for most applicants, can now be delivered at scale. Impact reporting moves from a multi-day manual exercise to an on-demand process.
Recommended next pages
- Human-in-the-Loop Grantmaking: Why It Matters — How to balance AI automation with ethical oversight and human judgement
- How to Automate Due Diligence in Grantmaking — Practical workflows for streamlining grantee verification checks
- Ethical Considerations in AI Grantmaking — A deeper look at fairness, accountability and transparency
- Data Security in AI Grant Systems — Technical safeguards for protecting applicant data
- AI and Accessibility in Grantmaking — Ensuring AI tools do not create new barriers for applicants
Last updated: February 2026