AI-Driven vs. Traditional Grantmaking: Finding the Right Balance
How funders can combine AI efficiency with human judgement in grantmaking — covering triage, due diligence, assessment, and feedback without sacrificing trust.
The question facing UK funders is no longer whether to use AI in grantmaking, but where it helps and where it harms. UK charitable foundations distributed a record £8.2 billion in grants in 2023-24, a 12% increase on the previous year (UKGrantmaking, 2025). At the same time, application volumes are rising sharply — some funders report increases of 50-100% — driven in part by applicants using AI to write proposals faster. The old approach of reading every word of every application, discussing them at length in panel, and sending a two-line decision letter is becoming untenable at scale.
But speed is not the same as good judgement. A 2025 Candid survey found that just 1% of foundations are using AI to screen applicants or help decide whom to fund; 97% said they were not. The caution is understandable: grant decisions affect real organisations and real communities, and the sector rightly demands fairness, transparency, and accountability. The risk of a poorly implemented AI system rejecting a strong application — or systematically disadvantaging certain types of organisation — is not hypothetical. A Nature investigation in 2025 reported on a Spanish foundation using AI to screen grants, prompting significant debate about trust and fairness.
The answer, as most experienced funders recognise, is not AI or humans. It is AI and humans, each doing what they do best. This guide examines where AI genuinely adds value in the grantmaking lifecycle, where human judgement remains essential, and how to implement a balanced approach that your applicants, board, and beneficiaries can trust.
What Does Traditional Grantmaking Actually Look Like?
Traditional grantmaking follows a broadly consistent pattern across most UK trusts and foundations. Applications arrive — often as Word documents or PDFs attached to emails — and programme officers read them, request missing information, and prepare summaries for a panel or board. The panel discusses applications, sometimes with scoring frameworks but often through unstructured conversation, and reaches decisions. Successful applicants receive a grant agreement; unsuccessful ones receive a brief rejection, sometimes with no feedback at all.
This process has real strengths. It allows experienced programme officers to exercise nuanced judgement, pick up on contextual factors that a checklist might miss, and build relationships with applicants. Trustees and panel members bring diverse perspectives and local knowledge. The deliberative nature of panel discussions can surface concerns that no individual reviewer would have identified alone.
But it also has well-documented weaknesses. Processing times vary enormously — community and local authority grants can be decided within a month, while national programmes routinely take three to six months from deadline to decision. The administrative burden is significant: programme officers at mid-sized trusts frequently report spending the majority of their time on tasks that could be automated, leaving less capacity for the relationship-building and strategic thinking that actually improve outcomes.
Inconsistency is another problem. Without structured scoring, panel discussions tend to favour articulate applicants and penalise smaller organisations that lack professional bid writers. IVAR's research on open and trusting grantmaking, which now involves over 150 funders making grants worth more than £1 billion, highlights that many charities experience the application process as burdensome and opaque, regardless of the funder's good intentions.
Where Does AI Add Genuine Value?
AI is not a single technology. It is a collection of capabilities — pattern recognition, natural language processing, document analysis, data extraction — that can be applied to specific tasks within the grantmaking lifecycle. The key is matching the right capability to the right task.
Application triage and eligibility screening. When a fund receives hundreds of applications, the first task is identifying which ones meet basic eligibility criteria: correct legal structure, operating in the right geography, requesting an amount within the fund's range. This is mechanical work that AI handles reliably, freeing programme officers to focus on substantive assessment.
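To make this concrete, the mechanical core of triage can be expressed as a handful of explicit rules. The sketch below is a minimal, hypothetical Python example; the field names, legal structures, and thresholds are illustrative assumptions, not a description of any particular platform.

```python
from dataclasses import dataclass

# Illustrative criteria for a hypothetical small grants programme.
ELIGIBLE_STRUCTURES = {"registered charity", "cio", "community interest company"}
ELIGIBLE_REGIONS = {"greater manchester"}
MAX_AWARD = 10_000

@dataclass
class Application:
    org_name: str
    legal_structure: str
    region: str
    amount_requested: int

def screen(app: Application) -> list[str]:
    """Return human-readable eligibility flags; an empty list means the application passes."""
    flags = []
    if app.legal_structure.lower() not in ELIGIBLE_STRUCTURES:
        flags.append(f"Legal structure '{app.legal_structure}' is outside the fund's criteria")
    if app.region.lower() not in ELIGIBLE_REGIONS:
        flags.append(f"Region '{app.region}' is outside the funded area")
    if app.amount_requested > MAX_AWARD:
        flags.append(f"Requested £{app.amount_requested:,} exceeds the £{MAX_AWARD:,} maximum")
    return flags

# Flagged applications are routed to a programme officer for
# confirmation rather than rejected automatically.
```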
Due diligence document review. Checking governance documents, safeguarding policies, accounts, and insurance certificates is essential but time-consuming. AI can read these documents, cross-reference them against charity register data from the Charity Commission or Companies House, and flag specific issues — an outdated safeguarding policy, a missing dissolution clause, accounts showing a significant deficit — in seconds rather than hours.
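The flagging step itself reduces to comparing facts extracted from documents against a checklist. A minimal sketch, assuming extraction has already produced structured fields and that register data has been fetched separately from the Charity Commission's public register; the field names and the three-year safeguarding threshold are assumptions:

```python
from datetime import date

def review_due_diligence(extracted: dict, register: dict) -> list[dict]:
    """Compare extracted document facts against a checklist and return severity-graded flags."""
    flags = []

    # Safeguarding policies without a recent review date merit attention.
    reviewed = extracted.get("safeguarding_last_reviewed")
    if reviewed is None:
        flags.append({"severity": "high", "issue": "No safeguarding policy review date found"})
    elif (date.today() - reviewed).days > 3 * 365:
        flags.append({"severity": "medium", "issue": f"Safeguarding policy last reviewed {reviewed}"})

    # A deficit in the latest accounts is a prompt for human follow-up, not a verdict.
    if extracted.get("net_income", 0) < 0:
        flags.append({"severity": "medium", "issue": "Latest accounts show a deficit"})

    # Cross-reference the stated charity number against the register record.
    if extracted.get("charity_number") != register.get("charity_number"):
        flags.append({"severity": "high", "issue": "Charity number does not match the register"})

    return flags
```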
Data extraction and summarisation. Pulling key figures from applications, monitoring reports, and financial documents into structured formats allows funders to compare applications consistently and build portfolio-level insights.
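Extraction pays off when the figures land in a fixed schema, so every application can be compared on the same fields. A sketch of what such a schema might look like; the choice of fields is an illustrative assumption:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtractedFinancials:
    """Key figures pulled from an application and its latest accounts."""
    total_income: int
    total_expenditure: int
    unrestricted_reserves: int
    amount_requested: int
    project_budget: Optional[int] = None

    @property
    def months_of_reserves(self) -> float:
        """Unrestricted reserves expressed as months of running costs."""
        if self.total_expenditure <= 0:
            return 0.0
        return round(self.unrestricted_reserves / (self.total_expenditure / 12), 1)
```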
Feedback drafting. Writing individual feedback for every applicant is one of the most valued but resource-intensive parts of grantmaking. AI can draft feedback based on assessment scores and application content, which a programme officer then reviews and personalises.
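Under the hood, feedback drafting is essentially templating plus generation. The sketch below shows only the assembly step, the part a funder controls directly; the actual model call is left out, since it depends on the platform in use:

```python
def feedback_prompt(org_name: str, scores: dict[str, int], notes: str) -> str:
    """Assemble the context from which a language model drafts feedback."""
    criteria = "\n".join(f"- {c}: {s}/10" for c, s in scores.items())
    return (
        f"Draft constructive feedback for {org_name}.\n"
        f"Assessment scores:\n{criteria}\n"
        f"Assessor notes: {notes}\n"
        "Tone: warm, specific, and honest. A programme officer will "
        "review and edit this draft before it is sent."
    )
```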
The Charity Digital Skills Report 2024 found that 61% of UK charities are now using AI tools in their day-to-day operations, up from 35% in 2023. Among funders specifically, ACF reported in 2024 that 18% of its members are already using AI in areas of their day-to-day work, with a further 40% exploring its possibilities.
Where Should Humans Stay in Control?
Not everything should be automated, and the sector is right to be cautious. The Center for Effective Philanthropy's 2025 report "AI With Purpose" found that almost two-thirds of nonprofits and foundations report that none or just a few of their staff have a solid understanding of AI and its applications. Deploying AI in areas where staff cannot meaningfully oversee it is a recipe for problems.
Final funding decisions. Whether to fund an organisation is a judgement call that weighs strategy, risk, relationships, and values. AI can inform that decision by surfacing relevant data, but it should not make it. Panel members and trustees bring contextual knowledge — about local needs, about an applicant's track record, about strategic priorities — that no model can replicate.
Relationship management. The best grantmaking is relational, not transactional. Building trust with grantees, understanding their challenges, and offering non-financial support requires human empathy and experience. IVAR's Open and Trusting initiative emphasises that good grantmaking "starts from the assumption that charities know their own business" — a principle that depends on genuine human engagement.
Ethical judgement and edge cases. Some applications sit in grey areas: an organisation that meets all the criteria but whose approach raises ethical questions; a first-time applicant with an unconventional model; a grantee whose monitoring report reveals unexpected challenges. These situations require the kind of contextual reasoning that AI consistently struggles with.
Communicating difficult decisions. Rejecting an application is a significant moment for the applicant. While AI can draft feedback, the decision about tone, the acknowledgement of effort, and the offer of constructive guidance should reflect human judgement and care.
AI-Driven vs. Traditional Grantmaking: A Practical Comparison
The following table maps specific grantmaking tasks to the approach that typically delivers the best results.
| Task | Traditional Approach | AI-Assisted Approach | Recommended |
|---|---|---|---|
| Eligibility screening | Programme officer reads each application manually | AI checks criteria automatically, flags exceptions | AI-assisted |
| Due diligence document review | Officer reads governance docs, accounts, policies | AI analyses documents, cross-references registry data, flags issues | AI-assisted with human review |
| Application scoring | Panel discussion, often unstructured | Structured scoring with AI-generated summaries to support reviewers | Hybrid |
| Final funding decision | Trustee or panel vote | Trustee or panel vote, informed by AI analysis | Human-led |
| Applicant feedback | Brief letter or no feedback | AI drafts personalised feedback from scores; officer reviews | AI-assisted with human review |
| Monitoring report review | Officer reads narrative reports | AI extracts data and flags concerns; officer follows up | AI-assisted |
| Portfolio analysis | Manual spreadsheet compilation | AI queries across all applications and reports | AI-assisted |
| Relationship management | Phone calls, site visits, ongoing dialogue | Not applicable | Human-led |
| Fraud and risk detection | Spot checks, intuition | AI flags anomalies in documents and financials | AI-assisted with human decision |
The pattern is clear: AI excels at processing, extraction, and consistency. Humans excel at judgement, relationships, and ethics. The strongest approach uses both.
How Do You Implement a Human-in-the-Loop Model?
A human-in-the-loop model means that AI performs specific tasks but a person reviews, adjusts, and approves the output before it affects applicants. This is not a compromise — it is the design pattern that most regulated industries, from medicine to financial services, have converged on for high-stakes decisions.
Step 1: Map your workflow. Identify every step in your grantmaking process, from the moment an application arrives to the final report. For each step, ask: is this primarily about processing information, or about exercising judgement? Processing steps are candidates for AI assistance.
Step 2: Define escalation rules. Not every application needs the same level of human review. A straightforward application to a small grants programme with clear eligibility criteria might need minimal human intervention beyond the final decision. A complex multi-year partnership requires deep engagement at every stage. Define thresholds based on grant size, complexity, and risk.
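Such thresholds are straightforward to encode. A minimal sketch, assuming three review tiers; the cut-offs are illustrative, not recommendations:

```python
def review_tier(amount: int, duration_years: int, flag_count: int) -> str:
    """Route an application to a human review tier based on size, length, and AI flags."""
    if amount > 25_000 or duration_years > 1 or flag_count >= 3:
        return "full-review"       # officer reads everything; panel discusses in depth
    if flag_count > 0:
        return "targeted-review"   # officer focuses on the flagged issues
    return "light-touch"           # officer confirms the AI summary before the panel decides
```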
Step 3: Make AI outputs visible and explainable. When AI flags an issue in a due diligence document or suggests a score, the programme officer needs to see why. Opaque outputs — a score with no explanation — undermine trust and prevent meaningful oversight. The best systems present their reasoning alongside their conclusions.
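In practice, explainability can be as simple as never emitting a flag or score without the evidence behind it. A sketch of the shape such an output might take; the field names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Flag:
    """An AI-raised issue, always paired with its supporting evidence."""
    issue: str      # what was found
    evidence: str   # the passage or figure that triggered the flag
    source: str     # which document it came from
    severity: str   # "low" | "medium" | "high"

flag = Flag(
    issue="Safeguarding policy predates current guidance",
    evidence="'Last reviewed: March 2019' (p. 2)",
    source="safeguarding-policy.pdf",
    severity="medium",
)
# An officer can accept or dismiss the flag because they can see
# exactly what it rests on, rather than trusting an opaque score.
```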
Step 4: Build in feedback loops. Programme officers should be able to override AI suggestions and record why. Over time, this feedback improves the system and creates an audit trail that demonstrates human oversight to your board and to applicants.
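The record-keeping here need not be elaborate: an append-only log of what the AI suggested, what the officer decided, and why covers both improvement and audit. A minimal sketch using a JSON Lines file:

```python
import json
from datetime import datetime, timezone

def log_override(application_id: str, ai_suggestion: str,
                 officer_decision: str, reason: str,
                 path: str = "override-log.jsonl") -> None:
    """Append one override record to an append-only audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "application_id": application_id,
        "ai_suggestion": ai_suggestion,
        "officer_decision": officer_decision,
        "reason": reason,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_override("APP-0231", "flag: accounts show deficit", "cleared",
             "Deficit reflects planned spend-down of restricted reserves")
```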
Step 5: Communicate transparently. Tell applicants how you use technology in your process. The Blackbaud Institute's 2025 Status of UK Fundraising report found that 70% of charity sector participants feel positively about AI use — but that goodwill depends on transparency. If applicants discover you are using AI without having told them, trust erodes quickly.
What Are the Risks of Getting the Balance Wrong?
The risks of over-relying on AI in grantmaking are real and well-documented. Equally, the risks of ignoring AI entirely are growing as application volumes increase and sector expectations rise.
Risk: Algorithmic bias. AI systems learn from historical data. If your past funding patterns show a bias — towards larger organisations, towards certain geographies, towards applicants who use particular language — an AI system trained on that data will replicate and amplify those biases. The CEP's 2025 report found that 85% of nonprofits are not currently engaging in efforts to advance equitable AI, highlighting a significant gap in attention to this issue.
Risk: Deskilling. If programme officers stop reading applications because AI summaries are "good enough," the organisation loses the deep familiarity with its portfolio that enables good strategic decisions. AI should augment expertise, not replace it.
Risk: Applicant distrust. The Nature investigation into AI-assisted grant screening in 2025 found significant concern among applicants about having their proposals assessed by machines. Funders who implement AI without transparency risk damaging relationships with the organisations they exist to support.
Risk: Falling behind. On the other side, funders who reject AI entirely face growing practical challenges. The Blackbaud Institute reported that AI usage across UK charities rose from 57% in 2024 to 77% in 2025. Applicants are increasingly using AI to write proposals, which means application volumes will continue to rise. Funders without efficient processing capabilities will face longer decision times, higher administrative costs, and less capacity for the relational work that matters most.
Risk: Compliance gaps. Manual processes are more prone to inconsistency and human error. An AI system that checks every governance document against the same criteria is more reliable than a programme officer checking fifty documents on a Friday afternoon. The question is not whether mistakes happen, but whether you have a system that catches them consistently.
What Does This Look Like in Practice?
Consider a UK community foundation running a small grants programme — perhaps 200-400 applications per round for grants of £1,000-£10,000. In a traditional process, two programme officers might spend several weeks reading applications, chasing missing documents, preparing panel papers, and writing decision letters. In an AI-assisted process, the workflow might look like this:
Day 1-2: Applications close. AI checks eligibility criteria and flags any applications that do not meet basic requirements (wrong legal structure, outside the geographic area, requesting more than the maximum amount). Programme officers review the flagged applications to confirm.
Day 3-5: AI analyses due diligence documents — governance documents, safeguarding policies, accounts, bank statements — and produces a structured summary with flagged issues for each application. Programme officers review the summaries, focusing their time on applications where AI has identified concerns rather than reading every document from scratch.
Week 2: AI generates application summaries and preliminary assessments against the fund's criteria. Programme officers read the summaries alongside the original applications, adjust scores where their judgement differs, and prepare panel papers.
Week 3: Panel meets, reviews applications with the benefit of consistent summaries and structured scores, and makes funding decisions.
Week 3-4: AI drafts personalised feedback letters based on panel scores and application content. Programme officers review and adjust the feedback before it is sent to applicants.
This approach does not remove humans from the process. It removes the tedious, repetitive parts of the process so that humans can focus on the parts that require their expertise. Tools like Plinth are designed around exactly this model — AI handles document analysis, data extraction, and draft generation, while programme officers and panels retain full control over decisions and communications. Plinth's AI assistant, Pippin, reviews due diligence documents against detailed checklists covering governance, safeguarding, equality and diversity, accounts, and insurance, then presents findings with specific issues flagged by severity for human review. The platform also generates tailored feedback for applicants based on human assessor scores, which officers can edit before sending.
How Should You Govern AI Use in Your Grantmaking?
Good governance of AI in grantmaking starts with the same principles that govern good grantmaking generally: transparency, accountability, proportionality, and a commitment to the interests of applicants and beneficiaries.
Develop a clear AI policy. Only 16% of UK charities had an AI policy in place in 2025, up from 5% in 2024 (Blackbaud Institute, 2025). Foundations should lead by example. Your policy should state which tasks use AI, what oversight mechanisms are in place, how you handle data, and how applicants can query decisions.
Assign responsibility. Someone on your team — typically the programme director or operations lead — should be responsible for overseeing AI use, reviewing outputs regularly, and ensuring the system works as intended. AI governance is not an IT function; it is a programme quality function.
Audit for bias regularly. At least annually, review your AI-assisted decisions against your funding patterns. Are certain types of organisation consistently scored lower? Are due diligence flags disproportionately affecting smaller or newer charities? If so, adjust the system.
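A bias audit can begin with something as simple as comparing score distributions across organisation types. A sketch, assuming assessment records that carry an income band per applicant; the field names are assumptions:

```python
from collections import defaultdict
from statistics import mean

def scores_by_band(assessments: list[dict]) -> dict[str, float]:
    """Mean assessment score per applicant income band."""
    grouped = defaultdict(list)
    for a in assessments:
        grouped[a["income_band"]].append(a["score"])
    return {band: round(mean(vals), 2) for band, vals in grouped.items()}

# A persistent gap between bands (say, small charities averaging 5.8
# against 7.1 for the largest) is a prompt for closer investigation,
# not proof of bias in itself.
```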
Maintain audit trails. Every AI output, every human override, and every final decision should be logged. This protects your organisation, demonstrates due diligence to regulators, and provides the data you need to improve over time.
Engage your board. Trustees need to understand how AI is being used and what safeguards are in place. This does not require technical expertise — it requires the same kind of informed oversight that trustees apply to investment decisions or safeguarding policies. The ACF has noted that foundations "can find ways to harness the potential benefits of AI safely — enhancing rather than replacing the uniquely human value that trustees, staff and stakeholders bring to their work."
What Does the Evidence Say About AI in Grantmaking Outcomes?
The evidence base for AI in grantmaking is still developing, which is itself an argument for cautious, well-governed adoption rather than wholesale transformation. What we do know comes from adjacent fields and from early adopters.
Processing efficiency. The clearest evidence is around time savings. Due diligence checks that take a programme officer 30-60 minutes per application can be completed in under a minute by AI, with the officer spending 5-10 minutes reviewing the output. For a fund receiving 300 applications, that is the difference between weeks and days of due diligence work.
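Taking the mid-points of those ranges: 300 checks at 45 minutes each is about 225 hours of work, or roughly 30 working days, while 300 reviews at 7-8 minutes each come to around 37 hours, or about five working days.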
Consistency. Structured AI-assisted scoring produces more consistent results than unstructured panel discussion. This does not mean AI scores are always correct — they are not — but they provide a consistent baseline that makes variations in human judgement more visible and discussable.
Applicant experience. Funders who provide personalised feedback see stronger reapplications and better relationships with their portfolio. AI-assisted feedback generation makes it practical to provide substantive feedback to every applicant, including unsuccessful ones — something that most funders acknowledge they should do but lack the capacity to deliver manually.
Portfolio intelligence. When application data, monitoring reports, and outcome data are structured rather than trapped in documents, funders can answer strategic questions about their portfolio. Plinth's impact analysis tools allow funders to query across all their applications and monitoring data using natural language, surfacing patterns that would take weeks to identify manually.
The Candid survey found that while only 1% of foundations currently use AI for screening decisions, 19% are actively considering it. The trajectory is clear, even if the pace remains measured.
FAQs
Do we need AI to run a good grants programme?
No. Many excellent funders operate effectively without AI, particularly those running small programmes with manageable application volumes. AI becomes most valuable when application numbers, due diligence requirements, or reporting volumes exceed what your team can handle consistently and promptly. The question is not whether you need AI, but whether your current capacity matches your ambitions.
Will applicants object to AI being used in our process?
Some may, particularly if they feel AI is making decisions rather than supporting them. Transparency is the key mitigation: tell applicants what role AI plays, make clear that humans make all funding decisions, and offer routes to query outcomes. The Blackbaud Institute found that 70% of charity sector professionals feel positively about AI when its use is disclosed and appropriate.
Can we use AI for some grant programmes but not others?
Yes, and this is often the most pragmatic approach. A high-volume small grants programme with clear eligibility criteria benefits enormously from AI triage and document review. A bespoke strategic partnership programme with five applicants may not need any AI involvement. Match the level of automation to the nature and scale of the programme.
How do we prevent AI from introducing bias into our decisions?
Start by acknowledging that your current process already contains biases — towards articulate applicants, towards familiar organisations, towards proposals that match your existing assumptions about what works. AI can replicate these biases if trained on historical data, but it can also make them visible and measurable in a way that unstructured human processes cannot. Regular audits of AI outputs against your equity objectives are essential.
What should we tell our board about AI in grantmaking?
Explain what specific tasks AI performs, what oversight mechanisms are in place, and what the benefits and risks are. Boards do not need to understand the technology — they need to understand the governance. Frame it as you would any operational change: what is the rationale, what are the safeguards, and how will you know if it is working?
Is it safe to use AI for due diligence checks?
AI is well-suited to document analysis tasks like reviewing governance documents, safeguarding policies, and accounts, provided a human reviews the output. The risk is not that AI will miss something a human would catch — in practice, AI is more consistent at checking against detailed criteria. The risk is that staff stop applying their own judgement because they trust the AI output without scrutiny. Maintain the expectation that programme officers review and validate AI findings.
How much does AI-assisted grantmaking cost?
Costs vary widely depending on the platform and the volume of applications. Some platforms, including Plinth, offer free tiers that include AI features, making it possible to trial AI-assisted grantmaking without significant upfront investment. The more relevant question is the cost of not using AI: longer processing times, less consistent assessments, and reduced capacity for the relational and strategic work that improves outcomes.
What happens if AI makes a mistake in our process?
The same thing that happens when a programme officer makes a mistake: it should be caught by the next person in the chain. This is why human-in-the-loop design matters. AI outputs should always be reviewed before they affect applicants. When errors are identified, they should be recorded and fed back to improve the system. No process — human or AI-assisted — is error-free; the question is whether errors are caught and corrected systematically.
Recommended Next Pages
- AI for Funders: The Future of Grantmaking — A broader look at how AI is changing foundation operations beyond grantmaking
- How to Automate Due Diligence in Grantmaking — Practical steps for implementing AI-assisted due diligence checks
- Human-in-the-Loop Grantmaking — A deeper dive into designing processes where AI supports rather than replaces human decisions
- How to Give Better Feedback to Grant Applicants — Why personalised feedback matters and how to deliver it at scale
- How Charities Experience the Application Process — The applicant perspective on grantmaking processes and technology
Last updated: February 2026