The Rise of AI in UK Grantmaking

How UK funders are adopting AI for eligibility checks, due diligence, application reviews and reporting — with real adoption data and practical lessons.

By Plinth Team

AI is no longer a future possibility for UK funders — it is already reshaping how grants are assessed, monitored and reported on. Over the past two years, a growing number of trusts, foundations and public-sector funders have moved from cautious curiosity to live pilots and full adoption.

The shift is being driven by practical pressures. Grant teams are processing more applications with the same (or fewer) staff. Reporting requirements from regulators and co-funders are increasing. And applicants themselves are already using AI to write applications, which means funders need better tools to assess them fairly.

According to the Charity Digital Skills Report 2025, 76% of UK charities are now using AI tools — up from 61% the previous year. On the funder side, CAST's AI for Grantmakers group has grown to over 400 people from more than 300 trusts and foundations, reflecting a sector-wide recognition that AI cannot be ignored.

This article explains where UK funders are seeing real results from AI, what the practical risks are, and how to get started without over-committing.

Where are UK funders actually using AI?

The most successful AI adoption in grantmaking targets repetitive, time-consuming tasks where consistency matters — not strategic judgement calls.

Eligibility screening is the most common starting point. AI can read an application, compare it against published criteria across categories like grant size, project location, target beneficiaries and allowed activities, and flag misalignments before a human reviewer starts. This catches basic ineligibility early and saves hours of reading applications that were never going to qualify.
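
To make this concrete, here is a minimal sketch of what rule-based pre-screening can look like. The criteria, field names and Application structure are illustrative assumptions, not any funder's actual rules or any product's implementation; in practice an AI system first extracts these fields from free-text answers before checks like these run.

```python
# Illustrative only: field names and criteria are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Application:
    amount_requested: int  # in GBP
    project_region: str
    beneficiary_group: str

# Published criteria for a hypothetical programme
CRITERIA = {
    "max_grant": 10_000,
    "eligible_regions": {"North West", "North East", "Yorkshire"},
    "eligible_beneficiaries": {"young people", "older people"},
}

def screen(app: Application) -> list[str]:
    """Return flags for a human reviewer; an empty list means no issues found."""
    flags = []
    if app.amount_requested > CRITERIA["max_grant"]:
        flags.append(f"Requested £{app.amount_requested:,} exceeds the £{CRITERIA['max_grant']:,} ceiling")
    if app.project_region not in CRITERIA["eligible_regions"]:
        flags.append(f"Region '{app.project_region}' is outside the funded area")
    if app.beneficiary_group not in CRITERIA["eligible_beneficiaries"]:
        flags.append(f"Beneficiary group '{app.beneficiary_group}' is not a programme priority")
    return flags

for flag in screen(Application(12_500, "London", "young people")):
    print("FLAG:", flag)
```

Note that the system only flags; the final eligibility decision stays with the reviewer, which is the pattern the rest of this article assumes.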

Due diligence checks are the second major area. AI can pull data from the Charity Commission register and Companies House, then assess uploaded documents — governance records, safeguarding policies, accounts, insurance certificates, inspection reports — against standard checklists. Rather than replacing human judgement, this gives reviewers a pre-prepared summary with severity-ranked issues highlighted.
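
As a concrete illustration of the register-lookup step, the sketch below queries the public Companies House API (a free API key is required, passed as the Basic-auth username with a blank password). The Charity Commission publishes a comparable register API that follows the same pattern. The key value and the fields retained here are placeholder choices; document assessment sits on top of lookups like this.

```python
# Minimal sketch of a register lookup against the public Companies House API.
# See https://developer.company-information.service.gov.uk for documentation.
import requests

API_KEY = "your-companies-house-api-key"  # placeholder, not a real key

def company_basics(company_number: str) -> dict:
    """Fetch name, status and incorporation date for a registered company."""
    resp = requests.get(
        f"https://api.company-information.service.gov.uk/company/{company_number}",
        auth=(API_KEY, ""),  # API key as username, password left blank
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    return {
        "name": data.get("company_name"),
        "status": data.get("company_status"),  # e.g. "active", "dissolved"
        "incorporated": data.get("date_of_creation"),
    }

# A dissolved or never-registered organisation is an immediate red flag
# worth surfacing before any document review begins.
print(company_basics("01234567"))  # example company number
```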

Application feedback is where AI adds the most value for applicants. Generating personalised, constructive feedback on unsuccessful applications has historically been too time-consuming. AI can draft feedback based on reviewer scores and notes, with tone controls that range from supportive to direct.

Reporting and monitoring is the emerging frontier. AI can summarise progress reports, compare them against workplan milestones, and surface patterns across a portfolio of grants.

| Use case | What AI does | Human role | Typical time saving |
| --- | --- | --- | --- |
| Eligibility screening | Reads application against criteria, flags misalignments | Reviews flags, makes final decision | 60-80% of initial triage time |
| Due diligence | Checks registers, assesses documents, ranks issues | Reviews findings, handles escalations | 50-70% of routine checks |
| Application feedback | Drafts personalised feedback from scores and notes | Edits tone and content, approves | 75-90% of drafting time |
| Report summarisation | Extracts key points from progress reports | Validates accuracy, follows up | 40-60% of reading time |

What does AI due diligence look like in practice?

AI-powered due diligence is one of the most mature use cases because the checks are well-defined and the source data is structured.

A typical AI due diligence workflow covers 15 or more document types: governance documents, safeguarding policies, equality and diversity policies, insurance certificates, accounts, bank statements, financial transactions, quotes, inspection reports (Ofsted, CQC), budgets, risk assessments, project proposals and management accounts. For each, AI assesses quality and completeness, then generates severity-ranked issues — distinguishing between high-priority problems (missing DBS policy for work with children) and medium-priority gaps (insurance renewal date approaching).
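
The shape of the output matters as much as the checks themselves. The sketch below shows one plausible data structure for severity-ranked findings; the fields and severity levels are assumptions for illustration, not a product schema.

```python
# Illustrative shape for severity-ranked due diligence findings.
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    HIGH = 1    # blocks approval until resolved, e.g. missing DBS policy
    MEDIUM = 2  # needs follow-up, e.g. insurance renewal approaching
    LOW = 3     # note for the file

@dataclass
class Finding:
    document_type: str  # e.g. "safeguarding policy", "insurance certificate"
    severity: Severity
    summary: str

def triage(findings: list[Finding]) -> list[Finding]:
    """Order findings so a reviewer sees high-priority problems first."""
    return sorted(findings, key=lambda f: f.severity.value)

for f in triage([
    Finding("insurance certificate", Severity.MEDIUM, "Policy expires in 6 weeks"),
    Finding("safeguarding policy", Severity.HIGH, "No DBS check policy for work with children"),
]):
    print(f"[{f.severity.name}] {f.document_type}: {f.summary}")
```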

The Charity Commission's 2024 blog post on AI noted that trustees should consider "how they might use AI and if the options available are right for their charity, thinking about the advantages and risks." For funders running due diligence, this means maintaining clear records of what was checked by AI versus by a person — creating the audit trail regulators expect.

Proportionality matters. Micro-grants under £5,000 might need only basic register checks, which AI can complete in seconds. Larger multi-year grants warrant deeper document review where AI handles the first pass and a reviewer focuses on judgement calls.
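
Tiering like this can be expressed as a simple rule. The thresholds below are illustrative only; each funder sets its own.

```python
# Sketch of proportionate due diligence tiers; thresholds are examples, not guidance.
def diligence_tier(grant_amount: int) -> str:
    if grant_amount < 5_000:
        return "basic"      # automated register checks only
    if grant_amount < 50_000:
        return "standard"   # register checks plus core policy documents
    return "enhanced"       # full document review, human judgement on every finding

for amount in (2_000, 20_000, 150_000):
    print(f"£{amount:,} -> {diligence_tier(amount)} checks")
```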

How is AI changing the application experience for charities?

AI adoption is a two-sided story. While funders adopt AI to process applications, charities are using AI to write them — and this is creating new dynamics.

The Charity Digital Skills Report 2025 found that more than a third of charities are now using AI in grant fundraising. This ranges from using ChatGPT to draft answers to using purpose-built tools that pull from actual programme data to generate evidence-backed responses.

The distinction matters. Generic AI-written applications tend to be fluent but vague — they read well but lack the specific data points funders need to assess impact. Data-connected AI tools produce stronger applications because they draw on real case studies, outcomes data and programme details.

For funders, this means:

  • Volume may increase. AI makes it faster to submit applications, so some programmes see more submissions.
  • Quality diverges. The best AI-assisted applications are stronger than before; the worst are polished but empty.
  • Assessment needs to evolve. Reviewers increasingly need to look past surface-level writing quality to evaluate substance and evidence.

IVAR has been exploring these dynamics through its collaboration with CAST, noting that funders need to adapt their assessment criteria as AI-written applications become the norm rather than the exception.

What are the risks of AI in grantmaking?

The risks are real but manageable with the right controls.

Bias is the primary concern. AI models can reflect biases present in training data or in historical funding patterns. If a system is trained on past decisions, it may perpetuate existing inequalities — favouring established organisations over grassroots groups, or urban projects over rural ones. The mitigation is human-in-the-loop review: AI assists, but a person makes the decision and can override AI recommendations.

Data protection requires careful attention. Grant applications contain personal data about beneficiaries, staff and trustees. Any AI system processing this data needs a documented approach aligned with UK GDPR and the Data Protection Act 2018. This means clear data processing agreements, understood retention periods, and — for cloud-based AI — knowing where data is processed geographically.

Transparency is both a regulatory and reputational issue. Applicants should know if and how AI is used in assessing their applications. The Charity Commission has emphasised that decisions must remain human-led, and several funders now include statements in their guidance about AI use in their processes.

Over-reliance is perhaps the most subtle risk. AI is good at pattern-matching and summarisation, but poor at understanding context, nuance and the relationships that underpin effective grantmaking. The most effective implementations use AI to handle the mechanical work so that human reviewers can spend more time on the judgement that actually matters.

How should a funder get started with AI?

Start small, with a single programme and a clearly bounded task.

Step 1: Choose a narrow use case. Eligibility screening or document checks are good starting points because the criteria are explicit and the output is easily verified. Avoid starting with subjective tasks like scoring application quality.

Step 2: Run a parallel pilot. Process applications with both the existing method and the AI tool. Compare results. This builds confidence and catches issues before they affect real decisions.
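
One simple way to score a parallel pilot is agreement rate: how often the AI route reaches the same triage outcome as the existing process. The sketch below uses hypothetical application IDs and decision labels; the disagreements are where human review, and the Step 5 metrics, should focus.

```python
# Sketch of scoring a parallel pilot; IDs and labels are hypothetical.
existing = {"APP-001": "eligible", "APP-002": "ineligible", "APP-003": "eligible"}
ai_pilot = {"APP-001": "eligible", "APP-002": "ineligible", "APP-003": "ineligible"}

# How often did both routes reach the same outcome?
agree = sum(1 for app_id, decision in existing.items() if ai_pilot.get(app_id) == decision)
agreement_rate = agree / len(existing)

# Each disagreement should be examined by a person to see whether the AI
# or the existing process got it right before any decisions go live.
disagreements = [a for a in existing if ai_pilot.get(a) != existing[a]]
print(f"Agreement: {agreement_rate:.0%}; review these by hand: {disagreements}")
```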

Step 3: Get governance right. Brief your board or trustees. Document a proportionate Data Protection Impact Assessment (DPIA). Update privacy notices for applicants. The Charity Commission's position is clear: trustees are responsible for understanding the tools their organisation uses.

Step 4: Train your team. The Charity Digital Skills Report 2025 found that over a third of charity CEOs had "poor AI skills, knowledge and confidence" and four in ten boards were similarly rated. Grant teams need practical training — not on AI theory, but on how to use and verify AI outputs in their specific workflow.

Step 5: Measure and iterate. Track time saved, error rates, applicant satisfaction and reviewer feedback. Use this data to decide whether to expand to other programmes or use cases.

What does the Charity Commission say about AI?

The Charity Commission has taken a pragmatic rather than prescriptive approach. In its 2024 blog post on AI, it stated that it does not anticipate producing specific new guidance, preferring to encourage trustees to apply existing duties to new technologies.

The core position is:

  • Trustees must understand the tools their charity uses. This includes AI.
  • Decisions must remain human-led. AI can inform and assist, but accountability rests with people.
  • Existing duties apply. Duties of care, prudence, and compliance with data protection law are not changed by adopting AI.
  • Proportionality is key. The level of governance around AI should match the significance of how it is being used.

For funders, this means there is no regulatory barrier to using AI in grantmaking, but there is a clear expectation that it is done thoughtfully, with appropriate oversight and documentation.

How are UK funders collaborating on AI?

The UK funding sector has a strong tradition of peer learning, and AI is no exception.

CAST's AI for Grantmakers group, established in December 2023, has grown to over 400 participants from more than 300 trusts and foundations. The group shares practical experiences — what works, what does not, and what governance structures are needed.

IVAR's Open and Trusting community, comprising over 150 grant-making organisations making grants worth over £1 billion annually, has been connecting its work on proportionate grantmaking with AI adoption. The overlap is significant: the same principles that underpin open and trusting grantmaking — proportionality, transparency, responsiveness — also guide responsible AI adoption.

London Funders has highlighted the UK Government's AI Opportunities Action Plan and its implications for the funding sector, encouraging funders to engage with the broader policy landscape rather than treating AI as a purely internal technology decision.

These networks mean that a funder getting started with AI does not need to figure everything out alone. Shared templates for DPIAs, AI policies and governance frameworks are increasingly available through sector bodies.

What does purpose-built AI for grantmaking look like?

Generic AI tools like ChatGPT can help with individual tasks, but purpose-built grantmaking AI is fundamentally different because it is connected to the data that matters.

A purpose-built system like Plinth integrates AI directly into the grant lifecycle. Rather than copying and pasting between tools, AI works within the platform:

  • Eligibility screening compares applications against funder criteria automatically, providing structured feedback with positive alignments and areas of concern.
  • Due diligence pulls from Charity Commission and Companies House registers, then assesses uploaded documents across governance, safeguarding, finance, insurance and more — generating severity-ranked findings.
  • Application feedback uses reviewer scores and notes to generate personalised letters, with configurable tone from supportive to direct.
  • Form building uses AI to help funders design application forms, set conditional logic, and manage multi-stage processes.
  • Workplan and KPI generation analyses approved applications to suggest realistic milestones, deliverables, targets and measurement methods.

The difference between generic and purpose-built AI is data context. When AI can draw on actual programme data, case studies and outcomes — rather than just the text of an application — it produces outputs that are grounded in evidence rather than plausible-sounding generalities.

Plinth offers a free tier, making it accessible to smaller funders running their first AI-assisted programme.

What is the digital divide in AI adoption?

Not all funders are adopting AI at the same pace, and the gap is widening.

The Charity Digital Skills Report 2025 highlights that 68% of small charities are still in the early stages of digital adoption. Squeezed finances are the biggest barrier — cited by 69% of respondents — followed by finding funds to invest in infrastructure (64%).

For grantmaking specifically, this creates a two-speed sector:

  • Large foundations with dedicated digital teams are running sophisticated AI pilots, building bespoke tools, and sharing learnings through networks.
  • Small and medium funders often lack the staff, budget or technical confidence to get started, even when affordable tools exist.

This matters because small and medium funders collectively distribute a significant proportion of UK charitable funding. If AI benefits accrue only to the largest funders, the efficiency gains and quality improvements will be unevenly distributed.

Addressing this gap requires affordable, purpose-built tools that do not require technical expertise to set up. It also requires sector bodies to continue their work making AI knowledge accessible and practical rather than abstract.

FAQs

Is AI only useful for large foundations with big grants teams?

No. Smaller funders often benefit most because they have less staff time to spend on routine tasks like eligibility screening and due diligence. Tools with free tiers make AI accessible regardless of budget.

Will AI replace human grant reviewers?

No. AI handles repetitive, mechanical tasks — checking registers, summarising documents, drafting feedback. Human reviewers remain essential for judgement, relationships and strategic decisions about what to fund.

Do applicants need to be told if AI is used in assessment?

Yes, as a matter of good practice and transparency. Several funders now include statements about AI use in their application guidance. Data protection law may also require disclosure depending on how AI processes personal data.

How long does an AI pilot take to set up?

A focused pilot — such as using AI for eligibility screening on one programme round — can be set up in weeks using an existing platform. Building a bespoke system from scratch takes months.

What about AI-generated applications — should funders be concerned?

Rather than banning AI use by applicants, funders should assess substance over surface quality. Applications backed by real data and evidence are stronger regardless of whether AI helped draft them.

Does the Charity Commission regulate AI use in grantmaking?

Not specifically. The Commission expects trustees to apply existing duties — care, prudence, data protection — to AI adoption. There are no AI-specific regulations for funders, but general data protection and charity law apply.

What data protection steps are needed?

At minimum: a proportionate Data Protection Impact Assessment, updated privacy notices for applicants, clear data processing agreements with AI providers, and documented retention policies. Align with UK GDPR and the Data Protection Act 2018.

Where can funders learn from peers?

CAST's AI for Grantmakers group (400+ participants), IVAR's Open and Trusting community (150+ funders), and London Funders all offer peer learning opportunities focused on practical AI adoption.

Last updated: February 2026