Technology & Data

AI in the charity sector: where should the human stay in the loop?

UK charities are adopting AI rapidly, but algorithmic decision-making about vulnerable people, AI-written grant applications, and digital exclusion raise urgent ethical questions.

By Tom Neill-Eagle

The debate in brief

Three-quarters of UK charities are now using AI tools, up from around a third just two years ago. The adoption curve is steep and accelerating. AI is drafting grant applications, summarising casework, triaging service users, and generating fundraising content. For under-resourced organisations doing vital work, the efficiency gains are real.

But charities exist to serve vulnerable people, and the ethical stakes of getting AI wrong in this context are higher than in almost any other sector. Algorithmic systems trained on biased data can entrench the very inequalities charities exist to challenge. AI-generated grant applications risk undermining the trust relationship between funders and applicants. And the people most likely to be affected by AI-driven decisions are the least likely to have the digital access or literacy to understand or challenge them.

Quick takeaways

How many UK charities use AI?
76% are using AI tools in some form, according to the 2025 Charity Digital Skills Report, up from 61% in 2024 and 35% in 2023.

Are charities using AI in frontline services?
Barely. Only 8% of charities report using AI tools in service delivery, despite widespread experimentation in back-office functions.

Can charities use AI to write grant applications?
Yes, but funders are increasingly flagging concerns. IVAR warns that AI-generated applications obscure organisational voice and make it harder for funders to assess authenticity and capacity.

What about GDPR?
Article 22 of the UK GDPR restricts solely automated decision-making that has legal or similarly significant effects on individuals. Charities processing beneficiary data through AI tools must comply with the same data protection standards as any other processing.

Do charities have AI policies?
48% are developing one, triple the figure from 2024. But that still leaves the majority without formal governance around a technology they are already using.

What are the main concerns?
Data privacy, factual accuracy, bias and discrimination (cited by 32% of charities overall, rising to 49% among LGBTQIA+-led organisations), and the risk of deepening digital exclusion for the people charities serve.

The arguments

The case for AI adoption

Charities operate in a funding environment that demands more output from fewer resources. The employer National Insurance contribution increase alone added an estimated £1.4 billion in costs across the sector from April 2025. AI tools that automate administrative tasks, draft communications, and analyse data can free up staff time for the relational work that no algorithm can replace.

For small charities, the potential is significant. AI bid-writing tools can help organisations without dedicated fundraising staff structure stronger applications, reducing the advantage that well-resourced charities have in competitive funding processes. The Charity AI Task Force, launched in February 2025 by CAST and Zoe Amar Digital with more than 20 member organisations, is working to ensure the sector shapes AI adoption rather than simply reacting to it.

The strongest version of this argument is not that AI is a luxury but that refusing to engage with it is itself a risk. Charities that fall behind on digital capability may find themselves unable to compete for funding or deliver services to the standard that commissioners expect.

The case for extreme caution

The charity sector works with people in crisis: those experiencing homelessness, domestic abuse, mental health emergencies, addiction, poverty. An algorithm that triages someone incorrectly, deprioritises a safeguarding referral, or reproduces historical bias in resource allocation can cause direct harm to people with the least capacity to challenge it.

The evidence from adjacent sectors is cautionary. A DWP algorithm wrongly flagged over 200,000 housing benefit claims as high risk; around two-thirds of the flagged claims involved no fraud or error at all. The 2020 A-level grading algorithm systematically disadvantaged students from lower-income areas. These are documented failures, not hypothetical risks.

DataKind UK has been explicit: removing human intervention entirely has already proven harmful when supporting vulnerable people. The potential for bias is compounded by the data charities hold. Case management records, referral histories, and service usage data all reflect existing inequalities. An AI system trained on this data does not correct for those inequalities; it encodes them.
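The bias-encoding point can be made concrete with a toy sketch. The data, group names, and approval rates below are entirely hypothetical; the "model" is nothing more than per-group historical approval rates, which is effectively what a naive system learns from case records:

```python
# Hypothetical case records: (group, approved). Approvals were historically
# lower for group B because of bias in past decisions, not greater need.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 40 + [("B", 0)] * 60

def learned_rate(records, group):
    """Approval rate a naive system learns for a group from historical data."""
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

# The learned policy simply reproduces the historical gap between groups:
print(learned_rate(history, "A"))  # 0.8
print(learned_rate(history, "B"))  # 0.4
```

Nothing in the training step knows whether the gap reflects need or prejudice; without deliberate correction, the disparity is carried forward as if it were ground truth.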

The digital exclusion problem

The people charities serve are disproportionately digitally excluded. Good Things Foundation data shows that 15% of UK adults (approximately 7.9 million people) lack foundation-level essential digital skills, with those most affected being older adults, disabled people, and those without formal qualifications — groups that overlap heavily with charity beneficiaries. AI may make charities more efficient at serving those who can engage digitally, while widening the gap for those who cannot. The Charity Digital Skills Report 2025 found that 68% of small charities remain in early stages of digital adoption, suggesting the sector itself is not uniformly ready for AI, let alone the populations it serves.

The evidence

The 2025 Charity Digital Skills Report, based on 672 responses, provides the most comprehensive snapshot. AI use has more than doubled in two years, from 35% in 2023 to 76% in 2025. Among large charities, 81% are experimenting with AI in everyday work. But 58% express concerns about the implications, rising to 75% among large organisations. The risk of bias and discrimination is cited as a barrier by 32% of charities overall, rising sharply among organisations led by marginalised communities.

The gap between experimentation and deployment is telling. While three-quarters of charities are trying AI tools, only 8% use them in service delivery. The sector is using AI for back-office tasks rather than for decisions that directly affect beneficiaries.

On grant applications, IVAR found that AI-generated proposals obscure authorship and intent, undermining funders' ability to assess organisational voice and delivery capacity. More than 50% of ACF members reported experimenting with AI, but over 40% said they had no plans to use it, revealing a divided funder landscape.

The Fundraising Regulator published its first AI guidance in December 2025, requiring trustee accountability for AI use in fundraising and proportionate human oversight. The Charity Commission's April 2024 blog post was more measured, stating that it does not anticipate producing specific AI guidance but expects trustees to apply existing duties to new technologies.

Current context

The regulatory picture is developing rapidly. The Fundraising Regulator's December 2025 guidance was the first sector-specific AI regulatory intervention. The ICO's guidance on AI and data protection, co-badged with the Alan Turing Institute, covers requirements around Article 22 of the UK GDPR on solely automated decision-making. A joint statement from UK research funders including NIHR and Wellcome established that generative AI must not be used in peer-reviewing grant applications, though it may be used in preparing them if clearly acknowledged.

In Parliament, the Public Authority Algorithmic and Automated Decision-Making Systems Bill was debated in the House of Lords in December 2024, seeking transparency requirements when algorithms are used in public authority decisions — with implications for charities delivering statutory services under contract.

The proportion of charities developing an AI policy has tripled in a year, from 16% to 48%, but the majority still lack formal governance for a technology already embedded in their operations. The pace of adoption is outstripping the pace of governance.

Last updated: April 2026

What this means for charities

Charity leaders need to distinguish between AI uses that are low-risk and those that are high-stakes. Using AI to draft a newsletter or summarise meeting notes is qualitatively different from using it to triage safeguarding referrals or allocate service resources. The former needs a basic policy and quality checks; the latter needs robust governance, bias testing, transparency to beneficiaries, and genuine human decision-making authority.
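One way to operationalise that distinction is a risk-tiered routing rule. The sketch below is illustrative only, with hypothetical task categories; the point is the default: AI may draft low-risk outputs subject to a quality check, while high-stakes decisions always sit with a human.

```python
# Hypothetical task categories for illustration.
LOW_RISK = {"newsletter_draft", "meeting_summary"}
HIGH_STAKES = {"safeguarding_triage", "resource_allocation"}

def route(task: str) -> str:
    """Decide the level of human involvement for a given AI use."""
    if task in HIGH_STAKES:
        return "human decision required"      # AI may assist, never decide
    if task in LOW_RISK:
        return "AI draft, staff quality check"
    return "human review until risk assessed"  # unknown tasks default to caution

print(route("newsletter_draft"))     # AI draft, staff quality check
print(route("safeguarding_triage"))  # human decision required
```

The important design choice is the final branch: anything not yet classified defaults to human review, rather than to automation.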

Every charity using AI should have a written policy, approved by trustees, that covers what tools are permitted, what data may be shared with AI services, and where human oversight is required. The Charity Commission has been clear that trustees remain responsible for decisions regardless of the tools used to inform them.

On grant applications, charities should be honest with funders about AI use rather than risk the trust damage of being perceived as masking it. And organisations working with digitally excluded populations must ensure that AI-driven efficiency gains do not come at the cost of access for the people who need services most.

Common questions

Can charities use AI to write grant applications?

Yes, but with caveats. IVAR has warned that AI-generated proposals obscure organisational voice and make it harder for funders to assess authenticity. UK research funders including NIHR require that generative AI use in applications is clearly acknowledged. The business model question is also live: Plinth launched an AI grant-writing tool on a no-win, no-fee basis in 2024, charging 2% of successful grants, but dropped the commission model within months after sector backlash and limited take-up — settling on a flat subscription instead. The practical advice is to use AI as a drafting aid, not a substitute for genuine organisational thinking, and to be transparent about its use.

Does GDPR apply to charities using AI?

Yes. Article 22 of the UK GDPR provides specific protections against solely automated decision-making that has legal or similarly significant effects on individuals. The ICO's guidance on AI and data protection sets out detailed requirements for transparency, fairness, and accountability. Charities pasting beneficiary data into public AI tools risk breaching data protection obligations.

Should charity trustees worry about AI bias?

Yes. AI systems trained on historical data reproduce and amplify existing biases. In the charity context, where data reflects patterns of deprivation and inequality, this risk is acute. DataKind UK has warned that removing human intervention from automated services has proven harmful when supporting vulnerable people. The Charity Commission expects human oversight to prevent material errors.

What AI policy should a charity have?

The Fundraising Regulator recommends that charities publish AI policies covering which tools are used, how human oversight is maintained, and how data is protected. A useful policy covers permitted tools, data protection requirements, where human review is mandatory, how AI use is disclosed to beneficiaries and funders, and board accountability. The 2025 Charity Digital Skills Report found 48% of charities are developing an AI policy, up from 16% in 2024.

Are funders using AI to assess grant applications?

Some are beginning to. IVAR found that funders could use AI to triage applications, identify keywords, or cluster themes. However, over 40% of ACF members reported no plans to use AI, and the UK research funders' joint statement prohibits generative AI in peer review. The funder landscape remains divided.

What about digital exclusion?

Good Things Foundation data shows 7.9 million UK adults lack foundation-level digital skills, with the most affected being older adults, disabled people, and those without qualifications. When charities automate services, they must ensure alternative access routes exist. AI-driven efficiency that reduces access for vulnerable people is a net harm, not a gain.

Key sources and further reading

  • Charity Digital Skills Report 2025 — Zoe Amar Digital and partners, 2025. The annual barometer of digital and AI adoption across UK charities, covering 672 respondents. The primary source for sector-wide AI adoption statistics.

  • Charities and Artificial Intelligence — Charity Commission blog, April 2024. The regulator's position on AI use by charities, emphasising trustee responsibility and human oversight.

  • Guidance for Using Artificial Intelligence in Fundraising — Fundraising Regulator, December 2025. The first sector-specific AI regulatory guidance, covering accountability, transparency, and human oversight in fundraising.

  • AI and the Future of Funding Applications — IVAR, 2025. Analysis of how AI is changing grant applications, covering risks to authenticity, increased volumes, and implications for funders and applicants.

  • Guidance on AI and Data Protection — ICO, updated 2025. Comprehensive guidance on applying UK GDPR principles to AI systems, including requirements under Article 22 on automated decision-making.

  • Digital Nation — Good Things Foundation, updated July 2025. Data on digital exclusion across the UK, including the 7.9 million adults lacking foundation-level digital skills.

  • AI and the Third Sector: Catch-22? — DataKind UK. Analysis of the tension between AI's potential benefits for charities and the risks of applying it to decisions about vulnerable people.

  • Joint Statement on Use of AI Tools in Funding Applications — NIHR, Wellcome, and UK research funders, 2024. Agreed position that generative AI must not be used in peer review but may be used in application preparation if acknowledged.

  • AI Playbook for Charities — Make Sense Of It, February 2025. Practical guidance for UK charities beginning to adopt AI tools, covering responsible use and governance.

  • Public Authority Algorithmic and Automated Decision-Making Systems Bill — House of Lords debate, December 2024. Parliamentary consideration of transparency requirements for algorithmic decision-making in public services.

Researched and drafted with Pippin, Plinth's AI research tool. All statistics independently verified.