Ethical Considerations in AI Grantmaking
A practical guide to ethical AI in grantmaking covering governance, bias, transparency, data protection and human oversight for UK funders.
AI is reshaping how funders assess applications, monitor grants and report on impact. Used well, it reduces administrative burden, increases consistency and frees staff to focus on relationships with grantees. Used carelessly, it risks encoding bias, eroding trust and making opaque decisions about who receives funding and who does not. The ethical stakes are high because grantmaking decisions directly affect communities, and the power imbalance between funders and applicants means that harm can go unchallenged.
The Charity Digital Skills Report 2025 found that 76% of UK charities are now using AI tools, up from 61% in 2024 — yet only 16% of organisations have an AI policy fully in place. For funders, the gap between adoption and governance is a practical risk. A 2024 survey by the Center for Effective Philanthropy found that the chief barriers to responsible AI adoption in philanthropy include privacy and security concerns (55%), lack of necessary skills (43%) and uncertainty about relevant use cases (40%). These are not abstract problems. They are operational gaps that require clear policies, meaningful oversight and deliberate design choices.
This guide sets out what ethical AI in grantmaking actually looks like in practice: the governance structures, transparency standards, bias safeguards and legal requirements that every funder should have in place before deploying AI in funding decisions.
Why Does AI Ethics Matter for Funders?
AI ethics in grantmaking is not a theoretical concern — it is a fiduciary and reputational imperative. When a funder uses AI to triage applications, score eligibility or draft assessment summaries, the technology is influencing who receives money. If those processes contain hidden biases, produce unexplainable outputs or handle personal data carelessly, the funder is failing in its duty of care to both applicants and beneficiaries.
The Charity Commission's 2024 guidance on AI is clear: trustees remain responsible for decision-making, and charities may not be complying with their duties if they rely solely on AI-generated advice without undertaking reasonable independent checks. This applies equally to funders. The Commission expects human oversight to be in place to prevent material errors, particularly in how charities interact with beneficiaries and applicants.
There is also a power dynamic at play. As IVAR's work on open and trusting grantmaking has shown, the relationship between funders and applicants is inherently unequal. AI can either amplify that imbalance — by making decisions feel even more distant and impersonal — or reduce it, by standardising processes and making criteria more transparent. The ethical choices funders make about AI deployment determine which outcome they get.
The practical question is not whether to use AI. It is how to use it in a way that is fair, transparent, accountable and compliant with the law.
What Does Good AI Governance Look Like for Funders?
Good governance starts with policy. Before deploying any AI tool in grantmaking, funders need a written AI policy that covers the scope of AI use, the data it can access, the decisions it can influence and the safeguards in place. According to the ACF, only 18% of its foundation members are using AI in their day-to-day work, though a further 40% are exploring its possibilities. Those in the exploring phase have a window to establish governance before habits form.
A robust AI governance framework for funders should include:
- Scope and boundaries: Which processes use AI (e.g. eligibility screening, application summarisation, assessment support) and which are excluded (e.g. final funding decisions, appeals).
- Roles and responsibilities: Who is accountable for AI outputs — the grants officer, the programme lead, the board?
- Logging and audit: A record of what AI was asked, what it produced and what edits were made before any output was used in a decision (a simple record structure is sketched after this list).
- Review cycles: Regular (at least annual) reviews of AI performance, accuracy and fairness.
- Incident response: What happens when AI produces an error or a biased output, and how it is escalated and corrected.
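To make the logging and audit point above more concrete, the sketch below shows one way an AI audit record could be structured. It is a minimal, hypothetical example rather than a prescribed schema; the field names and values are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIAuditRecord:
    """One row in an AI audit log: what was asked, what came back, what changed."""
    grant_application_id: str      # internal reference for the application
    process: str                   # e.g. "eligibility_screening", "application_summary"
    prompt: str                    # the instruction and context given to the AI
    ai_output: str                 # the raw text the AI produced
    reviewer: str                  # the member of staff who reviewed the output
    final_text: str                # the text actually used after human review and edits
    used_in_decision: bool         # whether the reviewed output fed into a decision
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example entry: a grants officer edits an AI-drafted eligibility note before using it.
record = AIAuditRecord(
    grant_application_id="APP-2025-0142",
    process="eligibility_screening",
    prompt="Check this application against our published eligibility criteria.",
    ai_output="Meets criteria 1-4; criterion 5 (income under £1m) unclear.",
    reviewer="grants.officer@example.org",
    final_text="Meets all five criteria; latest accounts confirm income of £640k.",
    used_in_decision=True,
)
```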
The Charity Commission recommends that all charities consider whether having an internal AI policy would be beneficial so that it is clear how and when AI can be used in governance, by employees and in delivering services. For funders, this is not optional — it is a baseline requirement for responsible practice.
How Can Funders Prevent Bias in AI-Assisted Decisions?
Bias in AI-assisted grantmaking can originate from several sources: the training data an AI model was built on, the prompts and criteria a funder provides, the structure of application forms and the historical patterns embedded in past funding decisions. If a funder has historically favoured larger, more established organisations — perhaps because they write more polished applications — an AI system trained on or prompted with that history may replicate the same pattern.
Preventing bias requires active, ongoing work rather than a one-off check. Practical steps include:
- Structured scoring criteria: Define what a strong application looks like before AI is involved, so the technology is assessing against clear, published standards rather than learning from subjective preferences.
- Demographic monitoring: Track acceptance rates by organisation size, geography, ethnic leadership and thematic area. If patterns emerge that do not align with the fund's stated priorities, investigate (a basic monitoring sketch follows this list).
- Prompt review: Regularly audit the instructions and context given to AI systems. Vague or leading prompts can introduce bias that is invisible to the end user.
- Diverse testing: Before deploying AI tools in live assessments, test them against a representative sample of past applications — including successful and unsuccessful ones from different types of applicants.
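As a minimal illustration of the demographic monitoring step, the sketch below computes acceptance rates by a chosen characteristic and flags groups that fall well below the overall rate. The field names and the 15-percentage-point tolerance are illustrative assumptions, not recommended values.

```python
from collections import defaultdict

def acceptance_rates(applications, group_key):
    """Return the funded rate per group, e.g. by organisation size band or region."""
    counts = defaultdict(lambda: {"applied": 0, "funded": 0})
    for app in applications:
        group = app[group_key]
        counts[group]["applied"] += 1
        counts[group]["funded"] += int(app["funded"])
    return {g: c["funded"] / c["applied"] for g, c in counts.items()}

def flag_disparities(applications, group_key, tolerance=0.15):
    """Flag groups whose acceptance rate is more than `tolerance` below the overall rate."""
    overall = sum(a["funded"] for a in applications) / len(applications)
    rates = acceptance_rates(applications, group_key)
    return {g: r for g, r in rates.items() if overall - r > tolerance}

# Illustrative data: each record is one decided application.
decided = [
    {"org_size": "small", "region": "North East", "funded": False},
    {"org_size": "small", "region": "North East", "funded": True},
    {"org_size": "large", "region": "London", "funded": True},
    {"org_size": "large", "region": "London", "funded": True},
]
print(flag_disparities(decided, "org_size"))   # {'small': 0.5}: well below the 0.75 overall rate
```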
Research published by Oxford Academic in 2024 on algorithmic discrimination in public service provision found that citizens hold organisations responsible for biased outcomes regardless of whether the bias originated from a human or an algorithm. For funders, this means that "the AI did it" is not a defence. The funder is accountable for every output used in its processes.
Tools like Plinth address this by keeping AI in an advisory role — the platform's AI assistant, Pippin, generates assessment suggestions with explicit justifications for each answer, which grants officers review, edit and approve before any decision is recorded. The AI does not make decisions; it prepares structured drafts that humans evaluate.
What Are the Legal Requirements for AI in UK Grantmaking?
UK funders using AI to process personal data in grantmaking face specific legal obligations under the UK GDPR and the Data Protection Act 2018. The ICO's guidance on AI and data protection makes clear that in the vast majority of cases, the use of AI will trigger the legal requirement to undertake a Data Protection Impact Assessment (DPIA).
The key legal requirements are:
| Requirement | What It Means for Funders | Source |
|---|---|---|
| DPIA before deployment | Mandatory assessment of privacy risks whenever AI processes personal data in grant decisions | UK GDPR Article 35 |
| Lawful basis for processing | A clear legal basis (usually legitimate interests or consent) for using applicant data in AI systems | UK GDPR Article 6 |
| Article 22 protections | If AI makes decisions with legal or similarly significant effects, individuals have the right to human intervention | UK GDPR Article 22 |
| Transparency obligation | Applicants must be told if their data is used in automated decision-making, including the logic involved | UK GDPR Articles 13-14 |
| Data minimisation | Only the personal data that is necessary and proportionate should be processed by AI systems | UK GDPR Article 5 |
| ICO consultation | If a DPIA identifies high risk that cannot be mitigated, the funder must consult the ICO before proceeding | UK GDPR Article 36 |
Article 22 is particularly relevant for funders. It provides additional protections where solely automated decision-making has a legal or similarly significant effect on individuals. While most grant decisions involve human review (making them not "solely automated"), funders must ensure this human involvement is meaningful rather than a rubber stamp. If a grants officer simply approves every AI recommendation without substantive review, the process may effectively be solely automated in practice.
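One practical way to check that human involvement is meaningful rather than a rubber stamp is to monitor how often reviewers actually change or reject AI recommendations. The sketch below is an assumption-laden illustration: it works from a hypothetical review log and flags reviewers who approve almost everything unchanged; the thresholds are placeholders a funder would set for itself.

```python
from collections import defaultdict

def intervention_rates(review_log):
    """Share of AI outputs each reviewer edited or rejected rather than accepting as-is."""
    totals = defaultdict(int)
    interventions = defaultdict(int)
    for entry in review_log:                      # entry: {"reviewer": ..., "action": ...}
        totals[entry["reviewer"]] += 1
        if entry["action"] in ("edited", "rejected"):
            interventions[entry["reviewer"]] += 1
    return {r: interventions[r] / totals[r] for r in totals}

def flag_rubber_stamping(review_log, minimum_rate=0.05, minimum_reviews=20):
    """Flag reviewers who almost never intervene across a meaningful number of reviews."""
    counts = defaultdict(int)
    for entry in review_log:
        counts[entry["reviewer"]] += 1
    rates = intervention_rates(review_log)
    return [r for r, rate in rates.items()
            if counts[r] >= minimum_reviews and rate < minimum_rate]
```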
The EU AI Act, which began its phased implementation in February 2025, classifies AI systems used to evaluate eligibility for public assistance benefits and services as high-risk. While the Act does not directly apply to UK foundations post-Brexit, it sets an international benchmark that UK funders with European operations or ambitions should monitor. The full compliance framework for high-risk AI systems takes effect in August 2026.
How Should Funders Communicate AI Use to Applicants?
Transparency about AI use is both a legal obligation and a trust-building practice. Applicants have a right to know whether AI is involved in assessing their applications, and funders have a duty to explain what the technology does and does not do.
Effective transparency involves three layers:
Before the application: Publish a clear statement explaining which parts of the process involve AI. For example: "We use AI to help summarise applications and check eligibility against our published criteria. All funding decisions are made by our grants team, with AI used as a support tool only." The Foundation Practice Rating, which assesses UK foundations on transparency, provides a useful benchmark for what good disclosure looks like.
During assessment: Where AI generates a summary, score or recommendation, the applicant-facing output should indicate this. If an applicant receives feedback that references AI-generated analysis, that should be stated explicitly.
After the decision: Applicants who are declined should have access to a meaningful explanation of why. If AI was used in the assessment, the funder should be able to explain what the AI considered and how the human reviewer reached their final decision. This is not just good practice — the UK GDPR requires that individuals understand the logic of automated processing that affects them.
Over 140 trusts and foundations are now working with IVAR's commitments for open and trusting grantmaking, which emphasise reducing power imbalances between funders and applicants. Being open about AI use is a natural extension of this commitment. Opacity about technology risks undermining the trust that the sector has been working to build.
What Does Human-in-the-Loop Mean in Practice?
Human-in-the-loop is the principle that AI should assist and inform, but never replace, human judgement in funding decisions. In practice, this means that every AI output used in a grantmaking decision must be reviewed, and potentially edited, by a qualified person before it has any effect.
This is more than a compliance checkbox. Meaningful human oversight requires:
- Sufficient time: Reviewers need enough time to actually read and evaluate AI outputs, not just click "approve" under time pressure.
- Relevant expertise: The person reviewing AI outputs should understand the funding criteria, the applicant context and the limitations of the technology.
- Authority to override: Reviewers must be empowered to disagree with, edit or reject AI recommendations without friction or penalty.
- Documented edits: Changes made to AI outputs should be logged, creating a clear record of where human judgement diverged from the AI suggestion.
In Plinth's grant management workflow, this is built into the process. The AI assistant generates draft assessment answers with justifications for each response. Grants officers can accept, reject or edit each answer individually, and the platform records these decisions. The AI never submits an assessment — only the human reviewer can do that. The verbosity of AI justifications can be configured from brief single-sentence explanations to comprehensive multi-sentence rationales, giving funders control over the level of detail the AI provides.
For a deeper exploration of this principle, see our guide to human-in-the-loop grantmaking.
How Should Funders Handle Data Protection in AI Systems?
Data protection in AI-powered grantmaking goes beyond standard GDPR compliance. The volume of personal data processed, the sensitivity of the information (which may include financial details, organisational leadership and beneficiary demographics) and the consequential nature of funding decisions all demand elevated safeguards.
Practical data protection measures for funders using AI include:
- Data minimisation by design: Only pass the data that is genuinely needed for the AI task. If AI is summarising an application, it does not need the applicant's bank details (a minimal example follows this list).
- Purpose limitation: Data collected for grant assessment should not be repurposed for other AI applications without a separate lawful basis.
- Access controls: Limit who can access AI-processed data. Role-based permissions ensure that only authorised staff see AI outputs related to their grants.
- Data processing agreements: Where AI tools are provided by third parties, ensure contracts specify where data is processed, how it is stored and what happens to it after use.
- Retention policies: Define how long AI-processed data and AI outputs are retained, and ensure they are deleted in line with your data retention schedule.
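To illustrate data minimisation by design, the sketch below strips an application record down to an approved set of fields before it is passed to any AI tool. The field names and allow-list are hypothetical; the point is that exclusion is the default.

```python
# Fields an AI summarisation step is allowed to see; everything else is excluded by default.
SUMMARY_FIELDS = {
    "project_title",
    "project_description",
    "requested_amount",
    "outcomes",
}

def minimise_for_ai(application: dict, allowed_fields: set = SUMMARY_FIELDS) -> dict:
    """Return only the fields the AI task genuinely needs, dropping everything else."""
    return {k: v for k, v in application.items() if k in allowed_fields}

application = {
    "project_title": "Community kitchen expansion",
    "project_description": "Extend weekday meal provision to weekends.",
    "requested_amount": 12000,
    "outcomes": "400 additional meals per month",
    "bank_account_number": "00000000",     # never needed for summarisation
    "trustee_home_addresses": ["..."],     # never needed for summarisation
}
ai_input = minimise_for_ai(application)    # contains only the four allowed fields
```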
The ICO requires organisations to document their approach to data protection when using AI, and to be able to demonstrate compliance. This means maintaining records of processing activities that specifically address AI use, not just treating it as part of general operations.
For more detail on technical controls, see our guide to data security in AI-powered grant systems.
How Do Ethical AI Practices Affect Equity and Inclusion?
The design choices funders make about AI have direct implications for equity. Application processes that rely on polished written English, for example, may disadvantage organisations led by non-native English speakers, smaller grassroots groups without dedicated bid writers, or neurodivergent applicants. AI can either compound these barriers or reduce them — depending on how it is deployed.
AI-assisted approaches that improve equity include:
- Flexible input formats: Accepting voice recordings, video submissions or simplified forms alongside traditional written applications. AI can then structure and standardise these inputs for assessment, levelling the playing field between organisations with different resources and capabilities.
- Consistent assessment: AI that scores against published criteria treats every application the same way, reducing the risk of unconscious bias that can affect human reviewers — such as being influenced by an applicant's writing style rather than the substance of their proposal.
- Reduced application burden: The ACF has noted that the rise of AI-assisted grant writing — with one in five charities already using AI to research or write funding applications — has implications for how funders assess applications. Ethical funders should design processes that focus on the quality of the proposed work, not the quality of the writing.
- Proactive monitoring: Regularly reviewing who applies, who is funded and who is not, broken down by geography, organisation size, ethnic leadership and other relevant characteristics.
However, AI can also harm equity if it is poorly implemented. Models trained predominantly on data from large, well-resourced organisations may systematically undervalue applications from smaller or newer groups. Funders must actively test for and correct these patterns.
For a broader discussion of equity in AI-assisted grantmaking, see our guide on AI vs human bias in grant decisions.
Building an Ethical AI Policy: A Step-by-Step Approach
Creating an AI ethics policy does not require a technology team or a six-month consultation. It requires honest assessment, clear decisions and a commitment to iteration. The Charity Digital Skills Report 2025 found that the proportion of charities developing an AI policy tripled in a single year, from 16% to 48%. Funders should be leading this trend, not following it.
A practical approach to building an ethical AI policy:
Step 1 — Audit current use. Map where AI is already being used in your grantmaking, including informal use by individual staff (such as using ChatGPT to draft assessments). Many organisations discover AI is already in use before any policy exists.
Step 2 — Define scope and boundaries. Decide which processes AI may assist with and which it may not. Be specific: "AI may summarise applications and suggest eligibility assessments but may not generate final funding recommendations."
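One way to make the scope decision auditable is to record it in a small, machine-readable register alongside the written policy, so that any AI-assisted step can be checked against it. The structure below is purely illustrative, using the example processes and exclusions named in Step 2.

```python
# A hypothetical AI use register: which grantmaking processes AI may assist with,
# and which are explicitly excluded.
AI_USE_REGISTER = {
    "allowed": {
        "application_summarisation",
        "eligibility_screening_draft",
        "assessment_support",
    },
    "excluded": {
        "final_funding_recommendation",
        "appeals",
    },
}

def ai_permitted(process: str) -> bool:
    """Return True only if the process is explicitly on the allowed list."""
    if process in AI_USE_REGISTER["excluded"]:
        return False
    return process in AI_USE_REGISTER["allowed"]

assert ai_permitted("application_summarisation") is True
assert ai_permitted("final_funding_recommendation") is False
assert ai_permitted("some_new_process") is False   # not listed, so not permitted by default
```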
Step 3 — Establish human oversight requirements. For each AI-assisted process, define who reviews the output, what training they need and how disagreements with AI are recorded.
Step 4 — Complete a DPIA. The ICO's DPIA template provides a structured framework for assessing privacy risks. This is a legal requirement, not an optional extra.
Step 5 — Draft applicant-facing statements. Write clear, plain-language explanations of how AI is used in your process. Test these with applicants to check they are genuinely understandable.
Step 6 — Set review cycles. Commit to reviewing AI performance and fairness at least annually, with the results reported to trustees or the board.
Step 7 — Publish and iterate. Share your policy publicly where appropriate. Treat it as a living document that evolves as your AI use matures and as the regulatory landscape develops.
Platforms like Plinth support this approach by providing structured AI workflows with built-in review steps, configurable assessment criteria and recorded decision trails — making it easier to demonstrate compliance and accountability. Plinth offers a free tier, so funders can explore these capabilities without budget commitment.
Comparing Ethical AI Approaches in Grantmaking
Different funders will adopt AI at different levels of depth. The table below compares three common approaches and their ethical implications.
| Dimension | No AI (Manual Only) | AI-Assisted (Human Reviews AI) | AI-Led (Minimal Human Review) |
|---|---|---|---|
| Consistency | Variable — depends on reviewer fatigue and workload | High — AI applies criteria uniformly, humans check for context | High, but blind spots go undetected |
| Bias risk | Unconscious human bias; varies by reviewer | Reduced by standardisation; monitored through audits | Encoded bias may be systematic and harder to spot |
| Transparency | Easy to explain; no technology to disclose | Moderate — requires clear communication about AI role | Low — applicants may not understand how decisions were reached |
| GDPR compliance | Standard data protection applies | DPIA required; Article 22 safeguards if automated elements present | Full Article 22 compliance required; ICO consultation likely needed |
| Scalability | Limited by staff capacity | High — AI handles volume; staff focus on judgement | Very high, but at ethical cost |
| Applicant trust | High if feedback is personal | High if AI role is communicated openly | Likely low — impersonal and opaque |
| Recommended for | Small funders with few applications | Most funders — balances efficiency with accountability | Not recommended for funding decisions |
The AI-assisted model, where humans review and approve AI outputs, is the approach that best balances efficiency, fairness and compliance. It is the model used by Plinth and recommended by the Charity Commission's guidance on trustee decision-making.
FAQs
Can AI make final grant funding decisions?
No. The Charity Commission expects trustees and staff to remain responsible for funding decisions. AI can support the process — summarising applications, checking eligibility and drafting assessments — but the final decision must be made by a qualified human who has reviewed the evidence. This is also a requirement under UK GDPR Article 22 where decisions have significant effects on individuals.
Do we need a Data Protection Impact Assessment to use AI in grantmaking?
In almost all cases, yes. The ICO's guidance states that the use of AI will trigger the DPIA requirement in the vast majority of cases, particularly where personal data is being processed to inform decisions about individuals or organisations. The DPIA should be completed before the AI tool is deployed, not after.
How do we tell applicants we are using AI?
Be direct and specific. Publish a statement on your application portal and in your guidance documents explaining what AI does in your process. For example: "We use AI to help our team summarise applications and check them against our published eligibility criteria. All funding decisions are made by our grants team." Avoid vague language like "we may use technology to support our processes."
What if our AI produces a biased outcome?
Treat it as you would any process failure. Investigate the cause, correct the affected decisions where possible, update the AI prompts or configuration to prevent recurrence and document the incident. If the bias affected funding decisions, consider whether applicants need to be notified and whether the affected applications should be reassessed.
Is it ethical to use AI to screen out ineligible applications?
Yes, provided the eligibility criteria are clearly published, the AI screening is tested for accuracy and there is a mechanism for applicants to challenge an incorrect screening decision. AI eligibility screening can actually improve fairness by applying criteria consistently, but it must be regularly audited against manual checks to ensure accuracy.
Do we need to disclose which AI tools we use?
There is no legal requirement to name specific tools, but transparency about the nature and scope of AI use is required under UK GDPR. Best practice is to describe what the AI does rather than naming the product — applicants care about how their data is used and how decisions are made, not which vendor you chose.
How often should we review our AI ethics policy?
At minimum, annually. In practice, review whenever you change AI tools, expand AI into new processes or receive feedback suggesting the current approach is not working. The regulatory landscape is also evolving — the EU AI Act's high-risk provisions take full effect in August 2026 — so policies need to keep pace with legal developments.
Can small funders afford to implement ethical AI governance?
Yes. Ethical AI governance does not require expensive consultants or dedicated technology teams. It requires clear policy decisions, published statements, regular reviews and meaningful human oversight. Platforms like Plinth provide built-in governance features — structured AI workflows, review steps and decision logging — with a free tier available for smaller funders.
Recommended Next Pages
- Human-in-the-Loop Grantmaking: Why It Matters — How to balance AI automation with human judgement in funding decisions
- AI vs Human Bias in Grant Decisions — Practical steps to make AI-assisted processes fairer
- Data Security in AI-Powered Grant Systems — Encryption, access controls and trust measures for funders using AI
- How to Build Transparency into Grant Decisions — Communicating choices clearly with published criteria and audit trails
- AI for Grantmakers: Opportunities and Risks — A broader look at how AI is changing the funding landscape
Last updated: February 2026