AI vs Human Bias in Grant Decisions

How bias shapes grant funding outcomes, where AI helps and where it introduces new risks, and practical steps funders can take to make decisions fairer.

By Plinth Team

Grant funding is supposed to be allocated on merit. In practice, decades of research show that who receives funding is shaped by factors that have nothing to do with the quality of the proposed work. A landmark study published in Science found that applications from white principal investigators to the US National Institutes of Health were 1.7 times more likely to be funded than those from Black researchers — a gap that persisted across follow-up studies over more than a decade (Ginther et al., 2011; Hoppe et al., 2019). In the UK, the Charity Commission reported in 2024 that 92% of charity trustees are white, while just 8% come from ethnic minority backgrounds — compared to 17% of the general population. When the people making funding decisions do not reflect the communities they serve, bias is structurally embedded before a single application is even read.

The arrival of AI in grantmaking promises to help. Automated scoring can apply criteria consistently, flag conflicts of interest, and process hundreds of applications without fatigue. But AI is not neutral. Models trained on historical data can encode the very biases they are meant to correct, and opaque algorithms can make it harder — not easier — to identify where unfairness enters the process.

This guide examines where bias appears in grant decisions, what AI can realistically do about it, and the practical safeguards that funders need whether or not they use technology.

Where Does Human Bias Enter Grant Decisions?

Human bias in grantmaking is not a matter of individual prejudice — it is a systemic issue driven by the way review processes are designed and who participates in them. Research on peer review in funding contexts has identified several recurring patterns.

Familiarity and prestige bias. Reviewers tend to score applications more favourably when they recognise the applicant organisation, its leadership, or its institutional affiliations. The US National Institutes of Health considered revamping its scoring process in 2024 specifically to reduce "reputational bias" — the tendency for reviewers to be influenced by an applicant's track record rather than the proposed work itself (Science, 2024). In UK philanthropy, smaller and newer charities face a similar disadvantage: funders gravitate toward organisations they have funded before, even when the evidence base for their work is no stronger than that of unfamiliar applicants.

Subjective interpretation of criteria. When assessment frameworks leave room for interpretation — what counts as "excellent" impact evidence, or "strong" community engagement — reviewers fill the gap with their own assumptions. The EDI Caucus report on peer review bias noted that "subjective interpretation of scoring criteria" is one of the most commonly identified sources of bias in funding processes (EDI Caucus, 2024).

Reviewer demographics. The composition of assessment panels matters. One study found that only 2.4% of NIH grant reviewers were Black, despite Black researchers making up a larger share of applicants. In the UK, the underrepresentation of younger people (just 8% of trustees are under 44, according to the Charity Commission) and people from ethnic minority backgrounds on foundation boards means that the people making decisions often have limited lived experience of the communities being served.

Fatigue and order effects. Reviewers reading dozens of applications in sequence are subject to cognitive fatigue. Research consistently shows that applications reviewed later in a session or batch receive less favourable scores, regardless of quality.

What Kinds of Bias Can AI Introduce?

AI does not eliminate bias — it changes where and how it appears. Understanding the specific risks is essential for any funder considering AI-assisted decision-making.

Training data bias. If an AI model learns from historical grant decisions, it learns the patterns of those decisions — including their biases. If applications from organisations in deprived areas were historically less likely to be funded, the model may replicate that pattern. This is the most well-documented risk in algorithmic decision-making, and it applies directly to grantmaking contexts.

Proxy discrimination. Even when demographic data is removed, AI models can use proxy variables — postcode, writing style, vocabulary complexity — to effectively reconstruct protected characteristics. An application written in plain English by a grassroots community group may score differently from one written in the polished language of a well-resourced charity, even if the underlying work is equally strong.
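
To make this concrete, the sketch below shows one way to test for proxy signal: if the features a scoring model sees (postcode area, readability, and so on) can predict a protected characteristic that was deliberately excluded, any model trained on those features can lean on them as a stand-in for it. The file and column names here are illustrative assumptions, not drawn from any particular system.

```python
# Illustrative proxy-detection check; the file and column names are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

apps = pd.read_csv("applications.csv")

# Features the scoring model actually sees (demographic fields already removed).
features = pd.get_dummies(
    apps[["postcode_area", "readability_score", "annual_income_band"]],
    columns=["postcode_area", "annual_income_band"],
)

# If these features predict the protected characteristic much better than chance
# (AUC well above 0.5), they can act as proxies for it inside any scoring model.
auc = cross_val_score(
    LogisticRegression(max_iter=1000),
    features,
    apps["minority_led"],   # held out of the scoring model, used only for this audit
    scoring="roc_auc",
    cv=5,
).mean()

print(f"Proxy signal (mean AUC, 0.5 = none): {auc:.2f}")
```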

Opacity and explainability. When a human reviewer scores an application, they can (in principle) be asked to explain their reasoning. Many AI models, particularly large language models, produce outputs that are difficult to audit or explain. If an AI-assisted assessment tool recommends rejection, the funder needs to understand why — and so does the applicant.

Automation complacency. Research on human-AI interaction consistently shows that people tend to defer to automated recommendations, particularly when they are under time pressure. If a funder uses AI to pre-screen applications or generate draft assessment scores, reviewers may anchor on those outputs rather than conducting genuinely independent review.

How Do Human and AI Bias Compare in Practice?

The comparison between human and AI bias is not straightforward, because they operate differently and are measurable in different ways. The following table summarises the key distinctions.

| Dimension | Human Bias | AI Bias |
| --- | --- | --- |
| Consistency | Varies between reviewers and over time; affected by fatigue, mood, and order effects | Highly consistent for identical inputs; the same application always receives the same score |
| Source | Personal experience, cultural assumptions, familiarity, institutional prestige | Training data, feature selection, proxy variables, model architecture |
| Detectability | Difficult to measure in real time; requires post-hoc analysis of outcomes | Can be tested systematically before deployment using audit datasets |
| Transparency | Reviewers can explain their reasoning (though explanations may be post-hoc rationalisations) | Many models are opaque; explanations require additional tooling |
| Scalability | Bias may increase with volume as fatigue sets in | Bias is constant regardless of volume |
| Adaptability | Can respond to training, calibration, and feedback | Requires retraining or model updates to correct identified biases |
| Accountability | Individuals can be held responsible for decisions | Responsibility is diffused across developers, deployers, and users |

A simulation study published in Research Policy found that even a small bias in peer review scores — as little as 1.9% of the total score — can produce statistically detectable differences in funding rates between groups. At a bias level of 2.8%, the disparities become significant enough to shift real money between preferred and non-preferred groups (Day, 2015). AI systems can introduce equivalent or larger biases silently, which is why systematic auditing matters.
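
As a rough illustration of that finding (not a reconstruction of Day's model), the simulation below applies a small penalty to one group's scores and funds the top 20% of all applications; even a penalty of a couple of per cent of the total score opens a visible gap in funding rates.

```python
# Rough illustration only: two equally strong applicant groups, a small score
# penalty for one of them, and a fixed share of applications funded.
import numpy as np

rng = np.random.default_rng(42)
n_per_group, share_funded, max_score = 5_000, 0.20, 100

for bias_pct in (0.0, 1.9, 2.8):                 # penalty as % of the total score
    group_a = rng.normal(60, 10, n_per_group)
    group_b = rng.normal(60, 10, n_per_group) - max_score * bias_pct / 100
    cutoff = np.quantile(np.concatenate([group_a, group_b]), 1 - share_funded)
    print(
        f"bias {bias_pct:>3}%: "
        f"group A funded {(group_a >= cutoff).mean():.1%}, "
        f"group B funded {(group_b >= cutoff).mean():.1%}"
    )
```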

What Practical Steps Reduce Human Bias in Grant Assessment?

The good news is that well-evidenced interventions exist to reduce human bias in grant decisions, and none of them require AI. These should be the foundation of any fairness strategy.

Structured scoring frameworks. Replace open-ended narrative assessments with explicit criteria and scoring scales. Each criterion should have defined anchor points — for example, what a score of 3 versus 5 looks like — so that reviewers are evaluating against a shared standard rather than their own internal benchmarks. This is the single most impactful change a funder can make to reduce subjective drift.
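
One lightweight way to make those anchors explicit is to encode them alongside the criteria themselves, so every reviewer scores against the same definitions. The criteria, weights, and anchor wording below are examples only, not a recommended framework.

```python
# Example encoding of a structured scoring framework with defined anchor points.
SCORING_FRAMEWORK = {
    "impact_evidence": {
        "weight": 0.4,
        "anchors": {
            1: "No evidence of impact beyond anecdote",
            3: "Some outcome data, but not linked to the proposed work",
            5: "Robust outcome data directly relevant to the proposal",
        },
    },
    "community_engagement": {
        "weight": 0.3,
        "anchors": {
            1: "Community not consulted",
            3: "Consultation carried out, findings partially reflected",
            5: "Co-designed with the community it serves",
        },
    },
    "deliverability": {
        "weight": 0.3,
        "anchors": {
            1: "No delivery plan or budget",
            3: "Plan and budget present but with significant gaps",
            5: "Credible plan, realistic budget, named responsibilities",
        },
    },
}

def weighted_total(scores: dict[str, int]) -> float:
    """Combine per-criterion scores (1-5) into a weighted total out of 5."""
    return sum(SCORING_FRAMEWORK[c]["weight"] * s for c, s in scores.items())

print(weighted_total({"impact_evidence": 4, "community_engagement": 3, "deliverability": 5}))
```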

Calibration sessions. Before a review round begins, have all reviewers score the same sample application independently, then discuss their scores as a group. This surfaces hidden assumptions about what the criteria mean and brings reviewers into alignment. The Canadian Institutes of Health Research introduced reviewer training on unconscious bias and saw gender disparities in success rates disappear in the following grant cycle.

Blind or partially blind review. Removing applicant names, logos, and organisational identifiers from applications during initial assessment reduces the influence of familiarity and prestige bias. Where full blinding is impractical — for example, when the funder needs to know the applicant's geographic area — partial blinding still helps. IVAR's open and trusting grantmaking framework emphasises shifting power dynamics between funders and grantees, and blind review is one mechanism for doing so.

Diverse panels. Ensure assessment panels include people with different professional backgrounds, demographic characteristics, and lived experiences relevant to the fund's purpose. This does not guarantee unbiased decisions, but it reduces the risk that a single perspective dominates.

Conflict of interest protocols. Require reviewers to declare conflicts before seeing applications, and remove them from decisions where conflicts exist. Log all declarations as part of the audit trail.

How Can AI Be Used Responsibly to Support Fairer Decisions?

When deployed carefully, AI can complement human review in ways that genuinely reduce bias — but only if specific safeguards are in place.

Consistency checking. AI can flag when different reviewers have scored the same application very differently, or when a reviewer's scores across a batch show unusual patterns (for example, consistently lower scores for applications from a particular region). This does not replace human judgement — it highlights where human judgement may need a second look.
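
A sketch of what such a check might look like in practice, assuming a simple export of reviewer scores with one row per application-reviewer pair (the file and column names are illustrative):

```python
# Illustrative consistency checks over a hypothetical review_scores.csv with
# columns: application_id, reviewer, region, score (1-5).
import pandas as pd

scores = pd.read_csv("review_scores.csv")

# 1. Applications where reviewers disagree strongly (score range of 2+ points).
spread = scores.groupby("application_id")["score"].agg(["min", "max"])
divergent = spread[spread["max"] - spread["min"] >= 2]
print("Applications to bring to a moderation discussion:\n", divergent)

# 2. Reviewer/region combinations where a reviewer scores well below their own average.
scores["deviation"] = scores["score"] - scores.groupby("reviewer")["score"].transform("mean")
by_region = scores.groupby(["reviewer", "region"])["deviation"].mean()
print("Patterns worth a second look:\n", by_region[by_region <= -1.0])
```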

Standardised due diligence. AI can apply the same checks to every application — verifying charity registration, reviewing governance documents, checking financial accounts — without the shortcuts or assumptions that a human reviewer under time pressure might make. Tools like Plinth use AI to conduct structured due diligence reviews, examining governance documents, safeguarding policies, accounts, and equality policies against a consistent set of criteria. Because the AI applies the same prompts and checks to every applicant, it eliminates the variability that comes from different reviewers applying different levels of scrutiny.

Pattern analysis across rejected applications. Understanding why applications are rejected is as important as understanding why they are funded. AI can analyse rejection patterns across a fund to identify whether certain types of organisations — by size, geography, or thematic focus — are disproportionately unsuccessful. Plinth's rejection analytics dashboard does exactly this: it examines assessment scores, feedback, and application summaries across all rejected applications to surface common themes, identify applicant profiles, and generate recommendations for improving application guidance or eligibility criteria.

Draft assessment with human override. AI can generate initial assessment scores against structured criteria, giving human reviewers a starting point rather than a blank page. The critical safeguard is that reviewers must be able to — and be expected to — override, adjust, and supplement AI-generated scores with their own judgement. Plinth's AI assessment feature (called Pippin) generates suggested answers to each question on a funder's assessment form, with a justification for each. Reviewers can accept, reject, or modify each answer individually, maintaining full control over the final decision.

What Does an AI Audit Framework for Grant Decisions Look Like?

Any funder using AI in its decision-making process needs an audit framework. This is not optional — it is the mechanism by which the funder demonstrates that its process is fair, and by which problems are identified early enough to be corrected.

Pre-deployment testing. Before using an AI tool in live decisions, test it against a diverse set of historical applications where the outcomes are known. Check whether the tool's recommendations correlate with protected characteristics, geography, organisational size, or other factors that should not influence decisions. If they do, investigate and address the causes before proceeding.
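
In practice, that test can be as simple as running the tool over a labelled audit set and checking whether its recommendation rates differ by group. A minimal sketch, with illustrative file and column names:

```python
# Illustrative pre-deployment audit; the file and column names are assumptions.
import pandas as pd
from scipy.stats import chi2_contingency

audit = pd.read_csv("audit_set_with_ai_recommendations.csv")
# columns: application_id, group, ai_recommend (True/False)

print("Recommendation rate by group:")
print(audit.groupby("group")["ai_recommend"].mean())

# A small p-value suggests the tool's recommendations are associated with group
# membership, and the causes should be investigated before going live.
table = pd.crosstab(audit["group"], audit["ai_recommend"])
_, p_value, _, _ = chi2_contingency(table)
print(f"Chi-square p-value: {p_value:.4f}")
```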

Ongoing outcome monitoring. After deployment, track acceptance and rejection rates by demographic group, geography, theme, organisational size, and any other relevant dimension. Compare these to your fund's stated priorities and to the applicant pool. If 40% of your applicants are from the North of England but only 20% of awards go there, that is a signal worth investigating.
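
The same comparison can run after every grant round. The sketch below compares each region's share of awards with its share of the applicant pool, which is exactly the 40%-of-applicants-versus-20%-of-awards signal described above (the file and column names are illustrative):

```python
# Illustrative outcome monitoring over a hypothetical decisions.csv with
# columns: application_id, region, awarded (True/False).
import pandas as pd

decisions = pd.read_csv("decisions.csv")

applicant_share = decisions["region"].value_counts(normalize=True)
award_share = decisions.loc[decisions["awarded"], "region"].value_counts(normalize=True)

report = pd.DataFrame({
    "applicant_share": applicant_share,
    "award_share": award_share,
}).fillna(0)
report["gap"] = report["award_share"] - report["applicant_share"]

# Large negative gaps (many applicants, few awards) are the signals to investigate.
print(report.sort_values("gap"))
```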

Explainability requirements. For every AI-assisted recommendation, the system should be able to produce an explanation that a non-technical reviewer can understand. This is not just good practice — it is increasingly likely to become a regulatory requirement as AI governance frameworks mature in the UK and EU.

Human-in-the-loop for all final decisions. No AI system should make the final funding decision. AI can inform, flag, summarise, and recommend — but the decision to fund or reject must always rest with a human who has reviewed the application and can be accountable for the outcome.

Periodic bias audits. At least annually, conduct a formal review of your AI-assisted process. This should include statistical analysis of outcomes, qualitative review of edge cases and appeals, and engagement with applicants about their experience of the process. Publish the findings, even when they are uncomfortable. The ACF has advocated for funders to complete annual racial justice audits of their grants, and AI-assisted processes should be subject to equivalent scrutiny.

Can Randomisation Reduce Bias Where Assessment Cannot?

An emerging approach to the problem of bias in grantmaking is partial randomisation — using a lottery to allocate funding among applications that meet a quality threshold. This may sound counterintuitive, but the logic is straightforward: if assessment panels cannot reliably distinguish between applications in the middle band of quality, and if that inability introduces bias, then randomisation may produce fairer outcomes than human or algorithmic judgement alone.
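
Mechanically, partial randomisation is simple. The sketch below assumes applications have already been screened for eligibility and scored, then draws awardees at random from everything above the quality threshold; recording the seed keeps the draw auditable. The thresholds and data are examples only.

```python
# Illustrative partial-randomisation draw; thresholds and sample data are examples.
import random

def lottery_allocation(applications, quality_threshold, awards_available, seed=None):
    """Randomly select awardees from applications that clear the quality bar."""
    eligible = [a for a in applications if a["score"] >= quality_threshold]
    if len(eligible) <= awards_available:
        return eligible                       # everyone above the bar is funded
    rng = random.Random(seed)                 # record the seed for the audit trail
    return rng.sample(eligible, awards_available)

# Example: three awards drawn from whoever scores at least 3.5 out of 5.
apps = [{"id": f"APP-{i}", "score": s} for i, s in enumerate([4.2, 3.6, 2.9, 4.8, 3.5, 3.1])]
print(lottery_allocation(apps, quality_threshold=3.5, awards_available=3, seed=2025))
```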

Nesta has explored this approach, noting that randomisation removes bias in the part of the process where "differences between quality are marginal" and where decisions are otherwise "influenced by prejudice." The New Zealand Health Research Council has distributed some of its Explorer Grants by lottery, and InnovateUK has used lotteries for small grant schemes.

In the UK, a small grants scheme reported in Nature has experimented with partial randomisation, with early results suggesting it boosted diversity among recipients (Nature, 2025). The approach works best for small grants where the cost of detailed assessment is high relative to the award amount — precisely the type of grantmaking where bias is hardest to detect and most consequential for small organisations.

Randomisation is not a replacement for assessment. It is one tool in a broader fairness strategy, most effective when combined with clear eligibility screening and minimum quality thresholds. UKRI reported that its award rate fell below 20% in 2024 as application volumes doubled over seven years (Nature, 2025), which means that the "middle band" of meritorious-but-unfunded applications is growing — and so is the scope for bias to determine which of those applications succeed.

What Should a Funder's Fairness Strategy Include?

Reducing bias requires a strategy, not a single intervention. The most effective approaches combine structural changes to the process, deliberate use of technology, and ongoing accountability mechanisms.

Process design. Start with the application form itself. Every question should have a clear purpose, and the information collected should map directly to the assessment criteria. Unnecessarily complex forms create barriers for smaller organisations and reward polished writing over substantive evidence. The guide on reducing the burden on grant applicants covers this in detail.

Technology selection. Choose tools that support structured, auditable decision-making. An end-to-end platform like Plinth — which includes a free tier — handles application collection, AI-assisted due diligence, structured assessment, monitoring, and reporting in one place. Because all decisions and their supporting evidence are recorded in the same system, the audit trail is built in rather than reconstructed after the fact.

Training and culture. Ensure all reviewers receive training on unconscious bias, and make calibration sessions a standard part of every grant round. Normalise the idea that bias is a systemic issue to be managed, not a personal failing to be ashamed of.

Feedback loops. Give applicants meaningful feedback on their applications, and create channels for them to report concerns about the process. Track feedback themes over time. If multiple applicants from a particular group report feeling that the process is unfair, take that seriously even if your quantitative data does not yet show a disparity.

Transparency. Publish your assessment criteria, your approach to managing bias, and aggregate data on your funding outcomes. The Foundation Practice Rating — an initiative supported by the ACF — assesses UK foundations on their transparency, accountability, and diversity practices. Funders that take bias seriously should be willing to be held accountable for their outcomes.

How Are UK Funders Approaching This in Practice?

The UK funding sector is at an early stage of addressing bias systematically, but there are encouraging signs of progress.

IVAR's open and trusting grantmaking initiative, launched in 2021 with its eight commitments for funders, has been adopted by over 140 trusts and foundations. While the initiative is broader than bias alone, its emphasis on shifting power from funders to grantees, providing unrestricted funding, and reducing bureaucratic barriers directly addresses several of the structural conditions that allow bias to persist.

The ACF's Stronger Foundations programme includes a Diversity, Equity and Inclusion pillar that recommends funders collect and publish DEI data, implement DEI practices in funding activities, and maintain diverse trustee boards and staff. The DEI Data Standard provides a shared framework for capturing data on funding to groups experiencing structural inequity.

On the technology side, funders are increasingly using grant management platforms that enforce structured workflows. The shift from email-and-spreadsheet processes to platforms with consistent scoring frameworks, conflict-of-interest logging, and automated audit trails represents a significant step toward reducing the inconsistencies that allow bias to operate unchecked. The guide on building transparency into decisions explores this further.

UKRI's 2023-24 equalities monitoring report provides one of the most detailed public datasets on funding outcomes by demographic group in the UK. More funders publishing equivalent data — even at an aggregate level — would help the sector understand the scale of the problem and measure progress over time.

FAQs

Can AI eliminate bias in grant decisions?

No. AI can reduce certain types of inconsistency — such as variation between reviewers or fatigue effects — but it introduces its own biases through training data, proxy variables, and model design. The most effective approach uses AI to support, not replace, human decision-making, with ongoing monitoring of outcomes.

Is blind review practical for all types of grants?

Full blind review is practical for many small and medium grant schemes but becomes more difficult for large strategic grants where the funder needs to understand the applicant's context and capacity. Partial blinding — removing names and logos during initial scoring, then revealing identities for shortlisted applications — is a workable compromise for most programmes.

What should funders do if outcome data reveals demographic disparities?

First, investigate whether the disparity reflects a problem with the assessment process, the applicant pool, or the eligibility criteria. If fewer applications are received from a particular group, the issue may be outreach rather than assessment. If applications are received but disproportionately rejected, review a sample of edge-case decisions to understand why. Then adjust criteria, guidance, or support offers accordingly and track whether the changes have an effect.

How much does it cost to implement bias monitoring?

Basic outcome monitoring — tracking acceptance rates by geography, theme, and organisational size — can be done with data most funders already collect. More detailed demographic analysis requires collecting equality data from applicants, which involves consent and data protection considerations. The cost is primarily in staff time for analysis and reporting rather than in technology, especially if your grant management platform already captures structured data.

Should funders tell applicants they are using AI in assessment?

Yes. Transparency about AI use builds trust and allows applicants to raise concerns. A clear statement on your application form or website explaining what AI is used for, what it is not used for, and how human oversight is maintained is good practice and may become a regulatory requirement.

What is the difference between equity and equality in grantmaking?

Equality means treating all applicants the same. Equity means recognising that different applicants face different barriers and adjusting processes to account for those differences. For example, offering longer deadlines or simplified forms for smaller organisations, or actively reaching out to underrepresented communities. A fair grants programme aims for equity, not just equality.

How often should funders audit their processes for bias?

At minimum, annually. Conduct a statistical review of funding outcomes by relevant dimensions after each major grant round, and a more comprehensive audit — including qualitative review of edge cases and applicant feedback — at least once a year. Publish the findings as part of your annual fairness or impact report.

Are there UK legal requirements around bias in grant decisions?

Funders registered as charities must comply with the Equality Act 2010, which prohibits discrimination on the basis of protected characteristics. Public sector funders are additionally subject to the Public Sector Equality Duty, which requires them to have due regard to eliminating discrimination and advancing equality of opportunity. While there is not yet specific UK legislation on algorithmic bias in grantmaking, the ICO's guidance on AI and data protection is relevant for any funder using AI in its processes.

Last updated: February 2026