Human-in-the-Loop Grantmaking: Why It Matters

Human-in-the-loop grantmaking keeps people in charge of funding decisions while using AI for speed and consistency. Learn practical models, UK legal requirements and real workflows.

By Plinth Team

Artificial intelligence is changing how funders process, assess and decide on grant applications. But the question that keeps surfacing in boardrooms and programme teams is not whether to use AI -- it is where humans must stay firmly in control.

Human-in-the-loop (HITL) grantmaking is the principle that AI handles speed, consistency and pattern recognition while people retain authority over judgements, ethics and final decisions. It is not a compromise between automation and tradition. It is the only model that meets legal requirements, maintains applicant trust and actually improves decision quality.

The stakes are high. According to the Technology Association of Grantmakers (TAG) 2024 State of Philanthropy Tech survey, 81 per cent of foundations now report some degree of AI usage, yet enterprise-wide adoption sits at just 4 per cent (TAG, 2024). That gap reflects a sector still working out where to draw the line. Meanwhile, UK data protection law already draws one: Article 22 of the UK GDPR restricts solely automated decisions that produce legal or similarly significant effects on individuals. A grant decision that determines whether an organisation receives funding almost certainly qualifies.

This guide sets out a practical framework for keeping humans in the loop at every stage of the grant lifecycle -- from eligibility screening through to feedback and reporting -- without losing the efficiency gains that AI offers.

What does human-in-the-loop actually mean?

Human-in-the-loop is a design pattern where AI systems assist but never replace human decision-makers. The concept originated in manufacturing and aviation safety, where automated systems handle routine operations while trained operators manage exceptions and final approvals.

In grantmaking, this translates to a clear division of labour. AI handles tasks that benefit from speed and consistency: parsing documents, checking eligibility criteria, summarising long applications and flagging anomalies. Humans handle tasks that require judgement, context and accountability: interpreting evidence, weighing competing priorities, making award decisions and communicating outcomes.

The distinction matters because grantmaking involves decisions that materially affect organisations and communities. A rejected application can mean a youth service closes or a food bank runs out of supplies. These are not decisions to delegate to a probability score.

There are three levels of human involvement commonly referenced in AI governance:

  • Human-in-the-loop: A person reviews and approves every AI output before it takes effect.
  • Human-on-the-loop: A person monitors the system and can intervene, but does not review every individual output.
  • Human-out-of-the-loop: The system operates autonomously with no human review.

For grant funding decisions, only the first model -- human-in-the-loop -- meets current legal and ethical standards in the UK.

Why fully automated grant decisions are problematic

The appeal of full automation is obvious. The UK government awarded around 153.2 billion pounds in grants in 2023-24 alone (UK Government Grants Management Function). Managing that volume with manual processes is expensive -- estimates suggest up to 40 per cent of grant resources are consumed by administration rather than delivery.

But automating the decision itself, rather than the administration around it, creates three serious problems.

Legal risk. Article 22 of the UK GDPR gives individuals the right not to be subject to decisions based solely on automated processing that produce legal effects or similarly significant effects on them (ICO, 2025). Grant decisions clearly fall within scope. If your system auto-rejects applications without meaningful human review, you are likely breaching data protection law. The EU AI Act, most of whose obligations apply from August 2026, goes further -- classifying AI systems used to evaluate eligibility for essential public services as high-risk, with mandatory human oversight requirements.

Bias amplification. AI models trained on historical data can encode and amplify existing biases. If past funding patterns favoured well-resourced organisations with polished applications, an AI system trained on those patterns will reproduce that preference. IVAR has highlighted that well-resourced organisations may gain disproportionate advantage through access to paid AI tools and specialist training, risking widened inequities (IVAR, 2025).

Accountability gaps. When a human makes a flawed decision, there is a person to hold accountable, a reasoning process to examine and a lesson to learn. When an algorithm makes a flawed decision, the accountability chain is murkier. Applicants deserve to know who decided and why -- and to have a meaningful route to challenge the outcome.

Where AI adds genuine value in the grant lifecycle

The most effective use of AI in grantmaking is not making decisions -- it is making the humans who decide faster, better informed and more consistent. Here is where AI demonstrably helps:

Eligibility screening. AI can check applications against published criteria -- registered charity status, geographic scope, thematic fit -- and flag those that clearly do or do not qualify. This saves hours of manual triage on high-volume funds. A 2026 Virtuous survey of 346 nonprofits found that 92 per cent of nonprofits use AI and 79 per cent report efficiency gains, though only 7 per cent report major improvements in organisational capability (Virtuous, 2026).
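
As a concrete illustration of that triage step, here is a minimal sketch in Python; the criteria, field names and grant band are assumptions for illustration, and a "fail" flag is still only a flag for a person to review, never an automatic rejection.

```python
# Minimal sketch of rules-based eligibility triage (illustrative criteria and fields).
ELIGIBLE_REGIONS = {"England", "Wales", "Scotland", "Northern Ireland"}
MAX_AWARD = 25_000

def triage_eligibility(app: dict) -> str:
    """Flag one application as 'pass', 'fail' or 'borderline' for human triage."""
    checks = [
        app.get("charity_number") is not None,        # registered charity status
        app.get("region") in ELIGIBLE_REGIONS,        # geographic scope
        app.get("amount_requested", 0) <= MAX_AWARD,  # within the fund's grant band
    ]
    if all(checks):
        return "pass"        # clearly qualifies; a human still confirms before any award
    if not any(checks):
        return "fail"        # clearly out of scope; a human reviews before any rejection is sent
    return "borderline"      # mixed results always go straight to a person

# A partially eligible application is routed to a human, not auto-rejected.
print(triage_eligibility({"charity_number": "1123456", "region": "Jersey",
                          "amount_requested": 10_000}))  # borderline
```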

Application summarisation. For funds receiving hundreds of applications, AI-generated summaries help assessors quickly grasp the core proposal before reading the full text. This reduces cognitive load without removing the human from the assessment.

Due diligence checks. AI can parse governance documents, safeguarding policies and financial accounts to flag gaps or concerns for a human reviewer to investigate. This is faster and more consistent than manual document review across dozens of applications.

Consistency analysis. AI can identify when similar applications are being scored differently by different assessors, enabling calibration conversations. This does not override assessor judgement -- it highlights where judgements diverge.
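
One way to surface that divergence, assuming each application is scored by more than one assessor, is sketched below; the field names and threshold are illustrative.

```python
# Sketch: flag applications where assessors' scores diverge enough to warrant
# a calibration conversation. The output prompts discussion, not automatic re-scoring.
from collections import defaultdict

def flag_divergent_scores(score_rows: list[dict], threshold: float = 2.0) -> list[str]:
    """score_rows: e.g. {'application_id': 'A12', 'assessor': 'JK', 'score': 7}."""
    by_app: dict[str, list[float]] = defaultdict(list)
    for row in score_rows:
        by_app[row["application_id"]].append(row["score"])
    return [app_id for app_id, scores in by_app.items()
            if len(scores) > 1 and max(scores) - min(scores) >= threshold]
```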

Feedback drafting. Writing personalised feedback to every applicant is time-consuming, which is why many funders do not do it. AI can draft feedback based on assessment scores and criteria, for a human to review and send. This is particularly valuable for unsuccessful applicants, who benefit most from clear, constructive reasons.

Reporting and narrative generation. AI can pull together impact data, case studies and outcome metrics into draft reports, saving significant time on funder reporting cycles.

A practical model for human-in-the-loop grantmaking

Implementing HITL grantmaking requires defining, for each stage of your workflow, what the AI does, what the human does, and what gets logged. The list below sets out a workable model, stage by stage:

  • Eligibility screening -- AI role: checks criteria, flags pass/fail. Human role: reviews borderline cases, overrides where needed. What to log: criteria checked, AI recommendation, human decision.
  • Application summarisation -- AI role: generates structured summary. Human role: reads summary and full application for shortlisted bids. What to log: summary generated, assessor confirmation of review.
  • Assessment -- AI role: drafts structured answers against scoring criteria. Human role: reviews, edits and finalises scores. What to log: AI draft, human edits, final scores with reasoning.
  • Due diligence -- AI role: parses documents, flags gaps. Human role: investigates flagged issues, makes compliance judgement. What to log: documents checked, issues flagged, human resolution.
  • Panel decision -- AI role: prepares briefing packs, highlights key evidence. Human role: discusses, debates, decides. What to log: panel minutes, decision rationale, any dissent.
  • Feedback to applicants -- AI role: drafts personalised feedback from scores. Human role: reviews tone, accuracy and fairness before sending. What to log: draft, edits, final version sent.
  • Monitoring and reporting -- AI role: aggregates data, drafts narrative. Human role: reviews accuracy, adds context and interpretation. What to log: data sources, draft, final report.

The critical principle is that AI outputs are always treated as drafts. No AI-generated assessment score, eligibility decision or feedback letter should reach an applicant without a human having reviewed and accepted or edited it.
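
A minimal sketch of that "draft until reviewed" principle is shown below; the data model is an assumption for illustration, not any particular platform's API.

```python
# Sketch: AI output is held in an 'ai_draft' state until a named person approves it.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DraftOutput:
    kind: str                  # e.g. "assessment_answer", "feedback_letter"
    ai_text: str
    status: str = "ai_draft"   # "ai_draft" -> "human_approved"
    reviewed_by: str | None = None
    final_text: str | None = None
    reviewed_at: datetime | None = None

def approve(draft: DraftOutput, reviewer: str, edited_text: str | None = None) -> DraftOutput:
    """A named person accepts or edits the AI draft; only then can it take effect."""
    draft.final_text = edited_text if edited_text is not None else draft.ai_text
    draft.reviewed_by = reviewer
    draft.reviewed_at = datetime.now(timezone.utc)
    draft.status = "human_approved"
    return draft

def release(draft: DraftOutput) -> str:
    """Refuse to let anything still in 'ai_draft' state reach an applicant."""
    if draft.status != "human_approved":
        raise PermissionError("AI drafts cannot take effect without human review")
    return draft.final_text
```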

What does UK law actually require?

UK grantmakers using AI need to understand three overlapping legal frameworks.

UK GDPR Article 22. The Information Commissioner's Office (ICO) is clear: individuals have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects or similarly significant effects (ICO, 2025). If your AI system can reject an application without any human review, Article 22 applies. Building meaningful human involvement into the decision takes it outside Article 22 altogether; otherwise you must rely on one of the narrow exceptions -- explicit consent, contractual necessity or legal authorisation -- each of which still carries safeguards such as the right to obtain human intervention. In practice, meaningful human involvement -- not rubber-stamping -- is the most robust approach.

The Equality Act 2010. If your AI system produces outcomes that disproportionately disadvantage applicants from protected groups, you face discrimination claims regardless of whether a human or machine made the decision. Regular equalities monitoring of AI-assisted decisions is essential.
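
A simple starting point for that monitoring, assuming you already collect outcome and applicant-characteristic data in rows like those shown, is to compare success rates across groups.

```python
# Sketch: compare funding success rates across monitoring groups (illustrative fields).
from collections import Counter

def success_rates_by_group(decisions: list[dict], group_field: str) -> dict[str, float]:
    """decisions: e.g. {'outcome': 'funded', 'org_led_by': 'disabled-led'}."""
    totals, funded = Counter(), Counter()
    for row in decisions:
        group = row.get(group_field, "not stated")
        totals[group] += 1
        funded[group] += row["outcome"] == "funded"
    return {group: funded[group] / totals[group] for group in totals}
```

Large gaps between groups do not prove discrimination, but they are the trigger for a closer look at criteria, prompts and assessor patterns.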

The EU AI Act (from August 2026). While the UK is not directly bound by EU law post-Brexit, any funder operating in the EU, or whose AI systems or their outputs are used there, should be aware. The Act classifies AI systems used to evaluate eligibility for essential public services and benefits as high-risk, requiring human oversight, transparency and regular auditing. UK regulatory direction is expected to align broadly with these principles.

The practical implication is straightforward: build your AI-assisted processes so that a qualified human makes every grant decision, with access to the underlying evidence and the ability to override any AI recommendation. Document this in your data protection impact assessment.

Building an audit trail that holds up

An audit trail is not just a compliance requirement -- it is your defence when someone challenges a decision, your learning tool for improving processes, and your evidence base for trustees and regulators.

For AI-assisted grantmaking, your audit trail should capture:

  • What the AI saw: the data inputs, prompts and criteria used.
  • What the AI recommended: the draft scores, flags or summaries produced.
  • What the human decided: the final decision, including any edits to AI outputs.
  • Why: the reasoning behind the decision, especially where it departs from the AI recommendation.
  • Who: the named individual responsible for the decision.
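
As an illustration, a single record covering those five elements might be structured as follows; the field names are assumptions, not a prescribed schema.

```python
# Sketch: one audit-trail record per AI-assisted decision point.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class DecisionAuditRecord:
    application_id: str
    stage: str                 # e.g. "eligibility", "assessment", "feedback"
    ai_inputs: dict            # what the AI saw: data, prompt, criteria version
    ai_recommendation: str     # what the AI recommended: draft scores, flags, summaries
    human_decision: str        # what the human decided, including edits to AI output
    reasoning: str             # why, especially where it departs from the AI recommendation
    decided_by: str            # the named individual responsible
    decided_at: datetime
```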

This level of documentation may seem onerous, but it is far less work than reconstructing a decision chain after a complaint. The TAG survey found that 55 per cent of foundations cite privacy and security concerns as their top barrier to AI adoption (TAG, 2024) -- robust audit trails directly address this concern by making AI use transparent and accountable.

Tools like Plinth build this audit trail automatically. When an assessor uses Plinth's AI assistant Pippin to generate draft assessment answers, the system logs the AI draft alongside the assessor's edits and final submission. Every AI-generated output is clearly marked as such, and the human assessor's changes are tracked. This means the audit trail exists by default, not as an afterthought.

How Plinth implements human-in-the-loop

Plinth's AI assistant, Pippin, is designed around the human-in-the-loop principle. Every AI feature in the platform treats AI output as a suggestion that a human reviews, edits and approves. Here is how this works across key features:

Assessment answers. Pippin analyses applications against your assessment criteria and generates structured draft answers with reasoning. Assessors see these drafts alongside the application, review each one, edit as needed, and apply the ones they agree with. A verbosity slider lets assessors control how detailed Pippin's suggestions are -- from one-sentence screening notes to comprehensive panel briefings. The human always makes the final call.

Due diligence document checks. Pippin automatically reviews uploaded governance documents, safeguarding policies and equality policies against established compliance standards. It flags specific issues -- an outdated DBS reference, a missing dissolution clause, a safeguarding policy without a named lead -- for a human to investigate. The system does not pass or fail documents; it surfaces concerns for human judgement.

Applicant feedback. After human assessors have scored an application, Pippin drafts personalised feedback based on the assessment scoring. The system prompt explicitly instructs the AI to base feedback primarily on the human reviewers' assessment scores, using the application content only for specific examples. A directness slider (1-10) lets the funder control tone. The human reviewer reads, edits and sends the final version.

Bulk assessor assignment. For high-volume funds, Plinth allows bulk assignment of applications to assessors -- either assigning all applications to chosen reviewers or randomly distributing them. This is workflow automation, not decision automation: it routes work to humans rather than replacing them.

Toggling AI features. Funders can enable or disable AI features per application form and per fund. They can disable AI for external assessors if they want fully independent human reviews. This granular control means organisations can match their AI usage to their risk appetite and governance policies.
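
To make that granularity concrete, a funder's governance documentation might record its choices in a form like the one below. This is an illustrative configuration, not Plinth's actual settings schema; the point is that each enable/disable decision is written down and owned.

```python
# Illustrative per-fund AI configuration (hypothetical fund names and feature flags).
AI_SETTINGS_BY_FUND = {
    "community-small-grants": {
        "ai_summaries": True,
        "ai_assessment_drafts": True,
        "ai_feedback_drafts": True,
        "ai_for_external_assessors": False,  # external reviews stay fully independent
    },
    "strategic-partnerships": {
        "ai_summaries": True,
        "ai_assessment_drafts": False,       # high-value awards: drafting stays fully human
        "ai_feedback_drafts": True,
        "ai_for_external_assessors": False,
    },
}
```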

Plinth has a free tier, making these human-in-the-loop AI capabilities accessible to smaller funders and foundations that might otherwise lack the resources to implement responsible AI workflows.

Training your team for human-in-the-loop workflows

Technology alone does not create good human-in-the-loop practice. Your team needs to understand their role in the system -- and specifically, the risks of two failure modes.

Automation complacency. When people work alongside AI, research consistently shows a tendency to over-trust machine outputs. If an AI system recommends rejection, assessors may unconsciously look for evidence to confirm that recommendation rather than challenging it. Training should emphasise that AI recommendations are starting points, not conclusions. The TAG survey found that only 4 per cent of foundations have enterprise-wide AI adoption (TAG, 2024), suggesting most teams are still developing the muscle memory needed for effective human-AI collaboration.

Automation aversion. Conversely, some team members may distrust AI entirely and refuse to engage with its outputs. This wastes the efficiency gains that AI offers and can create inconsistency if some assessors use AI assistance and others do not.

Effective training covers:

  • What the AI does and does not do: demystify the technology. Explain that it is pattern matching, not understanding.
  • When to override: clear criteria for when the human should depart from the AI recommendation, with worked examples.
  • How to document overrides: a simple process for recording why they disagreed with the AI, so the organisation learns from these instances.
  • Equalities awareness: how to spot patterns in AI recommendations that might indicate bias, and what to escalate.
  • Calibration sessions: regular meetings where assessors compare their responses to AI suggestions and align on standards.

A single half-day workshop at the start of each funding round, supplemented by quick refreshers as needed, is typically sufficient for teams new to AI-assisted grantmaking.

Building trust with applicants

Applicants increasingly expect funders to be transparent about how decisions are made. IVAR has noted that when AI is used in grantmaking, it can create concerns about fairness, authenticity and equity (IVAR, 2025). Proactive communication reduces these concerns.

Be transparent about AI use. Publish a clear, plain-English statement explaining how AI is used in your process. Specify what AI does (summarise, check eligibility, draft feedback) and what it does not do (make funding decisions). This should sit in your privacy notice and your applicant guidance.

Explain how humans stay in control. Applicants need to know that a person -- not an algorithm -- decides whether they receive funding. Name the stage at which human review occurs and the qualifications or experience of your assessors.

Provide clear challenge routes. Under UK GDPR, individuals have the right to obtain human intervention, express their point of view and contest the decision where automated processing is involved (ICO, 2025). Even where your process already includes meaningful human involvement, offering a clear appeals or feedback route demonstrates confidence in your process.

Support equitable access. If you offer AI-powered tools to help applicants complete applications -- such as auto-fill from documents or smart feedback on draft answers -- make them available to all applicants equally. Be mindful that AI tools can advantage well-resourced organisations if access is uneven.

Share what you learn. As your experience with AI-assisted grantmaking grows, share your findings with the sector. Nearly 90 per cent of foundations provide no AI implementation support to their grantees (CEP, 2025). Funders who share their approaches contribute to sector-wide learning.

Common pitfalls and how to avoid them

Even well-intentioned HITL implementations can go wrong. These are the most common failure patterns:

Rubber-stamping. The human reviewer exists on paper but approves every AI recommendation without genuine review. Solution: track override rates. If a reviewer never disagrees with the AI, that is a red flag, not a sign of good AI.
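
If the audit trail records both the AI recommendation and the final human decision, override rates are straightforward to monitor; a sketch with illustrative field names follows.

```python
# Sketch: per-reviewer override rates from the audit trail. A rate near zero over a
# meaningful sample suggests rubber-stamping, not a perfectly tuned AI.
from collections import Counter

def override_rates(records: list[dict], min_sample: int = 20) -> dict[str, float]:
    """records: e.g. {'reviewer': 'JK', 'ai_recommendation': 'fund', 'human_decision': 'decline'}."""
    reviewed, overridden = Counter(), Counter()
    for r in records:
        reviewed[r["reviewer"]] += 1
        overridden[r["reviewer"]] += r["human_decision"] != r["ai_recommendation"]
    return {rev: overridden[rev] / n for rev, n in reviewed.items() if n >= min_sample}
```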

Inconsistent application. Some programmes use AI assistance while others do not, creating inconsistency across your portfolio. Solution: set organisation-wide standards for AI use, even if specific features are toggled on or off per fund.

Missing documentation. AI outputs are used but not logged, making it impossible to audit decisions later. Solution: use systems that log AI inputs, outputs and human edits automatically, rather than relying on manual record-keeping.

Over-reliance on confidence scores. AI systems often output confidence levels, but these can be misleading. A "high confidence" eligibility check still needs human verification for borderline cases. Solution: treat confidence scores as one signal among many, not as a substitute for review.
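
One way to keep confidence scores in their place is to use them only to order the human review queue, never to bypass it; a minimal sketch with an assumed threshold follows.

```python
# Sketch: confidence prioritises the review queue but never removes items from it.
def review_queue(items: list[dict]) -> list[dict]:
    """items: e.g. {'application_id': 'A12', 'ai_confidence': 0.63}. Everything stays
    in the queue; lower-confidence items simply surface first for extra scrutiny."""
    for item in items:
        item["needs_extra_scrutiny"] = item["ai_confidence"] < 0.8  # illustrative threshold
    return sorted(items, key=lambda i: i["ai_confidence"])
```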

Ignoring feedback loops. If you never analyse where humans override the AI, you miss opportunities to improve both the AI system and your criteria. Solution: conduct quarterly reviews of AI performance, including override patterns and any equalities impacts.

Frequently asked questions

Is it legal to use AI in grant decision-making in the UK?

Yes, provided humans retain meaningful involvement in the decision. Article 22 of the UK GDPR restricts solely automated decisions with legal or similarly significant effects. Using AI for triage, summarisation and drafting is lawful as long as a qualified person reviews and makes the final funding decision.

What counts as "meaningful" human involvement?

The ICO's guidance indicates that meaningful human involvement means the person has genuine authority and competence to override the AI recommendation, access to the underlying data, and actually reviews the output before a decision takes effect. Rubber-stamping AI recommendations without review does not qualify.

Do we need to tell applicants we use AI?

Yes. Under UK GDPR transparency requirements, you must inform individuals about the logic involved in automated processing. Even where your process includes meaningful human involvement, best practice is to publish a clear statement about how AI is used and what role it plays.

Can AI introduce bias into grantmaking?

Yes. AI models can reproduce and amplify biases present in training data or in historical funding patterns. Regular equalities monitoring of AI-assisted decisions is essential. Track outcomes by applicant characteristics and compare them with your intended reach.

How do we get started with human-in-the-loop grantmaking?

Start with low-risk, high-volume tasks: eligibility screening, application summarisation and feedback drafting. These deliver clear efficiency gains without placing AI in the decision seat. Tools like Plinth offer these features with human-in-the-loop design built in, including a free tier for smaller funders.

What training do assessors need?

A half-day session covering what the AI does, when to override, how to document decisions and equalities awareness is typically sufficient. Follow up with calibration sessions at the start of each funding round.

Should we disable AI for external assessors?

This depends on your governance model. If you want fully independent external assessments, disabling AI suggestions for external reviewers ensures they form their own view before seeing any AI analysis. Plinth supports this with a per-fund toggle.

How do we audit AI-assisted decisions?

Log the AI inputs, recommendations and the human's final decision with reasoning. Review override rates, equalities impacts and decision consistency quarterly. Systems that build audit trails automatically -- rather than relying on manual record-keeping -- significantly reduce the compliance burden.

Last updated: February 2026