Ten Quick Wins for Smarter Grantmaking

Ten practical, low-cost changes funders can implement this quarter to cut admin, improve fairness, and strengthen applicant experience in their grants programme.

By Plinth Team

Most grantmaking programmes do not need a multi-year transformation project to become meaningfully better. They need a handful of targeted, practical changes that can be implemented in weeks rather than months.

The UK charity sector spends an estimated 900 million pounds a year on the costs of applying for grants (Pro Bono Economics, 2022). Small charities with income under 100,000 pounds spend 38 per cent of their total grant income just on the application process. Meanwhile, the Directory of Social Change has found that between a fifth and a third of all grant applications are ineligible, representing an enormous amount of wasted effort on both sides of the process.

These figures point to a system that is ripe for improvement. The good news is that many of the highest-impact changes are neither expensive nor technically difficult. They require clarity of thought, a willingness to look at your process from the applicant's perspective, and the discipline to act on what you find.

This guide sets out ten specific, achievable changes that funders of any size can implement within a single quarter. Each one is grounded in established sector practice from organisations such as the Association of Charitable Foundations (ACF), the Institute for Voluntary Action Research (IVAR), and the Foundation Practice Rating. Together, they can meaningfully reduce administrative burden, improve decision quality, and strengthen the relationship between funders and the organisations they support.

1. Add an eligibility screening step before the full application

The single highest-impact change most funders can make is to prevent ineligible applications from entering the system in the first place. When a third or more of applications are ineligible (Directory of Social Change), every one of those applications represents wasted hours for both the applicant who prepared it and the programme officer who assessed it.

An eligibility screening step is a short set of qualifying questions, typically five to ten, that applicants complete before gaining access to the full application form. The questions should map directly to your published criteria: organisation type, geographic focus, income threshold, charitable status, and the broad area of work you fund. If an applicant does not meet these requirements, they are told immediately and can redirect their time elsewhere.

This is not about creating barriers. It is about respecting applicants' time. IVAR's Open and Trusting Grant-making framework specifically calls for funders to be "clear about priorities" and "proportionate in requirements" so that charities do not waste resources on applications they cannot win.

| Approach | Applicant time invested before rejection | Funder processing time |
| --- | --- | --- |
| No screening | 10-40 hours (full application) | 2-4 hours per ineligible application |
| Basic eligibility page on website | 5-30 minutes reading, but full application still submitted | 2-4 hours per ineligible application |
| Automated eligibility screening | 5-10 minutes on screening form | Near zero for ineligible applicants |
| Two-stage process with screening | 5-10 minutes screening, 1-2 hours expression of interest | 15-30 minutes per screened-out applicant |
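
At its simplest, automated screening is a short set of yes/no rules applied to the applicant's answers before the full form is unlocked. The sketch below illustrates the idea; the field names, regions, and thresholds are invented examples rather than anyone's real criteria.

```python
# Illustrative only: field names and thresholds are invented, not a
# definition of any particular funder's criteria or platform behaviour.
SCREENING_RULES = [
    (lambda a: a["registered_charity"] is True,
     "Applicants must be a registered charity."),
    (lambda a: a["region"] in {"North East", "Yorkshire"},
     "We only fund work in our published regions."),
    (lambda a: a["annual_income"] <= 1_000_000,
     "We prioritise organisations with income under 1 million pounds."),
]

def screen(answers: dict) -> list[str]:
    """Return the reasons an applicant is ineligible (empty list = eligible)."""
    return [reason for check, reason in SCREENING_RULES if not check(answers)]

reasons = screen({"registered_charity": True, "region": "Wales", "annual_income": 80_000})
if reasons:
    print("Not eligible for this fund:", *reasons, sep="\n- ")
else:
    print("Eligible: please continue to the full application.")
```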

Tools like Plinth take this further by offering configurable eligibility rules, including AI-assisted evaluation, that automatically route applicants to the right fund or provide an immediate eligibility decision through a funding portal. This means applicants know within minutes whether they are a good fit, without any manual processing by your team.

2. Audit your application form against your assessment criteria

Many application forms have grown over time. Questions get added to address one-off concerns, fields accumulate from previous programme iterations, and sections remain because nobody has actively decided to remove them. The result is often a form that asks for information nobody uses in the actual assessment.

The fix is straightforward: print your assessment criteria alongside your application form and draw a line from each question to the criterion it informs. Any question that does not connect to a criterion is a candidate for removal. Any criterion that lacks a corresponding question is a gap in your form.

According to Pro Bono Economics research (2022), the average charity fundraiser spends roughly two full working days on a single grant application. Every unnecessary question adds to that burden, and the cumulative effect across hundreds or thousands of applicants is substantial. Cutting even three or four redundant questions can save applicants hours and make your own assessment process faster and more focused.

This exercise often reveals that forms ask for the same information in multiple ways, request detailed financial projections for modest grants, or demand multi-page narratives when a few paragraphs would suffice. The ACF's Stronger Foundations initiative encourages funders to ensure that information requirements are always proportionate to the scale of funding.

3. Provide clear guidance with examples of strong answers

Unclear application guidance is a hidden driver of poor-quality submissions. When applicants are unsure what a funder is looking for, they either over-write (producing verbose, unfocused answers) or under-write (leaving out critical information the assessor needs). Both outcomes slow down the assessment process and reduce decision quality.

Publishing worked examples of strong answers, anonymised and drawn from previous rounds, gives applicants a concrete benchmark. This is especially valuable for smaller charities that may not have a dedicated fundraiser or previous experience with your programme. The Foundation Practice Rating's 2024 assessment of 100 UK foundations found that 63 scored an A for transparency, which still leaves more than a third of funders with clear room to improve how they communicate expectations.

Effective guidance should include the word limit for each section, the weighting each section carries in scoring, one or two anonymised examples showing what a strong response looks like, and a note on common mistakes to avoid. This does not require a large document. A single page of guidance per section, published alongside your form, can materially improve submission quality and reduce the number of clarification queries your team handles.

4. Run a reviewer calibration session before scoring begins

Scoring inconsistency is one of the most persistent challenges in grantmaking. Research published in Research Integrity and Peer Review (2023) confirms that score calibration discussions significantly influence both within-panel and between-panel variability. Without calibration, two reviewers reading the same application can reach substantially different conclusions, not because they disagree on its merits, but because they interpret the scoring criteria differently.

A calibration session takes 60 to 90 minutes. The format is simple: all assessors independently score two or three applications drawn from the current round, then the panel meets to compare scores, discuss divergences, and agree on how the criteria should be applied. This surfaces differences in interpretation before they affect real decisions.
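
Where calibration scores are captured in a spreadsheet or export, surfacing the divergences worth discussing is straightforward. A rough sketch, with invented scores and a tolerance the panel would agree for itself:

```python
# Minimal sketch of a calibration check: flag applications where assessors'
# scores diverge by more than an agreed tolerance. All scores are invented.
from statistics import mean

calibration_scores = {
    "APP-001": {"Asha": 4, "Ben": 4, "Chloe": 5},
    "APP-002": {"Asha": 2, "Ben": 5, "Chloe": 3},  # wide spread: discuss
}

TOLERANCE = 1  # maximum acceptable gap between highest and lowest score

for app_id, scores in calibration_scores.items():
    spread = max(scores.values()) - min(scores.values())
    flag = "discuss" if spread > TOLERANCE else "ok"
    print(f"{app_id}: mean {mean(scores.values()):.1f}, spread {spread} -> {flag}")
```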

UKRI's guidance on tackling bias in peer review recommends that panels base decisions on evidence rather than impressions, and that chairs play an active role in limiting the influence of extreme reviewers. A pre-scoring calibration session addresses both of these concerns directly. It also helps new assessors, including external reviewers or board members who assess infrequently, to score with the same rigour as experienced colleagues.

Plinth supports this by allowing funders to assign assessors to applications, configure scoring frameworks, and view score distributions across the panel, making it straightforward to identify and address calibration issues before final decisions are made.

5. Introduce save-and-return functionality for applicants

Requiring applicants to complete a form in a single session is a surprisingly common barrier. Many funders still use PDF forms or basic web forms that do not retain progress, forcing applicants to gather all information, draft all answers, and submit in one sitting. For a small charity where the chief executive doubles as the fundraiser, this can mean working evenings and weekends to meet a deadline.

Save-and-return functionality allows applicants to start a form, leave it, and return later with their progress intact. This is standard practice in government digital services and mainstream consumer applications, but remains absent from many grantmaking processes.

The benefit is not just convenience. When applicants can work on a form over several days, they produce better answers. They can consult colleagues, check figures, and refine their narrative. The result is higher-quality submissions that are easier and faster for assessors to evaluate. Given that UK charitable foundations distributed a record 8.24 billion pounds in 2023-24 (ACF, Foundations in Focus 2025), the quality of applications used to allocate those funds matters enormously.

Plinth's application forms include automatic save-and-return as standard, with applicants able to resume their submission from any device without losing progress.

6. Map every question to a specific assessment criterion

This is the assessment-side counterpart to the form audit described in section two. Even where a form has been trimmed, the scoring process itself can be inefficient if assessors are not clear about which questions map to which criteria.

A question-to-criterion mapping document, shared with all assessors before the round opens, serves several purposes. It ensures that every assessor evaluates the same information against the same standard. It prevents double-counting, where two criteria effectively score the same answer. And it makes scoring faster, because assessors know exactly where to look for the information they need.

This mapping also reveals structural problems in your assessment framework. If a single question maps to three criteria, the question is probably doing too much work and should be split. If a criterion cannot be mapped to any question, either the criterion is being assessed subjectively (a fairness risk) or the application form has a gap.
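
Kept as a simple lookup, the same mapping can be re-checked automatically whenever the form or the criteria change. A sketch with placeholder questions and criteria, not a recommended framework:

```python
# Sketch of a question-to-criterion mapping audit; the criteria, questions
# and mapping shown here are illustrative placeholders.
CRITERIA = {"need", "delivery", "outcomes", "value_for_money"}

QUESTION_MAP = {
    "Q1 What problem are you addressing?": {"need"},
    "Q2 What will you do, and who will deliver it?": {"delivery"},
    "Q3 What difference will the work make?": {"outcomes"},
    "Q4 Anything else you would like to tell us?": set(),  # informs no criterion
}

unmapped_questions = [q for q, crits in QUESTION_MAP.items() if not crits]
uncovered_criteria = CRITERIA - set().union(*QUESTION_MAP.values())
overloaded = [q for q, crits in QUESTION_MAP.items() if len(crits) >= 3]

print("Candidates for removal:", unmapped_questions)
print("Criteria with no corresponding question:", uncovered_criteria)
print("Questions doing too much work:", overloaded)
```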

For a detailed treatment of assessment design, see our guide on how to write clear grant criteria.

7. Log conflicts of interest systematically

Conflict of interest management is a governance requirement, but in practice many funders handle it informally. A panel member mentions a connection at the start of a meeting, steps out of the room, and the process continues. The problem is that informal approaches leave no audit trail, create inconsistency between meetings, and may not capture conflicts that are not immediately obvious to the individual concerned.

A systematic approach requires three elements: a standing register where all panel members declare their interests at the start of each funding round, a documented process for handling conflicts when they arise during assessment, and a record of which declarations were made and what action was taken for each affected application.

The Charity Commission's guidance on conflicts of interest for trustees is clear that managing conflicts is not optional, and the principles apply equally to grant assessment panels. The Foundation Practice Rating's 2024 results showed that only 17 of 100 assessed foundations scored an A for accountability, with conflict management being one of the assessed areas.

This does not require complex technology. A shared spreadsheet with dated entries, linked to individual applications, is sufficient for most funders. However, dedicated grant management platforms like Plinth can integrate conflict logging directly into the assessment workflow, creating an automatic audit trail that is available for governance reviews.
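
If the register lives in a spreadsheet, the columns matter more than the tool: date, panel member, application, nature of the interest, and the action taken. A small sketch of that structure with a basic completeness check (all entries invented):

```python
# Sketch of a conflict-of-interest register kept as structured rows.
# The people, applications and actions shown are invented examples.
register = [
    {"date": "2025-02-03", "panel_member": "A. Khan", "application": "APP-014",
     "interest": "Trustee of applicant organisation", "action": "Left room; did not score"},
    {"date": "2025-02-03", "panel_member": "B. Lowe", "application": "APP-021",
     "interest": "Former employee (left 2019)", "action": ""},
]

# Flag declarations where the action taken was never recorded.
for row in register:
    if not row["action"]:
        print(f"{row['application']}: declaration by {row['panel_member']} has no recorded action")
```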

8. Use AI to draft applicant feedback

Providing feedback to unsuccessful applicants is one of the most valuable things a funder can do, and one of the most commonly skipped. The reason is almost always capacity: writing personalised feedback for dozens or hundreds of declined applications takes time that programme teams simply do not have. The result is that most applicants hear nothing beyond a standard rejection email.

This matters. IVAR's work on open and trusting grant-making emphasises that transparency about decisions builds trust between funders and applicants, even when the decision is negative. The Foundation Practice Rating found that transparency was the strongest area for UK foundations, with 63 of 100 scoring an A, but accountability, which includes how decisions are communicated, lagged significantly behind.

AI can bridge this gap. Rather than writing feedback from scratch, programme officers can use AI to generate a first draft based on the assessor's scores and notes, then review and edit before sending. This reduces the time per feedback letter from 20-30 minutes to 5-10 minutes while maintaining quality and personalisation.
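
The key step is turning the assessor's scores and notes into a structured prompt, so the draft is grounded in real assessment data rather than generic boilerplate. A rough sketch follows; the scores and notes are invented, and the prompt would go to whichever model or service your organisation already uses:

```python
# Sketch of building a feedback-drafting prompt from assessment data.
# All values are invented; the model call itself is left to your chosen tool,
# and a programme officer edits the draft before anything is sent.
assessment = {
    "applicant": "Example Community Trust",
    "decision": "declined",
    "scores": {"need": 4, "delivery": 2, "outcomes": 3},
    "notes": "Strong evidence of local need; delivery plan lacked a timeline or named lead.",
}

prompt = (
    "Draft constructive feedback for an unsuccessful grant applicant.\n"
    f"Applicant: {assessment['applicant']} (decision: {assessment['decision']})\n"
    f"Scores out of 5: {assessment['scores']}\n"
    f"Assessor notes: {assessment['notes']}\n"
    "Be specific about strengths and weaknesses, and do not invent details."
)
print(prompt)  # send to your model of choice; review and edit before sending
```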

Plinth includes AI-assisted feedback drafting that draws on the actual assessment data for each application, generating specific, constructive feedback that programme officers can review and send with confidence. For more on this topic, see our guide on how to give better feedback to applicants.

9. Create proportionate reporting templates

Grant reporting requirements are a frequent source of friction between funders and grantees. Too often, a charity receiving a 5,000-pound grant is asked to complete the same reporting form as one receiving 500,000 pounds. The administrative cost of producing a detailed report can consume a significant share of the grant itself, undermining the very impact the funding was intended to create.

Proportionate reporting means scaling your requirements to the size and complexity of the grant. A tiered approach is the simplest way to achieve this:

| Grant size | Reporting approach | Typical requirements |
| --- | --- | --- |
| Under 5,000 pounds | Light touch | Brief narrative update (250-500 words), confirmation of spend |
| 5,000-25,000 pounds | Standard | Short report against agreed outcomes, basic financial summary |
| 25,000-100,000 pounds | Detailed | Full narrative and financial report, evidence of outcomes |
| Over 100,000 pounds | Comprehensive | Detailed report, independent evaluation where appropriate, financial audit |
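
A tiered policy like this reduces to a simple lookup by grant size, which a platform or even a spreadsheet formula can apply automatically. A minimal sketch using the thresholds above, with exact boundary handling left to your own policy:

```python
# Sketch of a reporting-tier lookup; thresholds in pounds, taken from the
# table above. Boundary handling is a choice for your own programme.
REPORTING_TIERS = [
    (5_000, "Light touch"),
    (25_000, "Standard"),
    (100_000, "Detailed"),
]

def reporting_tier(grant_amount: int) -> str:
    for threshold, tier in REPORTING_TIERS:
        if grant_amount < threshold:
            return tier
    return "Comprehensive"

print(reporting_tier(3_000))    # Light touch
print(reporting_tier(60_000))   # Detailed
print(reporting_tier(250_000))  # Comprehensive
```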

IVAR's Better Reporting principles explicitly call for reporting requirements that are "well understood, proportionate and meaningful." ACF's Stronger Foundations initiative reinforces this, noting that more foundations are now offering funding specifically for organisational and administrative expenses, recognising the hidden cost that disproportionate reporting imposes.

For practical approaches to reducing monitoring burden, see our guide on how to collect impact data without overburdening charities.

10. Publish your turnaround time and track it

Applicants consistently report that uncertainty about timelines is one of the most frustrating aspects of the grants process. When a charity submits an application and hears nothing for months, it cannot plan. It may turn down other opportunities, delay hiring, or hold back on programme delivery while waiting for a decision.

Many UK funders aim for a 10 to 12-week turnaround from submission to decision, though actual timelines vary widely: the Robertson Trust, for example, targets 10 to 12 weeks, while the Joseph Rowntree Reform Trust aims for two weeks on grants under 10,000 pounds. Many funders, however, do not publish any timeline at all, leaving applicants to guess.

Publishing a target turnaround time is a simple, zero-cost action that immediately improves applicant experience. Tracking actual performance against that target is what turns it into a driver of process improvement. If your average turnaround drifts from 10 weeks to 14, that is a signal to investigate bottlenecks, whether in assessment, panel scheduling, or approval workflows.

Plinth's grant management dashboard tracks application volumes, processing times, and decision timelines in real time, giving programme managers visibility over bottlenecks and enabling data-driven improvements to turnaround times.

How to prioritise: where to start this quarter

Not all ten changes need to happen at once. The table below suggests a phased approach based on effort and impact:

| Phase | Actions | Typical timeframe |
| --- | --- | --- |
| This month | Publish turnaround time, audit form against criteria, add guidance with examples | 1-2 weeks |
| This quarter | Add eligibility screening, introduce save-and-return, create conflict of interest register | 4-8 weeks |
| Next quarter | Run calibration session, implement question-to-criterion mapping, build reporting templates, deploy AI feedback drafting | 8-12 weeks |

The first phase requires no technology and no budget. It requires only a decision to act. The second phase may involve updating your application infrastructure or adopting a grants management platform. The third phase builds on the earlier changes and introduces practices that benefit from some organisational learning.

For a broader view of modernising your grants programme, see our guide on how to run a modern grants programme.

Measuring the impact of your changes

Implementing changes without measuring their effect risks both losing momentum and missing problems. Five metrics are sufficient to track whether your quick wins are delivering results:

Ineligible application rate. The percentage of applications that fail basic eligibility checks. If you introduce eligibility screening (win number one), this should drop sharply. A baseline of 20-33 per cent (per DSC research) falling to under 5 per cent is a realistic target.

Average applicant time. How long applicants spend completing your form. This is harder to measure directly, but applicant surveys or form analytics can provide proxy data. A reduction of 25-30 per cent following a form audit is common.

Assessor agreement rate. The degree of consistency between reviewers scoring the same application. Post-calibration, inter-rater agreement should increase measurably.

Turnaround time. Days from submission to decision notification. Track the median rather than the mean, as outliers can skew averages.

Applicant satisfaction. A short post-decision survey, even three or four questions, provides direct feedback on whether your changes are improving the experience. IVAR recommends annual applicant experience surveys as standard practice.
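
Of these, turnaround time is the easiest to compute directly from submission and decision dates, and a small example makes the median-versus-mean point concrete (all dates invented):

```python
# Sketch of turnaround tracking: the median resists distortion from the
# occasional application stuck in approvals. All dates are invented.
from datetime import date
from statistics import mean, median

decisions = [
    (date(2025, 1, 6), date(2025, 3, 10)),  # (submitted, decided)
    (date(2025, 1, 8), date(2025, 3, 12)),
    (date(2025, 1, 9), date(2025, 7, 1)),   # outlier: stalled in approvals
]

days = [(decided - submitted).days for submitted, decided in decisions]
print(f"Mean: {mean(days):.0f} days, median: {median(days):.0f} days")
```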

Plinth provides dashboards that track several of these metrics automatically, including application volumes, processing times, and rejection analytics with AI-generated insights into common rejection patterns.

FAQs

How quickly can we see results from these changes?

Most funders see measurable improvements within a single funding round. Eligibility screening reduces ineligible applications immediately. Form improvements and published guidance typically improve submission quality from the next round onwards. Calibration sessions improve scoring consistency from the first use.

Do we need new software to implement these quick wins?

No. Several of the ten changes, including the form audit, guidance documents, calibration sessions, conflict registers, published timelines, and reporting templates, require no technology at all. However, a purpose-built grants management platform like Plinth, which offers a free tier, makes it substantially easier to implement and sustain changes like eligibility screening, assessor management, and performance tracking.

How do we get board or trustee buy-in for process changes?

Frame changes in terms of governance improvement and risk reduction, not just efficiency. Systematic conflict logging, for example, directly addresses Charity Commission expectations. Published turnaround times demonstrate accountability. Reduced ineligible application rates show that your published criteria are working. Board members typically respond well to concrete metrics showing improvement.

Will eligibility screening put off good applicants?

Well-designed eligibility screening does the opposite. It protects good applicants from wasting time on funds they cannot access and directs them toward those they can. The key is clear, factual questions rather than subjective ones, and transparent communication about what the screening is for.

Can AI-drafted feedback replace human judgement in grant decisions?

No, and it should not. AI feedback drafting is a productivity tool, not a decision-making tool. The AI generates a first draft based on human assessors' scores and notes. A programme officer always reviews, edits, and approves the feedback before it is sent. The human remains in control of the decision and the communication.

How do we handle the transition if we currently have no eligibility screening?

Start with the next funding round rather than retrofitting the current one. Design your screening questions based on the criteria that generate the most ineligible applications (your rejection data will tell you which these are). Communicate the change clearly on your website and in your communications. Most applicants welcome the change because it saves them time.

What if our application form is mandated by our board or a statutory requirement?

Even mandated forms can usually be improved. Distinguish between what is genuinely required and what has become embedded through habit. If certain questions are non-negotiable, focus on the other nine wins. Guidance with examples, calibration sessions, conflict logging, and proportionate reporting are all independent of your application form design.

How does proportionate reporting work when we need to aggregate data across different grant sizes?

Design your reporting tiers so that the core outcome measures are consistent across all levels. Smaller grants report against the same outcomes but with less detail and fewer supporting documents. This preserves your ability to aggregate while dramatically reducing the burden on smaller grantees. Tools like Plinth allow funders to configure different monitoring forms for different grant tiers while still rolling up to a unified dashboard.

Last updated: February 2026