How to Give Better Feedback to Grant Applicants
Practical guide for funders on providing clear, constructive feedback to grant applicants — including those declined — to build trust and improve applications.
Most funders know they should give feedback. Few do it well, and many do not do it at all. With the average charity grant success rate in the UK sitting around 35.6% (down from 40% in 2020), the majority of applicants receive a decline. For major foundations like Esmée Fairbairn Foundation, the acceptance rate can be as low as 6%. That means the vast majority of the effort charities put into applications ends in a "no" — and what happens next shapes whether that effort was wasted or whether the applicant learns, improves, and comes back stronger.
The Foundation Practice Rating, which assesses 100 UK foundations annually on diversity, accountability, and transparency, found that only around a third of assessed foundations published any analysis of their own effectiveness. Few offered obvious complaint mechanisms, and some provided no email or phone number at all. Against that backdrop, the gap between what funders promise and what applicants experience is significant.
Feedback is not a courtesy. It is a core part of accountable grantmaking. It respects the time applicants invest, helps the sector improve, and — when done well — reduces the volume of poorly targeted applications that drain funder resources in the first place.
This guide covers what good applicant feedback looks like, how to structure it for different outcomes, and how to deliver it efficiently at scale without adding hours to every funding round.
What you will learn:
- Why feedback matters for funders, not just applicants
- The difference between eligibility fails and competitive declines
- How to structure feedback for different decision types
- What tone and directness level to use and when
- How AI can personalise feedback at scale
- How to build reusable feedback templates
Who this is for: Programme officers, grants managers, trust administrators, foundation directors, and panel chairs responsible for communicating funding decisions. Also relevant for charity infrastructure bodies working on funder practice standards.
Why Does Applicant Feedback Matter for Funders?
The case for feedback is often framed around applicant benefit, but the strongest argument is the funder's own interest. Good feedback reduces repeat enquiries, prevents poorly targeted reapplications, and builds the kind of trust that makes applicants more candid in future submissions.
IVAR's Open and Trusting initiative, which now has over 170 UK funders signed up, includes feedback as one of its eight core commitments. Participating funders commit to giving feedback, analysing and publishing success rates, and sharing reasons for rejection. The initiative grew from promises many funders made during the COVID-19 crisis and has become a benchmark for good practice across the sector.
GrantAdvisor UK, an anonymous review platform for grantseekers run by CAST, has collected hundreds of reviews from charities about their experiences with funders. An analysis of 500 reviews identified feedback — or the lack of it — as one of the most common themes. Grantseekers consistently praised funders who were responsive and transparent, and criticised those whose processes felt "faceless." Many reviews noted that a lack of feedback after rejection left organisations unable to improve.
Feedback to unsuccessful applicants is also a component of the Foundation Practice Rating's accountability assessment. In the 2024 rating, only 11 foundations achieved an A grade overall, while 14 received a D. Foundations that handle feedback well — like County Durham Community Foundation, whose appeals process was highlighted as an example of excellent practice — score more highly.
Good feedback also protects funders. When applicants understand why they were declined, they are less likely to submit formal complaints, request panel reconsideration, or make freedom of information requests seeking decision rationale. Clarity at the point of decision prevents problems downstream.
What Is the Difference Between an Eligibility Fail and a Competitive Decline?
Not all rejections are the same, and your feedback should reflect that. The single most important distinction is between applications that failed on eligibility and those that were competitive but unsuccessful.
An eligibility fail means the applicant did not meet your published criteria. Perhaps the organisation was not a registered charity, or the project fell outside your geographic or thematic scope, or the request exceeded your grant range. Feedback here should be brief, factual, and specific. Point to the criterion that was not met. There is no need to assess the quality of the application itself, because it never reached that stage.
A competitive decline means the application was eligible but was not selected. This is where meaningful feedback adds the most value, because the applicant did everything right in terms of eligibility but needs to understand what would have made their application stronger relative to the field.
| Aspect | Eligibility Fail | Competitive Decline |
|---|---|---|
| Reason | Did not meet published criteria | Met criteria but not selected |
| Feedback depth | Brief, factual | Detailed, constructive |
| Reference point | Specific criterion not met | Scoring, panel assessment |
| Tone | Neutral, matter-of-fact | Supportive, encouraging |
| Suggested action | Check criteria before reapplying | Strengthen specific areas |
| Typical length | 2-3 sentences | 2-3 paragraphs |
| Reapplication guidance | Only if circumstances change | Encouraged with improvements |
Making this distinction explicit in your communications helps applicants understand where they stand. An organisation declined on eligibility does not need to rewrite its project plan. An organisation declined competitively does not need to question whether it qualifies.
How Should You Structure Feedback for Different Outcomes?
The structure of your feedback should vary depending on the decision. A one-size-fits-all template is better than nothing, but tailored structures produce better results for both parties.
For approved applications, feedback should confirm what was strong about the application, set expectations for reporting and monitoring, and outline next steps. Even successful applicants benefit from knowing what particularly impressed the panel, because it helps them understand what funders value for future rounds. If there are conditions attached to the grant — such as match funding requirements or specific deliverables — the approval feedback is the right place to state them clearly.
For competitive declines, structure your feedback around three elements. First, acknowledge what was strong. Every competitive application has strengths, and naming them shows you read the submission carefully. Second, explain the gaps. Be specific: "The budget lacked a breakdown of staff costs" is actionable; "The budget needed more detail" is not. Third, provide direction. If the applicant could reapply in a future round, say so. If the project would be better suited to a different funder, point them there.
For eligibility fails, keep it short and direct. Name the criterion, explain why the application did not meet it, and offer guidance on where to check eligibility before applying. If your eligibility criteria are complex or frequently misunderstood, this is a signal to revisit how you write and publish your grant criteria.
For requests for clarification, where an application needs more information before a decision can be made, be precise about what you need and by when. Vague requests for "more detail" waste everyone's time. Specify the question, the format you want the answer in, and the deadline.
What Tone Should Applicant Feedback Take?
Tone is where most feedback goes wrong. Funders either default to corporate boilerplate that communicates nothing, or they overcorrect into language so gentle that the applicant cannot identify what to change.
The right tone depends on the decision and the context. For approvals, warmth and celebration are appropriate. For competitive declines, the tone should be honest and supportive — direct enough to be useful, respectful enough to maintain the relationship. For eligibility fails, a neutral and factual tone works best.
One useful framework is to think in terms of directness levels. At one end, very gentle feedback focuses on positives and frames concerns as opportunities. At the other end, very direct feedback provides frank assessment of shortcomings and may discourage reapplication where the fit is genuinely poor. Most competitive declines sit in the middle: honest but supportive, naming specific issues while encouraging future applications.
There are a few common mistakes to avoid. First, do not use the phrase "unfortunately, on this occasion" without any follow-up explanation. It is the most recognised non-answer in the sector and signals that no real feedback is being given. Second, do not hedge so much that the applicant cannot tell whether the issue was minor or fundamental. Third, do not promise reconsideration unless you mean it.
Charities invest significant time in each grant application. According to research on the sector, a charity managing five concurrent grants may spend weeks of staff time annually on funder reporting alone. Respecting that investment with honest, clear communication is not just good practice — it is basic professional courtesy. As IVAR's Open and Trusting guidance notes, charities need funders to be honest in their feedback, even when the news is unwelcome.
How Can You Deliver Personalised Feedback at Scale?
The biggest practical barrier to good feedback is volume. A funder receiving hundreds of applications per round cannot write bespoke letters to every applicant. But generic feedback — the same three sentences sent to everyone — is barely better than no feedback at all.
The solution is a layered approach that combines templates, scoring data, and technology.
Start with structured templates. Create standard feedback frameworks for each decision type — approval, competitive decline, eligibility fail, and clarification. These provide the skeleton. The key is to design them with merge points where specific information can be inserted: the applicant's name, their project title, the specific criteria they did or did not meet, and particular strengths or weaknesses from the assessment.
Use scoring data to personalise. If your assessment process involves scoring against criteria — and most do — the scores themselves contain the raw material for feedback. An application that scored highly on need but poorly on deliverability needs different feedback from one that scored well overall but was outcompeted. Connecting your scoring to your feedback process is the single most impactful thing you can do.
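To make the scoring-to-feedback link concrete, here is a minimal Python sketch that maps per-criterion assessment scores to draft feedback sentences. The criteria names, score bands, and wording are illustrative assumptions, not any particular funder's scheme — a real system would draw these from your own scoring framework.

```python
# Sketch: turning assessment scores into draft feedback fragments.
# Criteria, thresholds, and phrasing are all illustrative.

FRAGMENTS = {
    "need": {
        "high": "The case for need was clearly evidenced.",
        "low": "The application would benefit from stronger evidence of need.",
    },
    "deliverability": {
        "high": "The delivery plan was realistic and well sequenced.",
        "low": "The panel was not convinced the timeline was achievable.",
    },
    "budget": {
        "high": "The budget was clear and proportionate.",
        "low": "The budget lacked a breakdown of major cost lines.",
    },
}

def feedback_fragments(scores: dict, threshold: int = 3) -> list:
    """Map per-criterion scores (e.g. 1-5) to draft feedback sentences."""
    lines = []
    for criterion, score in scores.items():
        band = "high" if score >= threshold else "low"
        lines.append(FRAGMENTS[criterion][band])
    return lines

# Example: strong on need and budget, weak on deliverability.
draft = feedback_fragments({"need": 5, "deliverability": 2, "budget": 4})
```

A human reviewer would then edit these fragments into a letter; the point is that the scores, not a blank page, provide the starting material.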
Consider AI-assisted drafting. AI tools can generate personalised feedback by combining the application content, the assessment scores, and your feedback templates. The AI drafts a response; a human reviewer edits and approves it. This approach maintains the personalisation that applicants value while reducing the time commitment from hours to minutes.
Tools like Plinth take this approach. Plinth's AI feedback feature pulls from the application content and scoring data provided by human assessors, generates a tailored draft, and lets the reviewer adjust the directness level and edit the text before sending. The AI bases its feedback primarily on the human assessment scores rather than making independent judgements, keeping the reviewer firmly in control of the decision rationale. Plinth also supports configurable rejection templates — clickable presets that instruct the AI on what to emphasise in each rejection letter, such as "budget concerns" or "outside geographic scope" — so that common decline reasons can be communicated consistently without drafting from scratch each time.
What Should You Include in Rejection Feedback?
Rejection feedback is where the stakes are highest and the execution is most often poor. The goal is not to soften the blow but to help the applicant understand the decision and, where appropriate, improve.
Every rejection message should include four elements:
- A clear statement of the outcome. Do not bury the decision in the third paragraph. Lead with it.
- The reason for the decision. For eligibility fails, name the criterion. For competitive declines, summarise the assessment. Be specific enough that the applicant can act on it.
- What was strong. Even in a rejection, acknowledging strengths shows that the application was read and considered. This is not about cushioning — it is about accuracy.
- Next steps or alternatives. Can they reapply? Should they consider a different programme? Is there another funder whose priorities are a better match? Point them somewhere useful.
What you should not include is equally important. Avoid sharing verbatim panel comments, which may contain informal language or shorthand that reads differently out of context. Provide a concise, edited summary of the panel's reasoning instead. Avoid language that implies the decision was close if it was not — false encouragement wastes the applicant's time and your own when they reapply with minimal changes.
Research on how charities experience the application process consistently shows that applicants value honesty over comfort. A clear "your budget was not realistic for the outcomes described" is more useful than "we had a very strong field this year." The first gives the applicant something to fix. The second gives them nothing.
How Do Feedback Templates and Rejection Reasons Work in Practice?
Templates are essential for consistency and efficiency, but they need to be designed carefully to avoid sounding generic. The best feedback templates are frameworks, not scripts.
A well-designed template system has three layers:
General guidance sets the overall tone and structure for your feedback. This includes standard opening and closing language, signposting to appeals processes or alternative funds, and any organisational messaging you want to include consistently. This layer changes infrequently — perhaps once per year.
Decision-specific templates provide the structure for each outcome type. Your approval template might include sections for strengths, conditions, and next steps. Your rejection template might include sections for the decision, the reason, the strengths, and alternatives. These templates ensure nothing important is missed, regardless of who drafts the response.
Rejection reason presets address the most common reasons for decline. If 30% of your rejections are due to budget issues and another 20% are geographic scope, having pre-built instructions for these scenarios means you are not reinventing the wheel for each application. In Plinth, these presets appear as clickable buttons — a reviewer selects "Budget concerns" or "Outside geographic scope," and the AI generates feedback following those specific instructions while incorporating details from the individual application.
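As an illustration of the three layers working together, the sketch below assembles a decline letter from general closing language, a decision-specific skeleton, and a rejection-reason preset. The template text, field names, and preset keys are invented for illustration and do not reflect Plinth's actual configuration.

```python
# Sketch of the three-layer template idea: general guidance,
# a decision-specific skeleton, and rejection-reason presets.
# All wording and keys are illustrative.

GENERAL_CLOSING = "If you have questions about this decision, please contact us."

DECLINE_TEMPLATE = (
    "Dear {org},\n\n"
    "We are sorry to let you know that your application '{project}' "
    "was not successful.\n\n"
    "{reason}\n\n"
    "{closing}"
)

REASON_PRESETS = {
    "budget": "The budget lacked sufficient detail for the panel "
              "to assess value for money.",
    "geography": "The project falls outside our funded area.",
}

def draft_decline(org: str, project: str, preset: str) -> str:
    """Assemble a decline letter from the layered templates."""
    return DECLINE_TEMPLATE.format(
        org=org,
        project=project,
        reason=REASON_PRESETS[preset],
        closing=GENERAL_CLOSING,
    )

letter = draft_decline("Riverside Trust", "Youth Mentoring", "geography")
```

The design choice here is that only the `reason` layer varies per application; the skeleton and closing stay stable, which is what keeps the output consistent across reviewers.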
The advantage of this layered approach is that it scales. A programme officer assessing 50 applications can move through the feedback stage efficiently while still producing communications that feel specific and considered. The template handles the structure; the scoring data and application content handle the personalisation.
Over time, your templates also become a source of insight. If you notice that the same rejection reasons come up repeatedly, that is a signal to review your eligibility criteria, your application guidance, or both. Tracking rejection themes across rounds helps you identify where applicants are consistently misunderstanding what you fund, which is an opportunity to improve your communications upstream rather than sending the same rejection feedback downstream.
When Should You Send Feedback, and Through What Channel?
Timing matters more than most funders realise. Feedback sent six months after the decision has far less value than feedback sent within two weeks. The application is still fresh in the applicant's mind, they may still have time to redirect their project to another funder, and the promptness itself communicates respect.
IVAR's Open and Trusting commitments include working at a pace that meets the needs of applicants, publishing and sticking to timetables, and making decisions as quickly as possible. Feedback is part of that timetable. If you tell applicants they will hear within eight weeks, the feedback should arrive within eight weeks — not the decision alone.
The channel should match the decision. Email is appropriate for most outcomes and creates a written record that applicants can refer back to. For significant grants that were declined competitively, a phone call followed by written feedback can be more appropriate, particularly if the applicant has an existing relationship with the funder. Avoid communicating decisions solely through online portals that require login — while portals are useful for record-keeping, the notification itself should arrive in the applicant's inbox.
For approval feedback, consider combining the decision communication with the practical next steps. If you use grant agreements managed through a platform, the feedback can be sent alongside the agreement link, keeping the process streamlined. Plinth supports this by allowing funders to send approval feedback together with a digital grant agreement, or to send it separately via email depending on their workflow.
Some funders also offer a feedback conversation — an optional follow-up call where the applicant can ask questions about the decision. This is particularly valuable for competitive declines of strong applications, where the organisation is likely to reapply. Even a 15-minute call can clarify points that would generate multiple email exchanges and demonstrates a level of engagement that builds transparency into decisions.
How Can You Use Feedback Data to Improve Your Own Grantmaking?
Feedback is typically framed as something funders give to applicants, but the most effective feedback loops run in both directions. The data generated by your feedback process — the patterns in rejection reasons, the questions applicants ask in response, the themes that emerge across rounds — is a rich source of insight for improving your own practice.
Start by tracking rejection reasons systematically. If you categorise every decline — eligibility, budget, deliverability, geographic scope, strategic fit, evidence of need — you build a dataset that reveals where your application process is failing to filter early enough or where your published guidance is unclear. If 25% of applications are declined for geographic scope, your eligibility screening or pre-application guidance needs work.
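A minimal sketch of that tracking, assuming each decline is logged with a single category; the category names and the 25% flag threshold are illustrative choices, not a recommended standard.

```python
# Sketch: summarising rejection reasons across a round to spot
# patterns that suggest guidance or screening needs work.
from collections import Counter

def rejection_summary(reasons: list, flag_share: float = 0.25) -> dict:
    """Return each category's share of declines, flagging any above threshold."""
    counts = Counter(reasons)
    total = len(reasons)
    return {
        category: {
            "share": n / total,
            "review_guidance": n / total >= flag_share,
        }
        for category, n in counts.items()
    }

# Illustrative round: 50 declines logged by category.
round_declines = ["geography"] * 25 + ["budget"] * 15 + ["eligibility"] * 10
summary = rejection_summary(round_declines)
```

Here geography accounts for half of all declines, which would flag the published scope guidance for review.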
GrantAdvisor UK provides another feedback channel. The platform allows grantseekers to leave anonymous reviews of funders, and funders are notified and encouraged to respond. The tangible changes that have resulted include City Bridge Foundation improving their learning visits programme, Joffe Charitable Trust reducing their reporting requirements, and Steve Morgan Foundation using anonymous feedback to evaluate their unrestricted funding trial. Engaging with grantseeker feedback — whether through GrantAdvisor or your own post-decision surveys — helps you see your process from the other side.
The Foundation Practice Rating also provides a useful external benchmark. By participating in or reviewing the FPR criteria, funders can identify how their accountability practices — including feedback — compare with peers. In the 2024 assessment of 100 foundations, only 11 received an A grade overall and 41 a B, suggesting broad room for improvement across the sector. Understanding where you sit relative to that benchmark helps prioritise improvements.
Finally, share what you learn. Publishing your success rates, your most common rejection reasons, and the themes from your feedback process is one of the IVAR Open and Trusting commitments. It also helps the wider sector. If applicants can see before they apply that 60% of your rejections are budget-related, they will submit stronger budgets. Transparency upstream reduces burden downstream — a principle explored in more detail in our guide to reducing the burden on grant applicants.
FAQs
Are funders legally required to give feedback to unsuccessful grant applicants?
There is no legal requirement in England and Wales for charitable foundations to provide feedback to unsuccessful applicants. However, it is considered best practice by sector bodies including ACF, IVAR, and the Foundation Practice Rating. Over 170 funders have committed to providing feedback through IVAR's Open and Trusting initiative, and the Foundation Practice Rating includes feedback practices in its accountability assessment of 100 UK foundations.
How long should rejection feedback be?
For eligibility fails, two to three sentences are sufficient — name the criterion, explain the gap, and point to guidance. For competitive declines, two to three short paragraphs are appropriate: a clear statement of the decision, specific reasons drawn from the assessment, acknowledgement of strengths, and guidance on next steps. Anything longer than a page risks going unread.
Should funders share panel comments with applicants?
Not verbatim. Panel discussions often include informal language, shorthand, or speculative comments that read differently in writing. Instead, provide a concise, edited summary of the panel's reasoning. Focus on the criteria-based assessment and the specific factors that influenced the decision, rather than individual panellist opinions.
Can AI-generated feedback sound impersonal?
It can if used carelessly. The key is to use AI as a drafting tool, not a sending tool. AI generates a starting point from the application content and scoring data; a human reviewer then edits the draft to add nuance, adjust tone, and ensure accuracy. In Plinth, reviewers can adjust a directness slider from "very gentle" to "very critical" and edit the generated text before saving or sending. The result is feedback that is both personalised and efficient.
How quickly should feedback be sent after a decision?
Within two weeks of the panel decision is ideal. Feedback sent months later has significantly less value — the applicant may have already moved on, and the delayed response signals that the funder does not prioritise communication. If your assessment process means decisions take time, set clear expectations in your application guidance about when applicants will hear.
Should funders encourage unsuccessful applicants to reapply?
Only if you genuinely mean it. Encouraging reapplication from an applicant whose project is a poor strategic fit wastes their time and increases your workload. For competitive declines where the application was strong but the field was stronger, encouragement is appropriate and should be specific: "We would welcome a reapplication that addresses the budget concerns we have outlined." For eligibility fails, reapplication should only be encouraged if the eligibility issue is resolvable.
How do you handle feedback when a funder receives hundreds of applications?
Volume is the most common reason funders cite for not giving feedback, but it is a workflow problem rather than an inherent impossibility. Structured templates, scoring-linked feedback, and AI drafting tools together reduce the per-application time from 20-30 minutes to 5-10 minutes. At 200 applications per round, that is the difference between 100 hours of work and 25 hours. Tools like Plinth are designed specifically for this: generating personalised feedback from scoring data and application content, with configurable templates for common rejection reasons.
What is the best way to handle applicants who disagree with the feedback?
Have a clear, published process. Acknowledge the response, explain that panel decisions are final (if they are), and offer a brief follow-up conversation if appropriate. Do not re-open the assessment process unless there is evidence of a procedural error. Most disagreements arise from unclear feedback rather than genuine disputes about the decision — which is itself an argument for giving better feedback in the first place.
Recommended Next Pages
- How to Build Transparency into Grant Decisions — Communicating choices clearly with published criteria, feedback standards, and audit trails.
- How Charities Experience the Application Process — Insights from the applicant side on what helps and what creates friction.
- How to Write Clear Grant Criteria — Making eligibility and assessment criteria easy to understand before applicants begin.
- Reducing the Burden on Grant Applicants — How to streamline applications and reporting while maintaining accountability.
- Why Feedback Builds Better Funders — The broader case for feedback as a tool for funder learning and sector improvement.
Last updated: February 2026