Why Your Grant Applications Keep Getting Rejected

The real reasons charity grant applications fail and how to fix them. Evidence gaps, weak budgets, and funder misalignment — with practical UK solutions.

By Plinth Team

Most charity grant applications are rejected. That is not pessimism — it is arithmetic. The National Lottery Community Fund's published data shows that competition for grants is intense. Many trust and foundation schemes are highly competitive: the Esmée Fairbairn Foundation's 2024 overview showed that only 7% of expressions of interest were invited to submit a full proposal, ultimately awarding 241 grants totalling £45.1 million. If your charity sends 20 applications and hears back positively from three, you are performing about average.

But "average" is not inevitable. The same charities appear on funded lists repeatedly, while others submit application after application without success. The difference is rarely about the quality of the work — it is about how that work is presented. Specifically, it is about evidence. Funders consistently cite insufficient evidence of impact as one of their primary reasons for rejection. Not bad projects. Not bad writing. Weak evidence.

This guide examines the most common reasons grant applications fail and shows you how to fix each one. The thread running through every section is the same: if you are collecting data properly — tracking outcomes, recording case studies, measuring what changes — your applications are automatically stronger because you can cite real numbers, real stories, and real results. Tools like Plinth make that data collection straightforward, which makes grant writing dramatically easier.

What you will learn

  • The six most common reasons grant applications get rejected
  • How to diagnose which mistakes your charity is making
  • Practical steps to strengthen every section of your next application

Who this is for

  • Charity fundraisers and managers frustrated by repeated rejections
  • Small charities submitting their first grant applications
  • Anyone who suspects their applications are good enough but cannot work out why they keep failing

Are You Applying to the Wrong Funders?

The single most common reason for rejection is also the simplest: the application does not match what the funder is looking for. This sounds obvious, but the scale of the problem suggests otherwise. A significant proportion of grant applications are rejected at the initial sift stage because they do not meet the funder's basic eligibility criteria or strategic priorities.

This happens for several reasons. Charities apply to every funder they find, hoping that volume will compensate for fit. They read eligibility criteria too loosely — "community development" sounds like it could cover their after-school club, so they apply. Or they bend their project description to match the funder's language, creating a disconnect between what they actually do and what the application promises.

The fix is disciplined research before you write a word. Read the funder's strategy document, not just their eligibility page. Look at who they have funded recently — most publish grant lists — and check whether organisations like yours appear. If the funder's recent awards are all to housing associations and you run a youth music project, that is a signal regardless of what the eligibility criteria technically say.

Charities that conduct structured funder research before applying consistently report significantly higher success rates than those that apply broadly. The time spent researching is almost always better invested than the time spent writing an application to the wrong funder.

Is Your Evidence of Impact Strong Enough?

This is the heart of the problem. Funders are not short of good projects to fund — they are short of projects that can prove they work. The difference between a funded application and a rejected one frequently comes down to a single question: can this charity demonstrate that its approach actually delivers the outcomes it claims?

Weak evidence looks like this: "Our participants report feeling more confident." "We believe our programme reduces isolation." "Feedback from service users has been overwhelmingly positive." These statements might all be true, but they give a funder nothing to assess. How many participants? How is confidence measured? What does "overwhelmingly positive" mean in numerical terms?

Strong evidence looks like this: "Of the 87 participants who completed our 12-week programme, 74 (85%) showed a measurable improvement in confidence as assessed by the Warwick-Edinburgh Mental Wellbeing Scale. Average scores increased from 42 to 51, representing a clinically significant change." That is the same claim, backed by data.
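
The arithmetic behind a claim like that is easy to produce once pre/post scores are recorded systematically. Here is a minimal sketch in Python, using made-up scores and a hypothetical improvement threshold — it illustrates the calculation, not Plinth's actual tooling:

```python
# Hypothetical pre/post wellbeing scores for one cohort:
# each tuple is (score at entry, score at programme end).
cohort = [(40, 52), (45, 50), (38, 47), (44, 44), (42, 55)]

def summarise_outcomes(scores, min_gain=3):
    """Turn raw pre/post scores into the headline figures funders ask for.

    min_gain is the score increase counted as a meaningful improvement;
    in practice you would justify it against the instrument's guidance.
    """
    completed = len(scores)
    improved = sum(1 for pre, post in scores if post - pre >= min_gain)
    avg_pre = sum(pre for pre, _ in scores) / completed
    avg_post = sum(post for _, post in scores) / completed
    return {
        "completed": completed,
        "improved": improved,
        "pct_improved": round(100 * improved / completed),
        "avg_change_pct": round(100 * (avg_post - avg_pre) / avg_pre),
    }

summary = summarise_outcomes(cohort)
print(f"Of {summary['completed']} participants, {summary['improved']} "
      f"({summary['pct_improved']}%) showed a measurable improvement; "
      f"average scores rose by {summary['avg_change_pct']}%.")
```

Run the same calculation on next year's cohort and you have a comparable, defensible statistic rather than a fresh estimate.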

Research by New Philanthropy Capital shows that charities with established outcome measurement frameworks are significantly more likely to secure repeat funding than those without. The evidence does the work.

The practical problem is that most small charities are not set up to collect this kind of data systematically. They rely on end-of-project surveys, anecdotal feedback, and case studies written from memory weeks after the event. This is where tools matter. Plinth's survey tools and case note features are designed to capture outcome data as part of everyday service delivery — not as a separate reporting exercise that staff do when a funder deadline approaches.

| Evidence Approach | Funder Confidence Level | Effort Required | Example |
|---|---|---|---|
| Anecdotal ("participants said they enjoyed it") | Low | Low | "People told us they liked the sessions" |
| Output data only (numbers served) | Moderate | Low–Medium | "We supported 142 beneficiaries" |
| Pre/post outcome measurement | High | Medium | "Self-reported wellbeing improved by 21%" |
| Validated tools with comparison data | Very High | Medium–High | "Scores improved by 1.2 SD vs. control group" |
| Longitudinal tracking with case studies | Very High | High (or low with the right tools) | "85% sustained improvements at 6-month follow-up" |

The bottom two rows look expensive and time-consuming. With the right digital infrastructure, they do not have to be. Charities using Plinth to track outcomes longitudinally are generating this level of evidence as a by-product of normal operations, not as a special project.

Are Your Outcomes Vague or Unmeasurable?

Related to evidence but distinct from it: many applications are rejected because the outcomes they propose cannot meaningfully be measured. Funders need to know what will change, for whom, by how much, and how you will know. Applications that promise to "raise awareness," "build community," or "improve wellbeing" without specifying what those terms mean in measurable terms are flagging themselves for rejection.

The Charity Commission's 2025 guidance on outcomes-based reporting emphasised that good outcomes are specific, measurable, and time-bound. "Improve digital literacy among over-65s" is not an outcome — it is an aspiration. "Increase the proportion of over-65s in our programme who can independently use email from 30% to 70% within six months" is an outcome.

The challenge for many charities is not just collecting data, but ensuring that learning and evaluation genuinely shape decision-making — a persistent gap across the sector.

The solution is to define your outcomes before you design your programme, not after. Build measurement into your project from the start. Use validated survey tools where possible — the Warwick-Edinburgh Mental Wellbeing Scale, the Outcomes Star, the Rickter Scale — because funders trust established instruments more than bespoke questionnaires.

Platforms like Plinth make this easier by embedding outcome measurement into everyday workflows. Instead of running a special evaluation exercise at the end of a programme, you collect data at each interaction — a quick survey after each session, a check-in at regular intervals, an automated follow-up at three and six months. By the time you write your next grant application, you have a dataset of real outcomes rather than a collection of hunches.

Is Your Budget Realistic and Justified?

Budget problems sink more applications than most fundraisers realise. Unrealistic or poorly justified budgets are one of the most common reasons for rejection — typically the third most common reason after poor funder fit and weak evidence.

The most common budget mistakes fall into predictable patterns. The budget is too round — every line item is a neat multiple of £1,000, suggesting the figures were estimated rather than calculated. Full cost recovery is absent — the budget covers direct project costs but ignores the proportion of rent, utilities, management time, and insurance that the project will consume. Or the budget is simply too ambitious — the charity is asking for £80,000 to achieve outcomes that similar programmes deliver for £30,000.

Funders are experienced at reading budgets. They have seen thousands and they know what things cost. A youth worker salary listed at £22,000 when the going rate is £28,000 raises questions about whether you will actually be able to recruit. A training budget of £500 for 100 participants suggests sessions that are too cheap to be effective. These details matter.

The fix is straightforward but requires discipline. Cost every line item from actual quotes, salary benchmarks, and supplier rates. Include full cost recovery — the Charity Finance Group's full cost recovery toolkit provides a standard methodology. And explain your calculations. A line item that says "Venue hire: £3,600 (£150/session x 24 sessions)" is far more convincing than "Venue hire: £4,000."

If your charity uses Plinth to manage programme delivery, your actual cost data — staff time per session, materials used, venue costs — is already recorded. Pulling real cost data into your budget means your figures are defensible because they are based on what you actually spend, not what you think sounds reasonable.
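
The same principle applies to the budget itself: derive every figure from a stated calculation rather than a round guess. A minimal sketch, with hypothetical line items and an illustrative 15% overhead apportionment (your actual full cost recovery rate would come from a methodology such as the Charity Finance Group's toolkit):

```python
# Each line item carries its own calculation, so the budget is defensible.
# All figures and the 15% overhead share are hypothetical examples.
line_items = [
    ("Youth worker (0.5 FTE)", 28000 * 0.5),  # benchmarked full-time salary
    ("Venue hire", 150 * 24),                 # £150/session x 24 sessions
    ("Materials", 12.50 * 100),               # £12.50 per participant x 100
]

direct_costs = sum(cost for _, cost in line_items)
overhead_rate = 0.15  # project's share of rent, utilities, management time
full_cost = direct_costs * (1 + overhead_rate)

for name, cost in line_items:
    print(f"{name}: £{cost:,.0f}")
print(f"Direct costs: £{direct_costs:,.0f}")
print(f"Total with full cost recovery (15%): £{full_cost:,.0f}")
```

A budget built this way answers the assessor's "how did you get this number?" before it is asked.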

Does Your Application Show Organisational Credibility?

Funders are not just funding projects — they are funding organisations. A brilliant project proposal from a charity with no governance, no financial controls, and no track record is a high-risk investment. Many funders conduct some form of due diligence on the applicant organisation before assessing the project itself.

Small charities often underestimate how much this matters. They focus all their energy on the project description and leave the "about your organisation" section as an afterthought — a paragraph of generic text copied from their website. But this section is where funders assess risk.

Organisational credibility includes several elements. Governance: do you have an active, skilled board? Financial management: do your accounts show responsible stewardship? Track record: have you delivered similar projects before, and what were the results? Partnerships: are you working with other organisations, or operating in isolation? Safeguarding: do you have appropriate policies and training?

The charities that perform best here are the ones that can point to documented evidence of their track record. Not "we have been working with young people for 10 years" but "over the past three years, we have delivered 12 programmes reaching 340 young people, with an average attendance rate of 78% and measurable wellbeing improvements in 82% of participants." That specificity comes from having data.

Plinth's impact reporting generates exactly this kind of organisational track record data. Because outcomes, attendance, and programme data are recorded continuously, you can produce a credibility summary at any point — not just when a funder asks for one. This is particularly valuable for smaller charities applying to new funders who have no prior relationship with the organisation.

Track record is not about how long you have existed. It is about whether you can demonstrate that your approach works and that you can deliver what you promise.

Are You Telling a Story or Presenting a Case?

There is an important distinction between storytelling and evidence. Many grant writing guides encourage charities to "tell a compelling story" — and this advice, while well-intentioned, leads to applications that read like appeals rather than proposals. Funders are not donors being asked to give emotionally. They are investors assessing risk and return.

The best applications do both: they present a rigorous, evidence-based case and illustrate it with human stories. But the evidence comes first. A case study of a beneficiary whose life was transformed by your service is powerful — but only if it sits alongside data showing that this transformation is typical, not exceptional.

The structure that works is: claim, evidence, illustration. "Our mentoring programme improves school attendance (claim). Of our 2024 cohort, 31 of 38 participants showed improved attendance, with an average increase of 1.2 days per week (evidence). Amina, who was attending school two days a week when she joined, is now attending four days a week and has re-engaged with her GCSE coursework (illustration)."

Charities that collect impact data through tools like Plinth can produce this structure naturally. The data provides the evidence. The AI-generated case studies provide the illustration. The fundraiser's job is to assemble them into a coherent narrative — not to generate everything from memory and guesswork.

What Does a Rejection-Proof Application Look Like?

No application is truly rejection-proof — competition and funder budgets are beyond your control. But an application that avoids all six of the mistakes above is significantly more likely to succeed. Here is a summary comparison.

| Element | Weak Application | Strong Application |
|---|---|---|
| Funder fit | Applied broadly, hoping for the best | Researched funder's strategy, recent awards, and priorities |
| Evidence of impact | Anecdotal feedback, vague claims | Verified outcome data with validated measurement tools |
| Outcomes | Aspirational ("raise awareness") | Specific, measurable, time-bound |
| Budget | Round numbers, no justification | Line-by-line costing from real data |
| Organisational credibility | Generic paragraph from website | Documented track record with statistics |
| Narrative | Emotional appeal without data | Evidence-based case illustrated with real stories |

The common thread across every column is data. Strong applications are built on a foundation of systematically collected, well-organised programme data. If you are struggling to collect impact data, that is the first problem to solve — everything else follows from it.

How Can You Improve Your Success Rate Starting Today?

You do not need to overhaul your entire approach overnight. The following steps are ordered by impact and feasibility — start with the first and work down.

1. Audit your last five applications. Look at which ones were funded and which were rejected. Map each against the six mistakes above. Most charities find that two or three of the same issues recur across every rejection.

2. Start tracking outcomes now. Even basic pre/post surveys will dramatically strengthen your next application. Use Plinth's survey tools or a simple paper form — the medium matters less than the habit. The data you collect this quarter will power your applications next quarter.

3. Build a case study pipeline. Aim to capture one beneficiary story per month. Use voice recording and AI transcription to make this as low-effort as possible. Within six months, you will have a library of funder-ready stories.

4. Research before you write. Spend 30 minutes per funder reviewing their strategy, recent awards, and assessment criteria before deciding whether to apply. The Directory of Social Change and 360Giving's GrantNav are invaluable for this.

5. Use AI to draft, not to think. AI grant writing tools are excellent for assembling first drafts from your data. They are not a substitute for understanding what the funder wants. Use AI for the mechanical work — pulling data, structuring sections, formatting responses — and use your own judgement for the strategic decisions.

6. Ask for feedback. Many funders will provide brief feedback on rejected applications if you ask, yet most rejected applicants never do. This is the single most underused resource in charity fundraising.


Frequently Asked Questions

How many grant applications should a small charity submit per year?

There is no universal number, but quality matters more than quantity. The Chartered Institute of Fundraising recommends that small charities focus on 8–12 well-researched, carefully tailored applications rather than 30 generic ones. A higher success rate on 10 well-targeted applications delivers better results with less effort than a low success rate on 30 generic applications.

Should I use a grant writer or do it in-house?

For most small charities, in-house is preferable if you have the data to support it. External grant writers produce better prose, but they rarely have access to the detailed programme data that makes applications genuinely strong. The best approach is often in-house writing supported by AI tools that pull directly from your own data. If you do use an external writer, give them access to your outcome data — not just your project description.

Do funders care if I used AI to write the application?

No. The ACF's 2025 guidance confirmed that funders assess applications on their content, not their production method. What matters is that the information is accurate, the evidence is genuine, and the applicant takes responsibility for everything submitted. AI-assisted applications built from real data are often stronger than manually written ones because they are grounded in verified information rather than approximation.

Why do I keep getting rejected by the same funder?

Repeated rejection from the same funder usually signals a fundamental fit problem rather than a quality problem. Review their recent funded projects on 360Giving's GrantNav. If organisations like yours do not appear, the funder may not be the right target regardless of what their eligibility criteria say. If similar organisations do appear, request feedback on your previous applications to understand what specifically was missing.

How important is the budget section really?

Extremely important. Many applicants treat the budget as an afterthought, but funders read it carefully. A well-justified budget demonstrates that you understand your own costs, have planned the project realistically, and are asking for an appropriate amount. Budget quality is consistently ranked among the top three factors in grant decisions, alongside funder alignment and evidence of impact.

Can new charities with no track record win grants?

Yes, but you need to work harder on the evidence and credibility sections. Emphasise the experience of your team (even if gained at other organisations), any pilot data you have collected, partnerships with established organisations, and your theory of change. Starting to collect outcome data from day one — even from informal activities — gives you something concrete to reference. Some funders, like the Tudor Trust, explicitly prioritise newer organisations. The National Lottery Community Fund's smaller grants programmes are also more accessible to new organisations.

What is the biggest mistake charities make in grant applications?

Starting from a blank page every time. Charities that write each application from scratch, relying on memory for statistics and anecdotes for evidence, are at a systematic disadvantage compared to those that maintain a centralised, up-to-date repository of outcome data, case studies, and programme evidence. Building that repository is the single highest-return investment in your fundraising capacity. Plinth is designed to be exactly that repository.

How long after a rejection should I reapply?

Most funders allow reapplication after 6–12 months. Use the intervening time productively: collect the data you were missing, strengthen the areas flagged in feedback, and build the evidence base that was absent from your previous application. A reapplication that is substantially the same as the original will almost certainly be rejected again. A reapplication with six months of new outcome data and fresh case studies is effectively a new application.


Last updated: February 2026