How to Prove Your Charity's Impact to Funders
A practical guide to measuring and presenting charity impact for UK funders: what to measure, how to collect it, and how to turn data into funding.
Funders want evidence. That much is obvious. But what most charities get wrong is how much evidence funders actually need — and what kind. The Association of Charitable Foundations (ACF) has consistently highlighted that funders cite "lack of clear evidence of need" as one of the most common reasons for rejecting grant applications. Yet funders are not expecting academic-grade evaluation or complex statistical analysis. They want clear, honest, proportionate evidence that your work makes a difference.
The problem is not that charities lack impact. It is that they cannot demonstrate it when asked. Funders increasingly expect to see outcome data in applications, but many UK charities still lack a documented outcomes framework. The gap between what funders ask for and what charities can provide is where funding is lost — not because the work is not good, but because the evidence is scattered, inconsistent, or simply not collected.
This guide is a practical walkthrough of what funders actually want to see, how to measure it without drowning in data, and how to present your impact in a way that builds funder confidence. The charities that do this well are not the ones with the biggest evaluation budgets. They are the ones that build evidence collection into their everyday operations.
What you will learn
- What funders actually expect to see in terms of impact evidence (it is less than you think)
- How to set up a proportionate measurement framework that does not overwhelm your team
- How to turn raw data into compelling evidence — outcome summaries, case studies, and trend analysis
- How centralising your data transforms funder reporting from a crisis into a routine task
Who this is for
- Charity managers and programme leads responsible for demonstrating impact
- Fundraisers who need evidence to strengthen grant applications
- Small and medium charities that feel overwhelmed by impact measurement expectations
- Trustees wanting to understand what good impact reporting looks like
What Do Funders Actually Want to See?
The first thing to understand is that most funders are not expecting a randomised controlled trial. They are expecting clear, honest answers to three basic questions: What did you do? What changed? How do you know?
ACF's Stronger Foundations initiative emphasises that the majority of UK trusts and foundations are looking for proportionate evidence — meaning the depth and rigour of your measurement should match the size and nature of your work (ACF, Stronger Foundations). A £10,000 community project does not need the same evaluation framework as a £2 million national programme.
In practical terms, funders typically want to see five things:
- Outputs — What you delivered. Number of sessions, participants, hours of support, etc.
- Outcomes — What changed. Measurable shifts in the people or communities you serve.
- Beneficiary voice — Direct feedback from the people you support, ideally including qualitative stories.
- Reach and demographics — Who you served, particularly whether you reached the people the funding was intended for.
- Learning — What you discovered, including what did not work and how you adapted.
Notice what is not on this list: sophisticated statistical analysis, external evaluation reports, or academic-quality research. Those are welcome for large programmes, but they are not expected for the vast majority of funded charity work. The National Lottery Community Fund's own guidance states: "We are more interested in what you have learned than in perfect data."
How Do You Choose What to Measure?
Choosing the right metrics is where most charities either over-complicate things or give up entirely. The answer is simpler than you think: measure the change you are trying to create, and nothing more.
Start with your theory of change. If your mentoring programme aims to improve young people's confidence and employability, measure confidence and employability. Do not also measure school attendance, family relationships, community cohesion, and mental wellbeing unless those are genuine programme objectives. According to New Philanthropy Capital (NPC), the most common mistake in impact measurement is trying to measure everything, which leads to data overload, survey fatigue, and ultimately less useful evidence than a focused approach.
A practical framework for most charity programmes:
| Category | What to measure | How to collect it | Example |
|---|---|---|---|
| Outputs | Activities delivered, people reached | Attendance records, case management data | 156 mentoring sessions delivered to 47 young people |
| Short-term outcomes | Immediate changes in knowledge, skills, or attitudes | Pre/post surveys, self-assessment tools | 34% average improvement in self-reported confidence |
| Medium-term outcomes | Behavioural changes or sustained progress | Follow-up surveys, tracking data | 72% of participants moved into employment or education within 6 months |
| Beneficiary experience | Qualitative feedback and stories | Case studies, feedback forms, recorded conversations | Detailed case study showing individual journey |
| Reach | Demographics and target group coverage | Registration data, monitoring forms | 68% of participants from the 20% most deprived areas |
The key insight from organisations like Inspiring Impact and NPC is that three to five well-chosen metrics consistently measured will give you more useful evidence than twenty metrics sporadically collected. Quality and consistency matter far more than comprehensiveness.
Funders would much rather see a charity that has tracked three outcomes really well over two years than one that claims to measure fifteen things but cannot show the data. Depth and consistency of measurement matter far more than breadth.
How Do You Collect Impact Data Without Overwhelming Your Team?
This is the question that keeps charity managers awake at night. You know you need data. You also know your team is already stretched thin. The Charity Digital Skills Report 2025 found that staff capacity remains one of the primary barriers to better digital and data practice across UK charities (Charity Digital Skills Report 2025). The solution is not to add more work — it is to embed data collection into the work you are already doing.
Embed measurement in delivery. The most effective approach is to make data collection part of your programme activities, not a separate task. Registration forms capture demographic data. Session registers capture attendance. A five-minute conversation at the end of a programme captures outcomes. None of these require additional time set aside for "evaluation."
Use technology to reduce manual effort. Paper registers that need to be manually entered into spreadsheets are the single biggest time drain in charity data collection. Plinth's case management tools let you capture attendance digitally — including photographing paper registers and having AI extract the data automatically. What used to take 45 minutes of data entry per session takes seconds.
Collect case studies as you go. The classic charity mistake is realising you need case studies three days before a funder deadline and then scrambling to find a beneficiary willing to share their story. Instead, build case study collection into your regular workflow. Plinth's AI case notes let frontline workers record a brief conversation with a beneficiary — with consent — and automatically generate a structured, anonymised case study. Over a year, you build a library of ready-to-use stories without ever having a dedicated "case study collection day."
Keep surveys short. Survey research consistently shows that completion rates drop significantly when surveys exceed ten questions. A well-designed pre/post survey with five to seven questions will give you robust outcome data. Plinth's survey tools make it straightforward to design, distribute, and analyse short outcome surveys that participants actually complete.
Automate what you can. Outcome summaries, trend charts, and programme-level reports should not require manual calculation. When your data is in a single system, generating a quarterly outcome summary should take minutes, not days.
What Is the Difference Between Outputs, Outcomes, and Impact?
These three terms are used constantly in funder conversations, and charities frequently conflate them. Getting the distinction right matters because funders assess them differently.
Outputs are what you deliver — the activities, sessions, products, and services your charity provides. Examples: 200 counselling sessions delivered, 45 young people enrolled, 12 community events held. Outputs prove you did the work, but they do not prove the work made a difference.
Outcomes are the changes that result from your outputs — the measurable differences in the people or communities you serve. Examples: 78% of counselling clients reported reduced anxiety symptoms, 34% improvement in participant confidence scores, 15 young people secured employment. Outcomes are the core of what funders want to see.
Impact is the broader, longer-term change that your outcomes contribute to — often at a societal level. Examples: reduced youth unemployment in the borough, improved community mental health. Impact is harder to measure directly and is usually attributed rather than proven. Most funders understand this and do not expect charities to prove impact at this level for individual programmes.
The practical implication: focus your measurement effort on outcomes. Track outputs because they are easy to collect and funders expect them. Reference impact in your narrative to show you understand the bigger picture. But outcomes are where your evidence will be won or lost.
The practical consensus across the sector is clear: if you can show that your outputs happened and your outcomes changed, most funders will be satisfied. The leap to proving long-term impact is a bonus, not a requirement.
How Do You Turn Raw Data into Compelling Evidence?
Having the data is only half the job. The other half is presenting it in a way that builds funder confidence. Raw numbers in a spreadsheet do not tell a story. Framed evidence does.
Combine quantitative and qualitative evidence. Numbers show scale and change. Stories show meaning and human experience. The strongest funder evidence uses both together. "47 young people completed our mentoring programme" is an output. "47 young people completed our mentoring programme, with a 34% average improvement in confidence scores — and here is what that meant for Amira, who is now studying engineering at university" is evidence that sticks.
Show trends, not just snapshots. A single year of data is useful. Three years of data showing consistent or improving outcomes is compelling. Funders want to see trajectory — are your results stable, improving, or declining? Trend data from your impact reports demonstrates organisational learning and programme maturity in a way that a single data point cannot.
Be honest about what did not work. Counterintuitively, acknowledging challenges strengthens your credibility. ACF's Stronger Foundations work emphasises that funders value honesty about difficulties more than a flawless success narrative. Showing that you identified a problem, adapted your approach, and improved your outcomes demonstrates exactly the kind of reflective practice funders want to support.
Use comparisons where possible. Your data becomes more meaningful when contextualised. If the national average employment rate for your target demographic is 42%, and your programme achieves 67%, that comparison tells a far stronger story than the number alone. The Office for National Statistics and the Office for Health Improvement and Disparities (which took over Public Health England's health data functions) publish extensive datasets that can provide benchmarks for common charity outcomes.
Tailor evidence to each funder. Different funders care about different things. A health funder wants health outcomes. An employment funder wants progression data. A community funder wants reach and demographic data. When all your data lives in one place, generating tailored evidence summaries for each funder becomes routine rather than a major undertaking. Plinth's impact reporting is designed for exactly this — one dataset, multiple reports, each structured to what the specific funder needs.
What Are the Most Common Impact Measurement Mistakes?
Charities make predictable errors in impact measurement, and most of them are avoidable. Here are the ones that most commonly undermine funder confidence.
Measuring only outputs. "We delivered 300 sessions" tells a funder nothing about whether those sessions achieved anything. The Charity Commission emphasises that effective charities distinguish between activity and achievement — and funders increasingly expect the same distinction.
Collecting data you never use. Many charities collect extensive monitoring data to satisfy a previous funder requirement and then never look at it again. If data is not informing your decisions or your funder reports, stop collecting it and redirect that effort to data you will actually use.
Waiting until reporting time to collect evidence. Retrospective data collection is unreliable and stressful. Beneficiaries have moved on, staff memories are imprecise, and the resulting evidence is weaker than real-time collection. Evaluation Support Scotland recommends real-time data collection as a core principle — evidence gathered at the point of delivery is more reliable and more credible to funders than retrospective accounts (Evaluation Support Scotland).
Inconsistent measurement across years. Changing your metrics, survey questions, or collection methods every year makes trend analysis impossible. Choose your framework, stick with it for at least three years, and resist the temptation to redesign every time a new funder asks a slightly different question.
Not capturing beneficiary voice. Numbers without stories feel sterile. Stories without numbers feel anecdotal. You need both. Beneficiary voice is not optional — funders consistently identify it as among the most persuasive forms of evidence in grant applications and reports.
The charities that impress funders most are not the ones with the most data. They are the ones that can clearly explain what changed and show evidence from the people they served. That combination — clear narrative plus beneficiary-level evidence — is surprisingly rare, and it is what separates the strongest applications from the rest.
How Does Centralised Data Change Funder Reporting?
The single biggest operational change a charity can make to improve its impact evidence is centralising its data. The reason is straightforward: when your data is scattered across spreadsheets, email inboxes, shared drives, and individual staff members' notebooks, producing evidence for a funder becomes an archaeological dig rather than a reporting task.
Consider what happens at a typical charity when a funder report is due. The programme manager emails three colleagues asking for attendance figures. Someone searches their laptop for the survey results from six months ago. Another person tries to remember which beneficiary consented to their story being shared. The finance team provides budget figures in a different format from what the funder requested. The whole process takes days, involves multiple people, and produces a report that is assembled under pressure rather than generated with confidence.
Now consider the alternative. All attendance data, outcome surveys, case studies, beneficiary records, and programme information live in one platform. When a funder report is due, you generate it. The system pulls the relevant data, structures it according to the funder's requirements, and produces a draft that staff review and approve. This is not aspirational — it is how charities using centralised platforms like Plinth actually operate.
The practical benefits are significant:
| Aspect | Decentralised data | Centralised data (e.g. Plinth) |
|---|---|---|
| Time to produce funder report | 3-5 days | 2-4 hours |
| Staff involved | 3-5 people across teams | 1 person reviews AI-generated draft |
| Risk of errors | High — manual compilation | Low — data pulled from verified source |
| Ability to tailor for different funders | Limited — each report built separately | Built-in — impact reporting generates tailored versions |
| Historical trend data | Difficult — requires finding old files | Automatic — data accumulates over time |
| Case study availability | Dependent on who collected them | Central library of AI-generated case studies |
Charities with centralised data systems consistently report spending significantly less time on funder reporting than those using fragmented tools. That time goes back to delivery and frontline work.
How Do You Build an Impact Measurement Framework From Scratch?
If your charity does not currently have a structured approach to impact measurement, here is a practical, step-by-step process to build one without hiring a consultant or taking on a major project.
Step 1: Define your outcomes (week 1). For each programme, answer: "What changes do we want to see in the people we work with?" Write down three to five specific, measurable outcomes. Use the formula: "[Target group] will [specific change] as a result of [programme activity]." For example: "Young people aged 16-24 will report increased confidence in job applications after completing our employability workshops."
Step 2: Choose your indicators (week 1-2). For each outcome, decide how you will know if it has happened. This is your indicator. Confidence might be measured by a self-assessment score. Employment outcomes might be measured by a follow-up survey at three months. Keep it simple — one or two indicators per outcome is enough.
Step 3: Set up your data collection tools (week 2-3). Design a short pre/post survey for your core outcomes. Set up digital attendance tracking through your case management system. Create a consent form for case study collection. These tools should work within your existing programme delivery, not alongside it.
Step 4: Collect for one full programme cycle (months 1-6). Run your measurement framework through one complete programme cycle. Do not worry about getting it perfect. The first cycle is about establishing the habit and identifying what works. According to Inspiring Impact, most charities refine their framework significantly after the first cycle — and that is expected.
Step 5: Generate your first evidence summary (month 6). Use your collected data to produce an outcome summary for the programme. If you are using Plinth, the impact reporting feature will generate this from your stored data. If not, compile your output numbers, outcome survey results, and one or two case studies into a simple report.
Step 6: Use it (ongoing). Put your evidence into your next grant application. Share your outcome summary with your board. Include a case study in your newsletter. The more you use your evidence, the more your team will see the value of collecting it — and the more natural the process becomes.
The whole process can be completed by one person alongside their existing role. You do not need an evaluation officer, an external consultant, or a dedicated budget. You need a clear framework, the right tools, and the discipline to collect consistently.
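To make Step 5 concrete, the evidence summary can be sketched in a few lines of code. Everything below is illustrative: the records, field names, and figures are made-up examples of the kind of data a charity might already hold, not output from Plinth or any real system.

```python
# Illustrative sketch of Step 5: compiling a first evidence summary
# from data collected during one programme cycle. All data is invented.

sessions = [  # one record per delivered session
    {"programme": "Employability", "attendees": 9},
    {"programme": "Employability", "attendees": 11},
    {"programme": "Employability", "attendees": 8},
]

participants = [  # one record per participant, with pre/post confidence scores (1-10)
    {"pre": 3, "post": 6},
    {"pre": 5, "post": 7},
    {"pre": 4, "post": 4},
]

case_studies_collected = 2  # consented, anonymised stories on file

# Output figures: what was delivered
total_sessions = len(sessions)
total_attendances = sum(s["attendees"] for s in sessions)

# Outcome figures: what changed, using the pre/post survey scores
mean_pre = sum(p["pre"] for p in participants) / len(participants)
mean_post = sum(p["post"] for p in participants) / len(participants)
pct_change = 100 * (mean_post - mean_pre) / mean_pre

summary = (
    f"{total_sessions} sessions delivered ({total_attendances} attendances). "
    f"Average self-reported confidence rose from {mean_pre:.1f} to {mean_post:.1f} "
    f"({pct_change:.0f}% improvement). "
    f"{case_studies_collected} consented case studies on file."
)
print(summary)
```

The point is not the code itself but the shape of the summary: one output figure, one outcome figure, and a count of qualitative evidence, all drawn from data collected during delivery rather than reconstructed at reporting time.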
What Tools Help Charities Measure and Report Impact?
The tool landscape for charity impact measurement has matured significantly in recent years. Here is an honest assessment of the main options.
Spreadsheets (Excel/Google Sheets). Still the most commonly used tool, particularly in small charities. Free and flexible, but fragile — dependent on one person's spreadsheet skills, difficult to scale, and with no route to automated reporting. Spreadsheets remain the most widely used data tool among charities, though satisfaction rates are low and the Charity Digital Skills Report consistently highlights poor data management as a sector-wide challenge.
Dedicated impact measurement platforms. Tools like Lamplight, Salesforce Nonprofit Cloud, and Upshot offer structured data collection and reporting. They vary significantly in cost, complexity, and suitability for different charity sizes. Some require substantial setup and training.
Integrated platforms with AI. Plinth combines case management, outcome tracking, survey tools, AI-generated case studies, and impact reporting in a single platform. The advantage is that all your data feeds into one place, and AI tools can generate evidence — reports, case studies, outcome summaries — on demand. The free tier makes it accessible to charities that cannot commit budget to impact measurement software.
External evaluators. For larger programmes (typically £100,000+), commissioning external evaluation remains valuable. The UK Evaluation Society's directory lists qualified evaluators, and many funders will include evaluation costs in the grant budget if you ask. However, for the majority of charity programmes, proportionate internal measurement is sufficient and far more sustainable.
The right tool depends on your size, budget, and ambitions. But the principle is the same regardless: the easier you make it to collect and retrieve data, the stronger your evidence will be when you need it.
Frequently Asked Questions
How much impact data do funders actually expect?
Most funders expect proportionate evidence — meaning the depth of your data should match the size of your programme and the amount of funding. For grants under £50,000, basic output data, a simple outcome measure, and one or two case studies will typically satisfy requirements. Larger grants may require more detailed frameworks, but even then, funders prioritise clarity over volume.
What if my charity has never measured outcomes before?
Start small. Pick one programme, define three outcomes, and set up a simple pre/post survey. One programme cycle of data is enough to begin demonstrating impact. The important thing is to start collecting now rather than waiting until you have a perfect framework. Plinth's survey tools are designed to make this first step as straightforward as possible.
Do funders accept qualitative evidence or only numbers?
Funders want both, and most will say explicitly that qualitative evidence — particularly beneficiary voice — is as important as quantitative data. The Joseph Rowntree Foundation, National Lottery Community Fund, and numerous ACF members have published guidance confirming that stories, quotes, and case studies are valued alongside numerical outcome data. The strongest applications weave both together.
How often should we collect impact data?
Collect output data (attendance, participation) at every session or interaction. Collect outcome data at meaningful intervals — typically at the start and end of a programme cycle, with follow-up at three to six months if feasible. Case studies can be collected on a rolling basis throughout the year. Plinth's AI case notes make it possible to capture stories in minutes during routine conversations with beneficiaries.
What is the difference between monitoring and evaluation?
Monitoring is the ongoing collection of data about your activities and outputs — tracking what you are doing and who you are reaching. Evaluation is the periodic assessment of whether your activities are achieving the intended outcomes and impact. Both matter for funders. Monitoring provides the raw data; evaluation provides the analysis and learning. Most small charities need robust monitoring and light-touch evaluation rather than full external evaluation.
How do we measure outcomes for hard-to-quantify work?
Some outcomes — like increased confidence, reduced isolation, or improved wellbeing — feel inherently difficult to quantify. But validated tools exist for most of these. The Warwick-Edinburgh Mental Wellbeing Scale (WEMWBS) is widely used and free for non-commercial use, though registration with the University of Warwick is required. The Outcomes Star framework covers multiple domains. For bespoke outcomes, simple Likert-scale self-assessments (e.g., "On a scale of 1-10, how confident do you feel about...") collected before and after your programme provide perfectly adequate evidence for most funders.
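The pre/post arithmetic behind a claim like "34% average improvement" is simple enough to sketch directly. The function and the sample scores below are hypothetical — a minimal illustration of paired before/after Likert data, not any particular tool's method.

```python
# Illustrative only: summarises paired pre/post Likert scores
# (e.g. 1-10 self-assessed confidence). Sample data is invented.

def summarise_pre_post(pre, post):
    """Return mean scores, percentage change, and share of participants who improved."""
    if len(pre) != len(post) or not pre:
        raise ValueError("pre and post must be paired, non-empty lists")
    changes = [after - before for before, after in zip(pre, post)]
    mean_pre = sum(pre) / len(pre)
    mean_post = sum(post) / len(post)
    return {
        "mean_pre": round(mean_pre, 2),
        "mean_post": round(mean_post, 2),
        "pct_change": round(100 * (mean_post - mean_pre) / mean_pre, 1),
        "pct_improved": round(100 * sum(c > 0 for c in changes) / len(changes), 1),
    }

# Example: eight participants' confidence scores before and after a programme
pre_scores = [3, 4, 2, 5, 4, 3, 6, 4]
post_scores = [6, 5, 4, 7, 4, 5, 8, 6]
print(summarise_pre_post(pre_scores, post_scores))
```

Reporting both the average change and the proportion of participants who improved guards against a few large gains masking the fact that most people stood still.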
Can we use the same data for multiple funders?
Absolutely — and you should. Maintaining one comprehensive dataset and generating tailored reports for each funder is far more efficient than collecting different data for different funders. Plinth's impact reporting is built around exactly this principle: one underlying dataset, multiple funder-specific reports generated on demand. This approach also ensures consistency — different funders see the same underlying truth, presented in the format they prefer.
What should we do if our outcomes data shows negative results?
Report them honestly. Funders overwhelmingly prefer transparency to spin. ACF's Stronger Foundations initiative emphasises that funders value honesty about challenges more than a perfect success narrative. Frame negative results as learning: what did you discover, what did you change, and what happened next? This demonstrates reflective practice and organisational maturity — exactly what funders want to invest in.
Recommended Next Pages
- What Is Outcome Measurement for Charities? — foundations of measuring what matters
- Why Charities Struggle to Collect Impact Data — common barriers and how to overcome them
- How to Report to Multiple Funders Without Losing Your Mind — practical approaches to multi-funder reporting
- How to Collect Charity Case Studies That Funders Love — building a case study library that strengthens every application
- Impact Measurement for Small Charities — proportionate approaches for organisations with limited resources
Last updated: February 2026