Measuring Grant Impact: A Step-by-Step Guide
A practical guide for charities on measuring the impact of grant-funded programmes — from defining outcomes and collecting data to communicating results to funders and donors.
Measuring the impact of a grant-funded programme is one of the most important — and most misunderstood — responsibilities in the charity sector. Done well, it gives your organisation evidence that your work makes a genuine difference to people's lives, strengthens your case for future funding, and identifies what is working so you can do more of it. Done poorly, it produces volumes of data that satisfy funder reporting requirements without telling you anything useful.
Most charities start from the wrong place. They wait until a grant has been awarded — or, worse, until the first reporting deadline approaches — before thinking about how they will measure impact. By that point, the baseline data that would have made comparison possible has not been collected. The outcome indicators that would have tracked meaningful change have not been defined. The data collection tools that would have embedded measurement into programme delivery have not been built.
This step-by-step guide starts from the beginning. It covers every stage of impact measurement for a grant-funded programme: from defining your outcomes before the programme starts, through collecting data consistently during delivery, to analysing and communicating results in ways that are honest, credible, and useful. Whether you are measuring impact for the first time or looking to improve an existing approach, this guide gives you a practical framework to follow.
Step 1: Define Your Outcomes Before the Programme Starts
The most common reason charities struggle to measure impact is that they have not defined what change they are trying to create with enough precision. Outputs — the activities and deliverables of a programme (workshops delivered, meals served, people supported) — are relatively easy to count. Outcomes — the changes in people's knowledge, skills, behaviour, circumstances, or wellbeing that result from those activities — are harder, but they are what impact measurement is actually about.
The distinction matters because funders increasingly want outcome evidence, not just output counts. When a funder asks "what difference did this grant make?" they are not looking for a list of activities delivered. They want to know whether participants' lives changed, how, and by how much.
Start by articulating your theory of change: the logical pathway from your inputs and activities to the outcomes you believe will result. This does not need to be complicated — a simple one-page diagram showing "we do X, which leads to Y, because Z" is often more useful than a complex map with dozens of boxes. For a detailed guide to theory of change, see our article on what is a theory of change.
From your theory of change, identify your intended outcomes for this specific grant-funded programme. Be specific. "Improved wellbeing" is not an outcome — it is a category. "Participants report reduced levels of social isolation, measured using the UCLA Loneliness Scale, with at least 60% showing improvement at programme completion compared to entry" is an outcome — specific, measurable, and connected to a validated instrument.
For each outcome, define one or two indicators: observable, measurable markers of progress towards that outcome. Good indicators are SMART — specific, measurable, attainable, relevant, and time-bound. They should be as few as possible while covering the most important changes. Many organisations are overambitious about what they plan to measure, resulting in unwieldy systems that produce large volumes of data of variable quality. Aim for the right amount of quality data rather than the maximum amount of data.
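For teams that track indicators in a spreadsheet or database, it can help to write each one down as a small structured record before the programme starts. The sketch below is purely illustrative (the field names and values are assumptions, not a prescribed schema); it encodes the social-isolation indicator used as an example above:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One outcome indicator, defined before the programme starts.

    All field values used below are illustrative, not prescribed.
    """
    outcome: str          # the change you intend to create
    instrument: str       # how it is measured (a validated scale where one exists)
    target_share: float   # minimum share of participants showing improvement
    measured_at: tuple    # the points at which data is collected

isolation = Indicator(
    outcome="Reduced social isolation",
    instrument="UCLA Loneliness Scale",
    target_share=0.60,    # "at least 60% showing improvement"
    measured_at=("entry", "completion"),
)

print(isolation.outcome, isolation.target_share)
```

Writing indicators down this explicitly, in whatever format, forces the specificity that "improved wellbeing" lacks: the instrument, the target, and the measurement points are all stated before delivery begins.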
Step 2: Set a Baseline
Impact is measured as change — the difference between how things were before your programme and how they are after. Without a baseline, you cannot demonstrate change; you can only describe a current state. Yet baseline measurement is one of the most frequently skipped steps in charity impact measurement, usually because it requires data collection before the programme — and often before the grant has been confirmed — which feels premature.
A baseline captures the situation, knowledge, skills, or wellbeing of your participants at the point they enter your programme. For many outcomes, this involves a simple questionnaire or assessment at intake. For others, it might involve reviewing existing records (GP referral data, housing records, school attendance data) that describe participants' circumstances before your intervention.
The baseline does not need to be sophisticated to be useful. A five-question survey completed by participants in their first session, covering the specific dimensions you intend to measure, gives you a before picture that can be compared with an after picture at completion. Without it, you cannot demonstrate that any improvement took place during the programme; you can only describe where participants ended up.
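To illustrate the comparison a baseline makes possible, the sketch below uses hypothetical entry and completion scores (invented for illustration) and computes the share of participants who improved, the figure a "60% showing improvement" style indicator requires:

```python
# Hypothetical entry and completion scores for one outcome indicator,
# e.g. a loneliness scale where a LOWER score means less isolation.
participants = [
    {"id": "P1", "entry": 52, "completion": 38},
    {"id": "P2", "entry": 47, "completion": 49},
    {"id": "P3", "entry": 60, "completion": 41},
    {"id": "P4", "entry": 55, "completion": 50},
    {"id": "P5", "entry": 44, "completion": 44},
]

# A participant "improved" if their completion score is lower than at entry.
improved = [p for p in participants if p["completion"] < p["entry"]]
share_improved = len(improved) / len(participants)

print(f"{share_improved:.0%} of participants improved")
# prints "60% of participants improved"
```

Without the `entry` column, this calculation cannot be done at all, which is the practical consequence of a skipped baseline.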
Some outcomes are harder to baseline than others. Long-term outcomes — employment, housing stability, physical health — may take years to materialise, making within-programme measurement impossible. In these cases, be honest about what you can and cannot measure within the programme timeframe. Measure the intermediate outcomes (skills gained, confidence increased, knowledge improved) that your theory of change predicts will lead to the longer-term changes, and be clear that you are measuring leading indicators rather than the ultimate outcomes.
Step 3: Choose Your Data Collection Methods
There are two main categories of data collection method: quantitative (producing numerical data) and qualitative (producing accounts, descriptions, and meanings). Effective impact measurement uses both. Quantitative data tells you how many people improved and by how much; qualitative data tells you why, and gives the human texture that makes outcome data credible and compelling.
According to a survey of UK charities on impact measurement practices, internal evaluations are the most popular method (used by 91%), followed by quantitative evidence (70%) and case studies (69%). This reflects a sensible mixed-methods approach that combines systematic data with illustrative narrative.
Quantitative methods for grant impact measurement:
Participant surveys are the most common tool. Design your survey around your outcome indicators, using validated scales where they exist (such as the Warwick-Edinburgh Mental Wellbeing Scale for wellbeing outcomes, or the UCLA Loneliness Scale for social isolation). Collect at entry (baseline), at completion (or at regular intervals for longer programmes), and ideally at follow-up three to six months after completion.
Routine data collection captures activity data as a natural part of programme delivery — attendance records, session completion rates, referral tracking, case record updates. This data is less rich than survey responses but more reliable because it does not depend on participant willingness to complete a questionnaire.
Secondary data uses records that already exist outside your programme — school attendance, benefit claims, housing tenancy records — to track outcomes that your programme aims to influence. This is the most powerful form of evidence but also the most difficult to access, typically requiring data-sharing agreements with public sector bodies.
Qualitative methods for grant impact measurement:
Case studies are in-depth accounts of individual participants' experiences and outcomes. A good case study includes the person's situation before the programme, what they did during it, and how their circumstances or wellbeing changed as a result. Case studies require informed consent and should never be fabricated or heavily embellished. For guidance on collecting compelling case studies, see our guide on how to collect charity case studies.
Interviews and focus groups with participants, staff, and other stakeholders provide qualitative depth that surveys cannot. A short exit interview with five to ten participants at programme completion can reveal insights about what made the programme effective (or not) that no questionnaire could capture.
Staff observation and case notes recorded consistently during delivery provide a qualitative record of participant progress that can be reviewed and analysed for patterns. This requires case recording disciplines and, ideally, a case management system that makes it possible to review records across the participant group rather than one by one.
Step 4: Build Data Collection Into Programme Delivery
The most common failure mode in impact measurement is designing a data collection system that operates separately from programme delivery. Participants fill in a survey at the start and end; staff remember to enter session data when they have time; case notes are updated in batches. The result is incomplete, inconsistent data that is difficult to analyse and unreliable as evidence.
The alternative is embedding data collection into natural programme touchpoints. Registration becomes the baseline survey moment. Session attendance is recorded as part of the session register. Key milestones — a participant gets employment, moves to settled accommodation, completes a qualification — are recorded when they happen, not reconstructed at report time. Exit interviews happen as part of the programme's standard closing session, not as a separate research exercise.
This integration requires upfront design work and staff training, but it dramatically improves data completeness and quality. It also reduces the burden on participants, who are not asked to complete lengthy research instruments on top of their programme participation.
Digital systems make integrated data collection significantly more manageable. Case management software that allows staff to record interactions and outcomes during or immediately after service delivery, and that generates summary reports from those records, is far more reliable than any manual approach. Survey tools connected to your case management system can send automated follow-up surveys at the right intervals without manual tracking.
Plinth's AI grant management platform includes workplan and KPI generation tools that help you define outcome indicators at programme design stage and track progress against them throughout delivery — connecting the measurement framework directly to the programme record rather than treating them as separate processes.
Comparison: Common Impact Measurement Approaches
| Method | Best for | Strengths | Limitations |
|---|---|---|---|
| Participant surveys (before/after) | Wellbeing, skills, knowledge outcomes | Quantifiable, scalable, comparable | Requires baseline; response rates vary |
| Case studies | Communicating complexity of change | Rich, compelling narrative | Small sample; not representative |
| Routine data (attendance, completion) | Reach and engagement | Reliable, low burden | Measures outputs more than outcomes |
| Interviews and focus groups | Understanding why and how | Deep insight, surfacing unexpected findings | Time-intensive; small sample |
| Secondary data (admin records) | Long-term and systemic outcomes | Strong attribution, existing evidence | Access requires agreements; lag time |
| Staff observation/case notes | Ongoing tracking of individual progress | Continuous, contextualised | Quality depends on recording discipline |
Step 5: Analyse What Your Data Is Actually Telling You
Collecting data is not the same as learning from it. The analysis stage — comparing before and after, looking for patterns, and interpreting what the findings mean — is where impact measurement generates genuine value. It is also the stage most frequently skipped in favour of presenting raw numbers in reports.
At a minimum, impact analysis should answer three questions: Did outcomes improve? By how much? And for whom?
Comparing pre- and post-programme scores on your outcome indicators tells you whether participants improved on average. But averages can obscure important differences. Did some groups improve more than others? Were there participants who did not improve, or whose situation worsened? Are there particular programme elements associated with better outcomes?
Disaggregating your data by relevant characteristics — age, gender, presenting need, referral route, programme element attended — often reveals the most actionable insights. A finding that female participants improved significantly while male participants did not, for example, points to a specific programme design question. A finding that participants who attended at least eight sessions achieved significantly better outcomes than those who attended fewer points to an attendance threshold worth communicating to referrers.
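As a minimal sketch of what disaggregation looks like in practice, the code below groups hypothetical participant records (the field names, values, and the eight-session threshold are all assumptions for illustration) by gender and by an attendance cut-off, and compares mean outcome change across the groups:

```python
from collections import defaultdict

# Hypothetical participant records: an outcome change score plus the
# characteristics you want to disaggregate by.
records = [
    {"gender": "F", "sessions": 10, "change": 8},
    {"gender": "F", "sessions": 9,  "change": 6},
    {"gender": "M", "sessions": 4,  "change": 1},
    {"gender": "M", "sessions": 12, "change": 7},
    {"gender": "F", "sessions": 3,  "change": 0},
    {"gender": "M", "sessions": 8,  "change": 5},
]

def mean_change(group):
    """Average outcome change across a list of records."""
    return sum(r["change"] for r in group) / len(group)

# Disaggregate by gender.
by_gender = defaultdict(list)
for r in records:
    by_gender[r["gender"]].append(r)
for gender, group in sorted(by_gender.items()):
    print(f"{gender}: mean change {mean_change(group):+.1f} (n={len(group)})")

# Disaggregate by an attendance threshold (eight or more sessions).
high = [r for r in records if r["sessions"] >= 8]
low = [r for r in records if r["sessions"] < 8]
print(f"8+ sessions: {mean_change(high):+.1f}; under 8: {mean_change(low):+.1f}")
```

The same few lines of grouping logic apply whatever characteristic you split by; the analytical work is choosing splits your theory of change makes meaningful, and checking that each subgroup is large enough to interpret.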
Be honest about what your data does and does not show. Attribution — the ability to claim that your programme caused the outcomes you measured rather than other factors — is genuinely difficult for most charity programmes, which lack the control groups that formal randomised evaluations use. Acknowledge this limitation explicitly, and frame your findings as "participants in our programme experienced the following outcomes" rather than "our programme caused the following outcomes." Honesty about the limits of your evidence is a marker of credibility, not weakness.
If some outcomes were not achieved, or the programme performed worse than expected in some areas, include this in your analysis. Funders and commissioners who are told only positive findings have no basis for trust. Those who receive honest accounts of what worked and what did not — and what the organisation plans to do differently — have a substantive basis for continued investment.
Step 6: Communicate Results to Funders
Grant reporting is not the same as impact measurement, but it is where impact measurement matters most immediately. A funder report that presents outcome evidence clearly, honestly, and in the context of the programme's theory of change is more compelling than one that leads with outputs and buries outcomes in a footnote.
The core elements of a strong impact section in a grant report are:
Baseline and end-point. Show the before-and-after comparison, not just the after. "86% of participants reported reduced levels of social isolation at programme completion" is a weaker statement than "at intake, 73% of participants scored in the high-isolation range on the UCLA Loneliness Scale; at programme completion, that figure had fallen to 24%."
Reach. Who was the programme for, and did it reach those people? If your target group was long-term unemployed adults over 50 in your local authority area, your report should confirm that participants matched that profile, not just that you served a certain number of people.
What worked and what you learned. One paragraph honestly describing the elements of the programme that worked well, those that worked less well, and what you are doing about it is worth more than three pages of uncritical positive narrative.
Case study evidence. One or two well-written case studies that illustrate the kind of change you achieved — with participant consent and genuine detail — anchor the quantitative evidence in human reality. For guidance on collecting and writing these, see our guide on how to collect charity case studies.
Financial context. Your funder will also want to see that the grant was spent as agreed, with clear accounting of any variances. Keep the financial reporting separate from but adjacent to the impact reporting so both can be read together.
For charities managing multiple concurrent grants, the administrative burden of grant reporting can be substantial. As noted in Plinth research, charities collectively spend 15.8 million hours annually on funder reporting, with the average grant taking 40 hours to report on. Systems that connect programme data to report templates — so that the same outcome data can populate multiple reports without re-entry — can significantly reduce this burden. See our guide on reporting to multiple funders for practical approaches.
Step 7: Use Your Evidence for Future Funding
Impact evidence from a completed programme is not just a record of the past — it is the foundation for future funding applications. A charity that can demonstrate, with credible data, that its previous grant-funded work achieved the outcomes it intended to achieve is a significantly more attractive investment for funders than one that cannot.
Most grant applications ask for evidence of track record. The impact measurement process described in this guide produces exactly that evidence: before-and-after outcome data, participant reach figures, case study illustrations, and an honest account of what was learned. Organisations that measure impact well are, in effect, building their funding case with every programme they run.
Beyond individual funding applications, a body of impact evidence builds organisational credibility over time. Funders, commissioners, and statutory partners all respond to organisations that can speak confidently about what their programmes achieve. The investment in good impact measurement — in terms of staff time, data systems, and evaluation capacity — pays returns that extend well beyond the reporting requirements of the grant that funded it.
Common Mistakes to Avoid
Waiting until a reporting deadline to start measuring. Impact measurement must begin before the programme starts. Once delivery is underway without a baseline, before-and-after comparison becomes impossible.
Measuring only what is easy to count. Output metrics — numbers of sessions, numbers of participants — are easy to collect but do not demonstrate impact. Include at least one outcome indicator that captures genuine change in participants' lives.
Overclaiming causation. Unless you have a comparison group or a randomised design, you cannot prove that your programme caused the outcomes you measured. Be precise: report that participants experienced these outcomes during and after the programme, rather than claiming the programme produced them.
Omitting negative findings. Funders who read only positive outcomes cannot trust them. Including honest accounts of what did not work, with your analysis of why and your plans for improvement, builds the credibility that makes positive findings believable.
Collecting data that is never analysed. Data that is collected for reporting but never reviewed for organisational learning wastes the capacity invested in collecting it. Build analysis and reflection into your programme calendar, not just your report schedule. See our guide on learning loops and turning impact data into strategy for how to do this systematically.
Burdening beneficiaries with excessive measurement. Participants who are asked to complete lengthy questionnaires at every session, or who are contacted for follow-up interviews months after programme completion without adequate notice or support, experience impact measurement as an imposition rather than a genuine effort to understand their experience. Design for proportionality: collect only what you need, in ways that respect participants' time and dignity.
FAQ
When should I start measuring impact?
Before the programme starts. The baseline — data about participants' situation, knowledge, or wellbeing before they engage with your programme — must be collected at intake. If you wait until mid-programme or end-of-programme to think about measurement, you will have no baseline to compare against and will be unable to demonstrate change.
How many outcome indicators do I need?
As few as possible while covering the most important changes you intend to create. Two to four well-defined, carefully measured outcome indicators are more valuable than ten poorly defined or inconsistently collected ones. Quality of data matters far more than quantity of metrics.
Do I need a control group to measure impact?
A control group — a comparison group of similar people who did not receive the programme — is the gold standard for demonstrating that your programme caused the outcomes you measured. It is also rarely practical for most charities. Without a control group, be honest that you are measuring outcomes experienced by participants, not impact attributed to your programme. This is still valuable and credible evidence; it is just not the same as a formal causal evaluation.
How do I handle outcomes that take years to materialise?
Focus on intermediate outcomes — measurable changes in knowledge, skills, confidence, or behaviour that your theory of change predicts will lead to longer-term outcomes. Be explicit with funders about the time horizon of your outcomes and the evidence you can reasonably provide within a grant period versus what would require longer-term tracking.
What validated outcome measures are available for free?
Several validated measurement tools are freely available to charities in the UK. The Warwick-Edinburgh Mental Wellbeing Scale (WEMWBS) is widely used for wellbeing outcomes. The UCLA Loneliness Scale measures social isolation. The Outcomes Star is a suite of tools for measuring progress across multiple life domains, widely used in homelessness, mental health, and family support contexts. The Social Outcomes Institute maintains a library of outcome measures mapped to common charity sectors.
How do I collect follow-up data after the programme ends?
Follow-up data collection is challenging because you lose contact with participants after programme completion. The most reliable approach is to ask for consent and contact details at intake, explain how you will use follow-up data, and send a brief survey at three and/or six months. Response rates for postal or email follow-up surveys are typically 20–40%; response rates for brief SMS or WhatsApp surveys can be higher. Build follow-up costs into your grant budget from the start.
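Scheduling those three- and six-month follow-ups is easy to get wrong by hand, because calendar months vary in length. The sketch below (a standalone helper, not a feature of any named tool) computes follow-up dates from a completion date, clamping to the end of shorter months:

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the same day-of-month `months` later, clamped to month end
    (so 31 January + 3 months gives 30 April, not an invalid date)."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

completion = date(2026, 1, 31)
followups = [add_months(completion, m) for m in (3, 6)]
print([d.isoformat() for d in followups])  # ['2026-04-30', '2026-07-31']
```

A case management system or survey tool with automated scheduling does the same arithmetic for you; the point is that follow-up dates should be generated from intake or completion records rather than tracked manually.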
Can AI help with impact measurement?
AI tools are increasingly useful for data analysis — identifying patterns in outcome data, flagging anomalies, and generating narrative summaries from structured datasets. They are less useful for the judgements that matter most: deciding what to measure, interpreting what findings mean, and deciding what to do differently in light of evidence. AI can reduce the mechanical workload of impact measurement; it cannot replace the human knowledge and programme understanding that makes measurement meaningful.
Recommended Next Pages
- What Is Impact Measurement? — The principles and frameworks behind measuring what your work achieves
- Learning Loops: Turning Impact Data Into Strategy — How to use the data you collect to improve programmes, not just report on them
- Impact Reporting in the AI Era — How AI is transforming the way charities and funders communicate impact
- How to Collect Charity Case Studies — Practical guidance on gathering compelling beneficiary stories with consent
- Reporting to Multiple Funders — How to manage the reporting burden when you have several active grants simultaneously
Last updated: February 2026