Common Pitfalls in Measuring Impact (and How to Avoid Them)
The most common mistakes charities make in impact measurement — attribution errors, measuring outputs not outcomes, cherry-picking data — with practical fixes for each.
Most charities measure impact with good intentions and arrive at bad data. The mistakes are rarely deliberate. They are the result of measuring what is easy rather than what matters, of designing data collection around funder requirements rather than learning, of overclaiming outcomes because the pressure to demonstrate success is real and the methodological training to resist it is not.
These mistakes are serious. Weak impact measurement does not just produce reports that nobody reads — it actively undermines the sector's ability to learn what works. When charities routinely claim outcomes they cannot substantiate, funders become sceptical of impact data across the board. When evaluation is treated as a reporting task rather than a learning opportunity, the insights that could improve services go unnoticed.
The good news is that most impact measurement pitfalls are predictable. They follow recognisable patterns, have well-established causes, and — crucially — have practical fixes that do not require a team of evaluation specialists or a six-figure research budget. This guide works through the most common pitfalls and explains precisely what to do differently.
What you will learn:
- The seven most common impact measurement pitfalls in detail
- Why each mistake happens and why it is so persistent
- Practical fixes for each pitfall that teams of any size can implement
- How to build a measurement approach that generates genuine learning rather than just compliance data
Who this is for: Charity managers, M&E leads, trustees, and anyone responsible for impact measurement who suspects their current approach is not producing the insight it should.
Pitfall 1: Measuring Outputs Instead of Outcomes
This is the single most common mistake in charity impact measurement, and it is so pervasive that it has become almost invisible. Organisations count the things they do — sessions delivered, attendances recorded, meals distributed, calls answered — and present these counts as evidence of impact. They are not.
Outputs are what you do. Outcomes are what changes as a result.
The difference matters enormously. A training programme that delivered 120 sessions to 450 participants over twelve months has strong output data. But whether any of those participants changed their behaviour, developed new skills, found employment, or improved their wellbeing — that requires outcome data, which most organisations do not collect consistently.
Despite growing funder emphasis on outcomes, most organisations continue to report primarily on outputs rather than the changes they produce. This is not because charity staff do not understand the distinction — most do, at least in theory. It is because outcome data is harder to collect than output data. Attendance registers are straightforward. Wellbeing scores taken at six-month follow-up, from a population that has now dispersed and may be difficult to contact, are not.
The fix: Map your outputs to the outcomes you expect them to produce, then design data collection around the outcomes rather than the outputs. For each programme activity, ask: "If this works, what specifically changes for the people involved, and when would we see that change?" That question generates an outcome and a measurement timepoint. Design your data collection to capture both.
Pitfall 2: Claiming Attribution When You Can Only Demonstrate Contribution
Attribution — the claim that your programme caused an outcome — is one of the hardest things to establish in social sector impact measurement. To genuinely attribute an outcome to a programme, you need to know what would have happened in the absence of that programme: the counterfactual. Without a randomised controlled trial or a rigorous quasi-experimental design, you cannot know this with confidence.
And yet charities routinely make attribution claims. "As a result of our programme, 78% of participants improved their mental health." Unless you know what would have happened to those same people without the programme, this claim is unjustifiable. Many of them might have improved anyway — because of natural recovery, because of other support they were receiving, or because of changes in their circumstances that had nothing to do with you.
This does not mean charities should never claim any causal relationship between their work and outcomes. But it does mean being precise about what you can and cannot demonstrate.
Contribution — the claim that your programme was likely one of the factors that contributed to an observed outcome — is usually what charities can honestly claim. The longer the timeframe between an intervention and an outcome, the harder attribution becomes and the more other factors come into play. A programme that supports people into employment over twelve months is working in a context of changing labour markets, family support, housing stability, and dozens of other variables. Claiming sole attribution for employment outcomes is almost never honest.
The fix: Adopt contribution language rather than attribution language. "Our programme contributed to improved outcomes for participants" is honest and defensible. "Our programme caused these outcomes" is usually not. Where you want to strengthen your causal claims, consider using a comparison group, running a pre/post analysis with a plausible counterfactual estimate, or — for mature programmes with stable delivery — commissioning an independent evaluation.
Pitfall 3: Cherry-Picking Data and Overclaiming on Outcomes
The pressure to demonstrate success is real. Funders reward organisations that can show positive outcomes. Charities that report mixed or negative results risk losing funding, even if those results reflect honest and rigorous measurement. The result is a well-documented pattern of selective reporting.
LSE researcher Julia Morley analysed 128 websites of charities and social purpose organisations and found that impact reports appeared to function primarily as reputation management rather than genuine evidence of effectiveness. The case studies charities selected were almost universally positive; the data presented was almost universally favourable; and there was little sign that the measurement process was generating genuine learning (Morley, 2016). This is the charity sector equivalent of publication bias in academic research — the positive findings get shared, the negative ones do not.
As the Charity Awards has noted, funders are drawn to organisations that routinely report success — which creates a perverse incentive for charities to be "guilty of publication bias, only revealing results which say something they want to hear, and of overclaiming on outcomes" (Charity Awards, 2019).
This is not a minor compliance failure. It degrades the quality of evidence across the whole sector. When funders cannot trust impact data, they fall back on proxy indicators — organisational size, reputation, track record — that further disadvantage newer and smaller organisations.
The fix: Design your measurement approach to capture disconfirming evidence, not just confirming evidence. If 78% of participants improve, what happened to the 22% who did not? Report on them too — and try to understand why their outcomes were different. Funders who are serious about evidence, including the growing number committed to IVAR's Open and Trusting principles, actively value this kind of honest reflection. Frame mixed results as learning rather than failure.
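As a rough sketch of what "reporting on the 22% too" can look like in practice, the snippet below counts non-improvers by subgroup. The record structure and field names (`improved`, `subgroup`) are illustrative, not from any particular system:

```python
from collections import Counter

# Hypothetical outcome records with a subgroup field (illustrative names).
outcomes = [
    {"improved": True,  "subgroup": "18-25"},
    {"improved": True,  "subgroup": "26-40"},
    {"improved": False, "subgroup": "18-25"},
    {"improved": True,  "subgroup": "41+"},
    {"improved": False, "subgroup": "18-25"},
]

# Count the people who did NOT improve, broken down by subgroup.
non_improvers = Counter(o["subgroup"] for o in outcomes if not o["improved"])
print("Non-improvers by subgroup:", dict(non_improvers))
# If one subgroup dominates the non-improvers, that is a lead worth
# investigating and reporting, not a number to suppress.
```

Even this level of disaggregation turns a headline percentage into a question you can act on.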
Pitfall 4: Designing Measurement Around Funders Rather Than Learning
Many charities design their measurement systems around what funders ask for rather than what the organisation genuinely needs to know. This is understandable — funders control the money, and funder requirements feel non-negotiable. But the result is measurement data that satisfies reporting requirements without generating the insights that could improve the programme.
The problem manifests in a specific way: charities collect the metrics their funders request (which tend to be outputs and short-term outcomes) and neglect the data that would be most useful for programme improvement (which tends to be process data, longer-term outcomes, and information about who is not being served well).
Julia Morley's research found "little evidence that charities were using the data they collected to deliver better results" — with impact reports functioning as marketing tools to legitimate the organisation rather than genuine feedback loops (Morley, 2016). This is a significant dysfunction: measurement that is costly in staff time and participant burden but produces no learning value.
The fix: Start measurement design with a learning question rather than a reporting requirement. Ask: "What do we most need to know about this programme to make it better?" Then design your measurement to answer that question. Funder requirements can usually be satisfied as a secondary output of a measurement framework designed for learning. The reverse — a measurement framework designed for reporting that then tries to generate learning — almost never works.
Pitfall 5: Collecting Data You Never Analyse
This pitfall is extremely common and rarely discussed. Charities dutifully collect attendance data, run beneficiary surveys, complete monitoring forms, and log case notes — and then almost never analyse any of it in a way that informs decisions.
The data sits in spreadsheets, case management systems, or filing cabinets. At the end of the grant period, staff scramble to extract the headline numbers they need for the funder report. The survey responses that could reveal what participants found most and least valuable go unread. The case notes that could surface patterns in beneficiary need are never systematically reviewed. The attendance data that could show which sessions participants engage with most is totalled but never disaggregated.
According to the Charity Digital Skills Report 2024, 31% of charities describe themselves as poor at or not engaging with collecting, managing, and using data — and a further 34% say the same about data-informed decision making (Charity Digital Skills Report, 2024). The data collection and the data use are both broken.
The fix: Build analysis into your reporting calendar, not just data collection. Schedule a quarterly data review — even one hour — where a small team looks at what the data is saying and asks whether anything should change as a result. If your data collection tools do not make analysis easy (because data is spread across spreadsheets, paper forms, and multiple systems), that is a systems problem that needs solving before the data collection problem. Purpose-built tools like Plinth store impact data in structured formats that make analysis straightforward.
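To make the quarterly review concrete, here is a minimal sketch of the kind of disaggregation that takes minutes rather than days once data is in a structured format. The record layout (`session`, `attended`) is an assumption for illustration:

```python
from collections import defaultdict

# Hypothetical attendance records: one row per person per session.
records = [
    {"session": "Job skills", "attended": True},
    {"session": "Job skills", "attended": False},
    {"session": "Wellbeing group", "attended": True},
    {"session": "Wellbeing group", "attended": True},
]

# Disaggregate: attendance rate per session, not just a grand total.
by_session = defaultdict(lambda: {"present": 0, "total": 0})
for r in records:
    s = by_session[r["session"]]
    s["total"] += 1
    s["present"] += r["attended"]

for name, s in sorted(by_session.items()):
    rate = s["present"] / s["total"]
    print(f"{name}: {s['present']}/{s['total']} attended ({rate:.0%})")
```

The point is not the code itself but the habit: a per-session (or per-cohort) breakdown is what a one-hour quarterly review should look at, and it is only cheap if the underlying data is stored consistently.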
Pitfall 6: Measuring Too Many Things Badly Rather Than Fewer Things Well
In an effort to satisfy multiple funder requirements and demonstrate impact across every aspect of a programme, many charities try to measure everything — and end up measuring nothing reliably. Measurement systems collapse under their own weight when frontline staff are asked to collect twenty different data points per beneficiary across multiple forms and systems.
The problem is not ambition — it is prioritisation. When everything is a priority, nothing is. Frontline staff who face an impossible data collection burden will cut corners, skip forms, or stop collecting altogether. The result is data that is incomplete, inconsistent, and unreliable — which is worse than limited but reliable data on a small number of outcomes.
The fix: Apply the "vital few" principle. For each programme, identify the three to five outcomes that matter most — the ones that, if achieved consistently, would justify the investment. Focus your measurement resources on those outcomes and measure them well: with appropriate tools, consistent timing, and enough follow-up to catch medium-term outcomes. Track outputs for accountability, but save your serious measurement capacity for the outcomes that matter. The rule of thumb from evaluation practice is: measure fewer things, measure them better, and actually use the data you collect.
Pitfall 7: Ignoring Who Is Not Being Reached
Impact measurement almost always focuses on the people a programme does reach. The people it does not reach — those who are eligible but never referred, those who start but drop out, those who engage minimally — are systematically excluded from the evidence base.
This creates a misleading picture. If your programme is most accessible to the least disadvantaged members of a target population — because they find it easier to navigate, are more confident engaging with services, or face fewer practical barriers — then your outcome data will overstate the programme's effectiveness for the population as a whole. This is a form of selection bias that is easy to miss and hard to correct.
The most rigorous impact assessments examine not just outcomes for programme participants but evidence about who is not being served and why. In the UK context, this connects directly to equity considerations — whether programmes are reaching the most disadvantaged and marginalised members of the communities they aim to serve.
The fix: Build non-participation tracking into your measurement approach. Who is referred but does not engage? Who starts but drops out and at what point? Who completes the programme but shows no outcome change? Each of these groups carries important information about what is and is not working. Even basic tracking of drop-out rates and reasons for non-engagement is more informative than an approach that only counts completers.
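The three questions above amount to a simple engagement funnel. A minimal sketch, assuming each person's journey is recorded as referred/started/completed flags (hypothetical field names):

```python
# Hypothetical participant journeys; field names are illustrative.
participants = [
    {"id": 1, "referred": True, "started": True,  "completed": True},
    {"id": 2, "referred": True, "started": True,  "completed": False},
    {"id": 3, "referred": True, "started": False, "completed": False},
    {"id": 4, "referred": True, "started": True,  "completed": True},
]

referred = sum(p["referred"] for p in participants)
started = sum(p["started"] for p in participants)
completed = sum(p["completed"] for p in participants)

never_engaged = referred - started   # referred but never started
dropped_out = started - completed    # started but did not finish

print(f"Referred: {referred}, started: {started}, completed: {completed}")
print(f"Never engaged: {never_engaged}, dropped out: {dropped_out}")
```

An outcomes report that only covers completers silently drops the `never_engaged` and `dropped_out` rows, which is exactly the selection bias described above.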
A Comparison: Common Approaches vs Better Practice
| Pitfall | Common Approach | Better Approach |
|---|---|---|
| Output vs outcome | Count sessions delivered | Measure what changed for participants |
| Attribution | "Our programme caused these outcomes" | "Our programme contributed to these outcomes" |
| Data selection | Share positive findings; suppress negatives | Report fully; frame negatives as learning |
| Measurement purpose | Design for funders | Design for learning; satisfy funders secondarily |
| Data use | Collect data; extract headline numbers at report time | Regular data review; analysis informs decisions |
| Measurement scope | Measure everything | Measure the vital few outcomes well |
| Reach | Track participants who complete the programme | Also track who drops out and who never engages |
How to Build an Impact Measurement Approach That Avoids These Pitfalls
The cumulative lesson of these pitfalls is that impact measurement is most likely to fail when it is designed as a reporting mechanism rather than a learning mechanism. The practical steps for building a stronger approach follow from that insight.
Before the programme starts: Define two to four outcome indicators that are directly linked to your theory of change, and that you have a realistic plan to measure reliably. Identify your measurement tools (validated scales are preferable to ad hoc questionnaires), your data collection touchpoints, and who is responsible for analysis.
During delivery: Collect outcome data at consistent points — typically at entry, midpoint (for longer programmes), and exit. Schedule quarterly data reviews. When something is not working, use the data to understand why rather than suppressing it.
At reporting time: Report fully — including variation across sub-groups, drop-out rates, and outcomes that were weaker than expected. Use appropriate contribution language. Include reflection on what was learned and what you would do differently.
Ongoing: Review your measurement approach at least annually. Are you measuring the right outcomes? Are the data collection tools working in practice? Is the data being used to make decisions? If not, change the approach.
Platforms like Plinth support this approach by building workplan tracking, KPI generation, and outcome reporting into grant management workflows — so that measurement is embedded in delivery rather than bolted on at the end.
FAQ
How do I measure outcomes if I cannot do a control group study?
Most charities cannot and do not need to run randomised controlled trials. Practical alternatives include pre/post measurement (comparing participants' scores before and after the programme), comparison with national benchmark data, or a contribution analysis that documents the evidence for why your programme is likely to have contributed to observed outcomes. Pre/post measurement is the most accessible and widely accepted approach for regular programme monitoring.
What are validated measurement tools and how do I choose one?
Validated measurement tools are questionnaires or scales that have been tested for reliability and validity — meaning they consistently measure what they claim to measure. Examples include the Warwick-Edinburgh Mental Wellbeing Scale (WEMWBS) for wellbeing, the SWEMWBS (short version), and the UCLA Loneliness Scale for social isolation. Choose a tool that is validated for your population, has a manageable number of items, and that your beneficiaries can complete without significant support. NPC and HACT both maintain toolkits of validated tools relevant to UK charities.
How many outcomes should I measure for a typical programme?
Two to four primary outcomes is a reasonable target for most programmes. Beyond that, data collection becomes burdensome and data quality deteriorates. You can track additional outputs for accountability purposes, but keep your serious measurement focused on a small number of outcomes that matter most.
What should I do if my outcome data is negative or mixed?
Report it honestly, and explain what you think it means. Negative or mixed results are not a failure of your measurement — they may mean the programme is not working as intended, that it is working for some people but not others, or that your measurement approach is not capturing the outcomes accurately. Each of these is useful information. Funders who are committed to evidence-based practice will generally respond better to honest mixed results than to implausibly uniform positive reports.
How do I balance funder requirements with a genuine learning approach?
In most cases, these are not in conflict. Funders increasingly ask for honest reflection alongside positive outcomes, and many explicitly state that they do not require perfect results. Where funder reporting templates ask only for output data, you can still collect and use outcome data internally. If a funder's requirements are genuinely incompatible with good practice — for example, if they require you to only report positive outcomes — that is worth a conversation with the funder directly.
What is the minimum viable impact measurement approach for a very small charity?
For a small charity with limited capacity, a minimum viable approach is: one to two validated outcome measures collected at the start and end of a programme, systematic tracking of who you reach and who drops out, and one hour each quarter to review what the data shows. This is better than a complex measurement framework that nobody implements. Start simple, and build from there.
How does AI help with impact measurement pitfalls?
AI tools help primarily with the analysis gap — the data that is collected but never used. AI can scan large volumes of case notes or survey responses to identify themes, flag patterns in outcome data, and alert teams to early warning signs (such as declining outcome scores or increasing drop-out rates) that would be easy to miss in manual review. Platforms like Plinth use AI to surface these patterns automatically, reducing the analysis burden on small teams.
Recommended Next Pages
- What Is Impact Measurement? — Foundations and frameworks for charities starting from scratch
- Case Studies vs Metrics: What Impact Reporting Really Needs — How to combine evidence types effectively
- Using AI to Spot Patterns in Impact Data — How AI surfaces insights from your data that humans miss
- Impact Measurement for Small Charities — Practical measurement approaches for organisations with limited capacity
- Why Charities Struggle to Collect Impact Data — Fixing the data collection problems that undermine your measurement
Last updated: February 2026