Impact measurement burden: are we measuring ourselves to death?
UK charities face a proliferation of funder-imposed reporting frameworks, spending disproportionate time proving impact rather than delivering it. The case for proportionate, shared measurement.
The debate in brief
Every funder wants to know what difference its money made. That is entirely reasonable. What is not reasonable is a system in which each funder requires a different reporting framework, different outcome indicators, different data collection tools, and different reporting timescales — forcing charities to run parallel measurement systems that consume significant staff time, specialist expertise, and financial resources. The result is a sector that spends a disproportionate share of its energy proving it is doing good rather than actually doing it.
The irony is acute. Outcomes frameworks that were designed to improve accountability and learning have, in many cases, become compliance exercises that cost more to administer than the insights they generate. A small charity running a youth mentoring programme may be required to report against one funder's outcomes star, another's theory of change template, a third's bespoke logic model, and a local authority's payment-by-results indicators — all for the same project, measuring broadly the same thing in incompatible formats.
Quick takeaways
| Question | Answer |
|---|---|
| How much do charities spend on impact measurement? | NPC has estimated that monitoring and evaluation typically absorbs 5-10% of programme costs, but where multiple frameworks are required the cumulative cost can be significantly higher, particularly for small organisations. |
| Why do funders require different frameworks? | Each funder has its own governance requirements, its own theory about what matters, and its own internal reporting systems. There is no shared standard, so each imposes its own. |
| Does measurement actually improve services? | It can, when done well. But IVAR and NPC research suggests that much funder-required reporting is used for accountability rather than learning, and rarely feeds back into service improvement. |
| Are small charities disproportionately affected? | Yes. NCVO data shows organisations with income under 100,000 pounds rarely have dedicated evaluation staff and must divert frontline workers or senior leaders to meet reporting requirements. |
| What alternatives exist? | Shared measurement initiatives (Inspiring Impact), proportionate approaches (IVAR), and sector-specific frameworks (Centre for Youth Impact, What Works centres) all offer routes to better practice. |
| Is this getting better or worse? | Mixed. Awareness has increased, but the proliferation of outcomes-based commissioning and payment-by-results contracts has added new layers of measurement burden since 2015. |
The arguments
The case for rigorous measurement
The demand for impact evidence did not emerge from nowhere. It was a response to real problems. For decades, much of the charity sector operated on the assumption that good intentions were sufficient. The shift toward outcomes-based thinking, driven by funders like the National Lottery Community Fund, NPC, and the broader What Works movement, was an attempt to bring rigour to a sector that sometimes lacked it.
NPC's work on theory of change, dating from the mid-2000s, gave charities a structured way to articulate how their activities lead to outcomes. The What Works centres, established from 2013 across policy areas including education (Education Endowment Foundation) and crime reduction, with the Centre for Ageing Better following in 2015, built an evidence infrastructure that the sector had previously lacked.
The accountability argument is legitimate. Charities collectively receive billions in public donations, government contracts, and grant funding. The Charity Commission's public trust research consistently shows that donors want to know their money makes a difference. Funders have their own accountability chains — trustees, government commissioners, corporate boards — and these require evidence. The question is not whether to measure, but how.
The case that measurement has become disproportionate
The problem is not measurement itself but the way it is implemented. The sector has developed what IVAR has described as a "compliance culture" around impact reporting, where the primary purpose of data collection is satisfying funders rather than improving services. When a frontline youth worker spends two hours per week completing outcomes forms instead of working with young people, something has gone wrong.
The Centre for Youth Impact has shown that many youth organisations collect data they never use, in formats designed for funders rather than practitioners, using tools that do not capture what staff and young people consider most important about the work.
The cost is not trivial. For a small charity with an annual income of 200,000 pounds, dedicating even 10% of its budget to evaluation means 20,000 pounds a year. When multiple funders each require separate reporting, the cumulative burden can push measurement costs well beyond what any single funder budgets for. Critically, most funders do not fund evaluation costs at the level they require it. The cost is absorbed by the charity, typically at the expense of delivery.
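As an illustration, the cumulative arithmetic can be sketched with entirely hypothetical figures (the framework names and per-framework costs below are invented for the example, not drawn from any real organisation):

```python
# Illustrative sketch of cumulative measurement cost for a small charity.
# All figures and framework names are hypothetical.

annual_income = 200_000  # pounds

# Each funder requires its own framework; the per-framework cost is the
# staff time and tooling needed to collect and report in that format.
framework_costs = {
    "funder_a_outcomes_star": 8_000,
    "funder_b_theory_of_change": 6_000,
    "local_authority_pbr_indicators": 9_000,
}

total_cost = sum(framework_costs.values())
share_of_income = total_cost / annual_income

print(f"Total measurement cost: {total_cost:,} pounds")
print(f"Share of income: {share_of_income:.1%}")
```

The point of the sketch is that each framework's cost looks modest in isolation, but no single funder sees — or budgets for — the total.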
NPC has acknowledged this tension, finding that while awareness of impact measurement had increased substantially, quality of practice remained patchy. Many organisations were going through the motions — collecting data, completing reports — without the analytical capacity to use the data for learning.
The proliferation problem
The deepest structural issue is that there is no shared measurement language across the sector. A charity working on homelessness may need to report against the Homelessness Outcomes Star for one funder, a local authority's outcomes framework for another, a bespoke logic model for a trust, and national rough sleeping statistics for a government contract. These frameworks overlap substantially but are not interoperable.
Inspiring Impact, the sector-wide initiative launched in 2012 with backing from NPC, NCVO, and other infrastructure bodies, was designed to address this. Its shared measurement approach encouraged organisations working on similar issues to use common indicators and tools. The ambition was sound, but adoption remained limited. Funders continued to impose their own frameworks, and charities continued to comply because funding depended on it.
The power dynamics matter. A charity cannot tell a funder that its reporting framework is wasteful, even when it is, because that funder holds the money. IVAR's research on funder-charity relationships has documented this extensively: charities self-censor their concerns about reporting burden because they fear jeopardising their funding.
The evidence
The evidence base on measurement burden is less quantified than one might expect — an irony not lost on the sector. There is no comprehensive figure for how much UK charities collectively spend on impact measurement, partly because the cost is distributed across staff time, consultancy fees, and software licences in ways that are difficult to aggregate.
What the evidence does show is that the burden is real and unevenly distributed. NCVO's UK Civil Society Almanac data demonstrates that smaller organisations have less capacity to absorb administrative requirements. Of the approximately 170,000 charities in England and Wales, the vast majority have no dedicated monitoring and evaluation staff. Impact measurement is done by the same people who deliver services, manage finances, and write funding applications.
NPC's research has documented a gap between aspiration and practice. Large, well-resourced organisations can afford evaluation specialists and commission external evaluations. Small organisations complete funder reports with whatever tools they have and rarely analyse the data they collect.
The Centre for Youth Impact's work on shared measurement in the youth sector found that organisations were frequently collecting data they did not use, in formats that did not serve them, because funders required it. Their push for "measurement for learning" rather than "measurement for accountability" reflects a growing consensus that the current system serves neither purpose well.
IVAR's Open and Trusting Grant-making initiative, now with over 170 funders signed up, includes commitments to proportionate reporting — asking only for information they will actually use, accepting narrative reporting where appropriate, and reducing the volume of monitoring requirements. But these commitments are voluntary and unevenly implemented.
Current context
The measurement burden debate sits within a broader context of sector strain. Rising costs, increased demand, and tightening funding — what NCVO's Road Ahead reports have called the "big squeeze" — mean that every pound spent on compliance is a pound not spent on delivery. The employer National Insurance contribution increase from April 2025, estimated to add 1.4 billion pounds in costs across the sector, has made administrative overheads more painful.
The outcomes-based commissioning agenda has not retreated. Government contracts increasingly tie payment to measured outcomes, and payment-by-results contracts have been criticised by NPC, NCVO, and others for imposing measurement costs disproportionate to contract values, particularly for smaller providers.
There are signs of a shift in funder thinking. The National Lottery Community Fund has simplified reporting for smaller grants. Several major foundations have moved to lighter-touch monitoring. The What Works movement has matured enough to recognise that proportionality matters and that qualitative evidence has value. The Centre for Youth Impact's advocacy for learning-centred measurement is gaining traction, as is IVAR's argument that trust-based grantmaking should extend to how outcomes are reported. But systemic change requires funders to coordinate — to agree on common indicators and accept shared reporting formats. That coordination remains the missing piece.
Last updated: April 2026
What this means for charities
For charities, the practical implications are about negotiation and efficiency. Organisations should be honest with funders about the true cost of measurement. If a funder requires a specific outcomes framework, the cost of implementing it should be included in the grant application as a legitimate programme cost, not absorbed as an unfunded overhead.
Where charities have multiple funders for the same programme, they should proactively propose a single reporting format that meets all funders' core needs, rather than passively accepting parallel requirements. Some funders will agree to this; others will not. But the conversation itself is useful, because it surfaces the cost that funders may not realise they are imposing.
Charities should also distinguish between measurement for learning and measurement for compliance. Data collection that genuinely helps an organisation understand and improve its work is worth doing regardless of funder requirements. Data collection that exists solely to fill in a report template is a cost to be minimised. Building internal evaluation capacity that serves the organisation's own needs, and then adapting outputs for funder reports, is more efficient than building measurement systems around each funder's requirements.
Infrastructure bodies like NPC, NCVO, and the Centre for Youth Impact offer frameworks and guidance that can help organisations develop proportionate measurement approaches.
Common questions
How much does impact measurement cost charities?
There is no single sector-wide figure, but NPC's guidance suggests that monitoring and evaluation typically costs 5-10% of programme budgets when done properly. For charities managing multiple funding streams with different reporting requirements, the aggregate cost can be substantially higher. The hidden cost is often staff time: frontline workers and managers spending hours on data entry, report writing, and outcome tracking that is not reflected in formal evaluation budgets.
Why do funders not accept each other's reports?
The core issue is that each funder has developed its own framework, often reflecting its particular theory of change, strategic priorities, and governance requirements. There is no sector-wide standard, and the institutional costs of switching to a shared system — retraining staff, redesigning internal processes, renegotiating with trustees — are borne by the funder, while the costs of the current fragmented system are borne by charities. The incentives are misaligned.
What is the difference between outputs, outcomes, and impact?
Outputs are what an organisation does: the number of sessions delivered, people reached, or meals served. Outcomes are the changes that result: improved wellbeing, increased confidence, reduced reoffending. Impact is the broader, long-term difference the work makes, net of what would have happened anyway. Most funder reporting focuses on outputs and short-term outcomes because these are easiest to measure. Genuine impact measurement — establishing causality and attribution — is expensive, methodologically demanding, and often unrealistic for small programmes.
Is qualitative evidence taken seriously by funders?
Increasingly, yes, though unevenly. IVAR's Open and Trusting Grant-making commitments include accepting narrative reporting. NPC and the Centre for Youth Impact have both argued that stories of change, case studies, and practitioner reflection are legitimate forms of evidence when used alongside quantitative data. However, many funders — particularly government commissioners — still default to numerical indicators because they are easier to aggregate and compare. The cultural shift toward valuing qualitative evidence is real but incomplete.
What is Inspiring Impact?
Inspiring Impact was a sector-wide initiative launched in 2012 with the goal of making high-quality impact measurement the norm across the UK voluntary sector by 2022. Backed by NPC, NCVO, and other infrastructure bodies, it promoted shared measurement approaches and developed resources to help organisations measure their impact proportionately. While it raised awareness and produced useful tools, it did not achieve the systemic adoption of shared frameworks that was its core ambition. The initiative highlighted both the appetite for better practice and the difficulty of achieving coordination across a fragmented sector.
Can technology solve the reporting burden?
Technology can help but is not a solution in itself. Impact management platforms, shared databases, and automated reporting tools can reduce the administrative cost of data collection and report generation. But technology cannot resolve the underlying problem, which is that funders require different things. A charity using a sophisticated case management system still has to extract data in different formats for different funders. The technology solution that would make the biggest difference — a shared data standard for impact reporting, analogous to what 360Giving has achieved for grant data — does not yet exist.
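To make the missing piece concrete, here is a minimal sketch of the "collect once, report many times" pattern that a shared standard would formalise. Everything here is hypothetical: no such canonical schema exists, and the field names and funder formats are invented for illustration.

```python
# Hypothetical sketch: one canonical outcomes record, transformed into
# two funder-specific report formats. All field names are invented;
# no shared impact-reporting data standard currently exists.

canonical_record = {
    "programme": "youth-mentoring",
    "period": "2025-Q1",
    "participants": 42,
    "outcome_scores": {"confidence": 3.8, "wellbeing": 4.1},
}

def to_funder_a(record: dict) -> dict:
    """Funder A wants flat columns with its own labels."""
    return {
        "Project": record["programme"],
        "Reporting period": record["period"],
        "Beneficiaries reached": record["participants"],
        "Avg confidence score": record["outcome_scores"]["confidence"],
    }

def to_funder_b(record: dict) -> dict:
    """Funder B wants a nested outcomes block and different names."""
    return {
        "scheme_id": record["programme"],
        "quarter": record["period"],
        "outcomes": dict(record["outcome_scores"]),
    }

report_a = to_funder_a(canonical_record)
report_b = to_funder_b(canonical_record)
```

Even without a sector-wide standard, this pattern — data collected once in a format that serves the charity, then mapped outward per funder — is cheaper than running parallel measurement systems. A shared standard, analogous to 360Giving for grants, would make the per-funder transforms unnecessary.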
Key sources and further reading
"Principles of Good Impact Reporting" — Inspiring Impact / NPC, various years. Guidance on proportionate, useful impact measurement for charities, including practical tools and shared measurement frameworks.
Open and Trusting Grant-making — IVAR, ongoing. Framework for better funder practice, including commitments to proportionate reporting, lighter-touch monitoring, and trust-based approaches to evidence. Over 170 funders signed up.
"The Little Blue Book" and Theory of Change guidance — NPC, various years. Foundational resources on how charities can articulate their theory of change and develop proportionate measurement approaches.
"Being Realistic About Measurement" — Centre for Youth Impact, various years. Research and advocacy on learning-centred measurement in the youth sector, including evidence that much current data collection serves compliance rather than learning.
UK Civil Society Almanac — NCVO, annual. The definitive dataset on the UK voluntary sector, including data on organisational capacity and the administrative burden on smaller charities.
What Works Network — UK Government / various centres, ongoing. The infrastructure of evidence centres across policy domains, including education (EEF), crime reduction, ageing, and early intervention. Demonstrates both the value and the limits of outcomes-based approaches.
"Why Restrict Grants?" Evidence Review — IVAR, March 2023. Documents the broader burden of funder-imposed requirements on charities, including the measurement and reporting demands that accompany restricted funding.
The Road Ahead 2025 — NCVO, 2025. Annual sector outlook describing the financial pressures that make measurement burden more damaging, including the employer NIC increase and tightening funding environment.
Public Trust in Charities 2025 — Charity Commission for England and Wales, 2025. Research on public expectations of charity accountability, relevant to the tension between proportionate measurement and the demand for demonstrable impact.