Case Studies vs Metrics: What Impact Reporting Really Needs
The tension between qualitative case studies and quantitative metrics in impact reporting — when to use each, how to combine them, and what funders actually want.
A single powerful case study can convey something that no spreadsheet ever will — the texture of a person's struggle, the moment something changed, the human reality behind the numbers. A well-constructed set of metrics can convey something a case study never can — the scope of change across a whole population, the consistency of outcomes over time, the credibility that comes from systematic evidence.
Impact reporting needs both. The tension between case studies and metrics is real, but it does not force a choice between them. Funders who ask for numbers alone are missing the story that makes the numbers matter. Charities that lead only with stories are leaving themselves vulnerable to the perfectly reasonable question: "But how many people did this actually affect?"
Getting the balance right is not a writing exercise — it is an evidence strategy. It requires understanding what each type of evidence does well, where each type fails, what different funders actually want (which varies more than most guidance acknowledges), and how to build data collection systems that generate both types of evidence without overwhelming frontline teams.
What you will learn:
- What metrics and case studies each do well — and where each type of evidence breaks down
- What the research says about what funders actually want
- How to structure an impact report that uses both types of evidence effectively
- Practical systems for collecting case study material and outcome data simultaneously
- When a single case study is worth more than a page of numbers
Who this is for: Charity managers, communications leads, monitoring and evaluation officers, and anyone responsible for producing impact reports for funders, trustees, or the public.
The Case for Metrics: Scale, Consistency, Credibility
Metrics are the backbone of impact reporting because they answer the most fundamental questions a funder can ask: how many people did you reach, and did outcomes improve? Without metrics, there is no way to know whether the case study you have chosen is representative or exceptional.
The value of metrics lies in four specific properties.
Scale. A case study shows one person's story. Metrics show how many people experienced something similar. If you supported 847 people through a mental health programme and 71% reported improved wellbeing at six months, that is a claim about 847 people. A case study is a claim about one.
Comparability. Consistent metrics allow comparison across cohorts, delivery sites, time periods, and — increasingly, as sector-wide frameworks develop — across organisations. This comparability is what allows funders to make evidence-based decisions about where to invest.
Accountability. Metrics are harder to cherry-pick than case studies. If you commit to measuring a specific outcome with a validated tool and report the results — including for beneficiaries who did not improve — that is a form of accountability that a curated case study cannot provide.
Credibility with sceptical audiences. Trustees, statutory funders, and evidence-focused foundations often approach qualitative evidence with scepticism. Metrics, particularly those derived from validated measurement tools, signal methodological rigour.
However, metrics have well-documented weaknesses. Most significantly, metrics measure what is measurable — which is not always the same as what matters. The Charity Awards have noted that the charity sector often defaults to counting things that are easy to count (sessions delivered, people reached) rather than measuring outcomes that are harder to track but more meaningful (sustained behaviour change, improved relationships, increased confidence) (Charity Awards, 2019). A metric that is easy to collect but loosely connected to your theory of change is worse than no metric at all, because it creates a false sense of rigour.
The Case for Case Studies: Depth, Meaning, and Human Reality
A well-constructed case study does something no metric can do: it explains the mechanism of change. It answers the question that sits behind every set of outcome data — "how does this actually work for real people?"
Consider a metric showing that 68% of programme participants reported improved housing stability at six months. That is a meaningful finding. But it does not explain whether stability came from practical support navigating the housing system, from developing coping skills that reduced eviction risk, from the relationships built with key workers, or from something else entirely. A case study that traces one person's journey through the programme — the specific conversations that mattered, the moment they understood their tenancy rights, the week a support worker stayed late to help them respond to a council letter — explains the mechanism in a way the metric cannot.
This matters not just for storytelling but for learning. Funders and charities who understand the mechanism of change are in a far better position to replicate it, adapt it for different populations, and identify which elements of a programme are doing the most work.
Case studies also serve a critical function in capturing outcomes that cannot be measured with scales and surveys. Social connection, dignity, confidence, a sense of belonging — these are real outcomes for many beneficiaries, and they resist metric form. A case study is often the most honest way to evidence them.
The risks of relying on case studies alone are equally real. Case studies are selected, which creates a strong bias towards positive stories. LSE researcher Julia Morley studied 128 websites of charities and social purpose organisations and found that impact reports appeared to function primarily as reputation management tools rather than genuine evidence of effectiveness — and that case studies were a key mechanism in this (Morley, 2016). When charities choose their most inspiring stories and present them as evidence without any quantitative context, they are not misrepresenting any individual's experience, but they are misrepresenting the overall picture.
What Funders Actually Want: The Research
The gap between what funders say they want and what they actually reward with funding is a long-standing tension in the sector — but the evidence on funder preferences has become clearer in recent years.
Esmée Fairbairn Foundation, one of the UK's largest independent funders, explicitly asks grantees to think about what mix of qualitative and quantitative analysis could help tell the story of their goals, the people or places they hoped to support, and how their work has created change. Both are requested, with neither treated as optional.
IVAR's Better Reporting principles — developed in partnership with both funders and grantees — are explicit that funders want to understand what you hoped to change, what actually happened, whether it went to plan, and what was learned. Crucially, IVAR notes that funders do not expect perfect outcomes and that honest reflection is valued over forced success stories. More than 1,200 charities that have engaged with IVAR's open and trusting framework report that this approach enables them to deliver better for the communities they serve (IVAR, 2024).
The broader research picture confirms this. Best practice guidance from across the sector consistently identifies the essential components of strong impact reporting as: quantitative metrics aligned with a theory of change, qualitative evidence from stakeholder voices, and analysis that explains patterns across both data types. Not one or the other — all three.
Where funder preferences genuinely vary is in sophistication and emphasis. Evidence-focused foundations with dedicated evaluation teams tend to place higher weight on methodological rigour — randomised control trials, validated measurement tools, independent evaluation. Community-focused funders and those committed to participatory grantmaking tend to place higher weight on beneficiary voice and qualitative evidence of lived experience. Understanding where your funder sits on this spectrum is important when making decisions about how to weight your reporting.
The Output/Outcome Trap: Why Many Impact Reports Fail Both Tests
Many impact reports fail to satisfy funders on either metrics or case studies because they are reporting outputs rather than outcomes — and this undermines both types of evidence.
Outputs are what you do: the number of sessions delivered, the number of people who attended, the number of meals provided, the number of training hours completed. Outcomes are what changes as a result: improved wellbeing, sustained employment, stronger family relationships, reduced social isolation.
The distinction matters for case studies as much as metrics. A case study that describes how a person attended twelve sessions of a financial capability programme, completed all their homework, and was rated "highly engaged" by their support worker is an output story. A case study that describes how that same person went on to clear a debt they had carried for six years, negotiate a repayment plan with their landlord, and, for the first time, feel in control of their money — that is an outcome story.
The National Council of Nonprofits notes that while funders increasingly emphasise outcomes, most organisations continue to report primarily on outputs rather than the changes those outputs produce. The fix requires not just different data collection but a different framing question: instead of "what did we do?", ask "what changed, and for whom?"
Comparison: When to Use Case Studies vs When to Use Metrics
| Reporting Need | Case Studies | Metrics |
|---|---|---|
| Showing scale of reach | Weak | Strong |
| Explaining mechanism of change | Strong | Weak |
| Demonstrating consistency of outcomes | Weak | Strong |
| Capturing hard-to-measure outcomes | Strong | Weak |
| Building credibility with sceptical audiences | Moderate | Strong |
| Making the data human and relatable | Strong | Weak |
| Comparing outcomes across cohorts or sites | Very weak | Strong |
| Evidencing outcomes for individual beneficiaries | Strong | Weak |
| Supporting learning and programme improvement | Strong | Moderate |
| Meeting statutory reporting requirements | Insufficient alone | Usually required |
The table above is not a guide to choosing one over the other — it is a guide to what each does best within a report that uses both.
Structuring an Impact Report That Uses Both Effectively
The most effective impact reports do not separate qualitative and quantitative evidence — they weave them together so that each type of evidence illuminates the other.
A strong structure works as follows.
Open with the outcome picture (metrics). Start with the headline numbers: how many people you worked with, what proportion experienced meaningful change on your primary outcome measure, and how that compares with your original targets. This establishes the scale and credibility of what follows.
Explain the mechanism with a case study. Immediately after the headline metrics, introduce a case study that explains how the programme works for a real person. The case study should be selected to be representative — not the most exceptional outcome, but one that reflects the typical journey — and should be explicitly framed as such.
Use metrics to show consistency. Break down your outcome data by relevant sub-groups: by gender, age, referral pathway, delivery site, or whatever dimensions are relevant to your programme. This shows that positive outcomes are not the exception but the pattern.
Use case studies to explain variation. If your metrics show that outcomes vary by sub-group or delivery site, a targeted case study or qualitative analysis can explain why. This turns a potentially awkward finding into a learning opportunity.
Close with honest reflection. Good impact reports include something that did not work as expected and what was learned from it. This is more valuable to a funder than a report that reads as entirely positive — because it signals that your measurement approach is genuinely capturing reality, not just selecting for success.
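The "use metrics to show consistency" step above usually amounts to a simple sub-group breakdown of your outcome data. A minimal sketch of that calculation — the record fields and site names here are illustrative, not drawn from any real system:

```python
from collections import defaultdict

# Illustrative records: one row per beneficiary (field names are hypothetical)
records = [
    {"site": "Leeds", "improved": True},
    {"site": "Leeds", "improved": True},
    {"site": "Leeds", "improved": False},
    {"site": "Bradford", "improved": True},
    {"site": "Bradford", "improved": False},
    {"site": "Bradford", "improved": False},
]

def breakdown(rows, key):
    """Proportion of beneficiaries who improved, per sub-group."""
    totals, improved = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row[key]] += 1
        improved[row[key]] += row["improved"]  # True counts as 1
    return {group: improved[group] / totals[group] for group in totals}

print(breakdown(records, "site"))  # Leeds: 2/3 improved; Bradford: 1/3 improved
```

The same function works for any dimension you have recorded — gender, age band, referral pathway — which is exactly what makes a variation worth explaining with a targeted case study visible in the first place.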
Collecting Both Types of Evidence Without Burning Out Your Team
The practical barrier to generating both metrics and case studies is time — specifically, the time of frontline staff who are already stretched. The solution is to build both into the same data collection touchpoints rather than treating them as separate tasks.
At the point of service exit or programme completion, a single conversation with a beneficiary can generate both types of evidence: a standardised outcome survey for the metrics, and a brief structured interview (or even a voice note) that captures the qualitative story. If the conversation is recorded (with consent) and transcribed using AI tools, the case study material can be produced from that transcript with minimal additional effort.
Platforms like Plinth are designed to support exactly this workflow — capturing structured outcome data and qualitative evidence within the same system, so that reporting does not require assembling data from multiple sources at the end of a grant period.
The Charity Digital Skills Report 2024 found that 31% of charities describe themselves as poor at collecting, managing, and using data — a persistent problem that has resisted conventional training-based solutions (Charity Digital Skills Report, 2024). The answer is not to train frontline staff to be better data collectors. It is to make the data collection so embedded in natural interactions that it does not feel like a separate task at all.
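One way to make that embedding concrete is to treat the exit conversation as a single record that holds both evidence types. A sketch of what such a record might look like — every field name here is an assumption for illustration, not a schema from Plinth or any other platform:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ExitRecord:
    """One service-exit conversation captured as a single record.

    Combines the standardised outcome scores (the metric) with the
    qualitative material (the case study source), so neither requires
    a separate data collection task. All field names are illustrative.
    """
    beneficiary_id: str
    exit_date: date
    wellbeing_before: int       # e.g. validated scale score at programme entry
    wellbeing_after: int        # same scale at exit
    consent_to_quote: bool = False
    transcript: str = ""        # AI transcript of the recorded conversation

    @property
    def improved(self) -> bool:
        return self.wellbeing_after > self.wellbeing_before

# One conversation yields the metric (score change) and the story (transcript)
record = ExitRecord("B-0412", date(2026, 1, 15), 14, 21, consent_to_quote=True)
print(record.improved)
```

Because the quantitative scores and the qualitative transcript live in the same record, end-of-grant reporting becomes a query over one dataset rather than an exercise in reconciling spreadsheets with a folder of interview notes.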
A Note on Consent and Anonymisation
Case studies involve real people, and the way you collect, store, and use their stories has real ethical and legal implications. Under UK GDPR, beneficiaries must give informed consent to the use of their personal information in reports and publications. This consent should be specific — what their story will be used for, who will see it, and how long it will be retained.
In practice, many charities collect case studies anonymously or with minimal identifying detail, using phrases like "a service user from our Leeds programme" rather than full names. This is often appropriate, particularly for beneficiaries who are in vulnerable situations, where identification could cause harm. Where beneficiaries are happy to be named and the context is appropriate, named case studies carry more credibility — but the decision should always rest with the individual, not the organisation.
FAQ
What proportion of an impact report should be case studies vs metrics?
There is no universal rule, but a rough guide is: lead with metrics (perhaps 40-50% of the evidential content), use case studies to illuminate and humanise the data (30-40%), and use qualitative analysis or funder feedback to round out the picture (10-20%). The right balance depends on your audience, programme type, and the strength of your data.
Can a case study ever substitute for outcome metrics?
In rare cases, yes — particularly for very small programmes or for outcomes that genuinely resist measurement. But for most charities and most funders, the answer is no. A case study alone cannot show that outcomes were consistent across the population served.
How many case studies does a strong impact report need?
Quality over quantity. Two or three carefully chosen, well-written case studies that reflect the range of beneficiaries you serve will be more compelling than ten brief anecdotes. Each case study should have a clear structure: the situation before, what the programme did, and what changed as a result.
What makes a case study "representative"?
A representative case study reflects the typical journey through your programme — not your best outcome, not your most dramatic transformation. Look at your outcome data first, identify a beneficiary who sits around the median improvement, and build your case study from that person's experience.
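The "sits around the median" step above can be made mechanical. A minimal sketch, assuming you hold a list of beneficiary IDs with their improvement scores (the IDs and scores below are invented for illustration):

```python
import statistics

# Hypothetical (beneficiary_id, improvement_score) pairs from outcome data
improvements = [("B-01", 2), ("B-02", 9), ("B-03", 5), ("B-04", 4), ("B-05", 7)]

def most_representative(pairs):
    """Return the beneficiary whose improvement sits closest to the median."""
    med = statistics.median(score for _, score in pairs)
    return min(pairs, key=lambda p: abs(p[1] - med))

print(most_representative(improvements))  # → ('B-03', 5): the median improvement
```

Starting from the median rather than the maximum is the whole point: the shortlist it produces is defensibly typical, and consent and safeguarding considerations then narrow it to the person whose story you actually tell.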
Do funders actually read case studies?
Research from IVAR suggests that funders want to understand what actually happened for real people — and case studies are the most natural vehicle for this. Programme officers under pressure do skip lengthy narrative sections, which is an argument for making case studies concise and compelling rather than an argument for omitting them.
How do I collect case studies without burdening frontline staff?
Embed collection into existing touchpoints: exit conversations, review meetings, or brief follow-up calls. Use voice recording (with consent) so that staff do not have to write anything down. AI transcription tools can convert voice recordings into draft case study text, which then needs only light editing. The key is making the collection as frictionless as possible.
What if a beneficiary's story has a mixed outcome?
Use it. A case study that describes real ambiguity — progress that was hard-won, setbacks along the way, an outcome that is positive but not transformative — is more credible than a story of uncomplicated success. Funders who are serious about evidence will appreciate the honesty.
Recommended Next Pages
- Using AI to Spot Patterns in Impact Data — How AI surfaces insights from your metrics that humans miss
- Common Pitfalls in Measuring Impact — The mistakes that undermine both your case studies and your metrics
- How to Collect Charity Case Studies — A practical guide to collecting case study evidence at scale
- Charity Impact Report Guide — End-to-end guidance on producing a compelling annual impact report
- What Evidence Do Funders Require? — Understanding what specific evidence types different funders expect
Last updated: February 2026