Measuring Outcomes in Case Management

How charities and local authority teams can measure and evidence the outcomes of their case management work. Practical guidance on outcome frameworks, data collection, and reporting to funders.

By Plinth Team

Measuring Case Outcomes - An illustration showing outcome data flowing from case records into reports and funder evidence

Measuring outcomes is one of the most important and most underinvested aspects of case management practice. The organisations that do it well can demonstrate impact compellingly, retain funding, and continuously improve their services. Those that struggle with it often find themselves spending days compiling numbers before reporting deadlines — and still unable to tell a convincing story about what changed for the people they supported.

What you'll learn: The difference between outputs and outcomes, how to design an outcome measurement approach that fits your service, and how to build measurement into your day-to-day case management practice rather than treating it as a separate exercise.

Why it matters now: Funders, commissioners, and local authorities are increasingly sophisticated in what they ask for. Outputs (how many people you served) are no longer sufficient. They want to know what changed for those people — and they want reliable evidence, not anecdotes.

Plinth's approach: Plinth's case management tools are designed to make outcome recording a natural part of case practice, not a burden added on top of it.

Outputs vs Outcomes: The Fundamental Distinction

Many organisations conflate outputs and outcomes. Understanding the difference is essential.

Outputs are the activities your organisation delivers: number of people supported, number of sessions delivered, number of referrals made. They describe what you did.

Outcomes are the changes that resulted from your work: a family moves into stable housing, a young person returns to school, an individual reduces their dependency on crisis services. They describe what changed.

Why the Distinction Matters: Funders fund outcomes, not outputs. An organisation that supported 200 people but cannot say what changed for them is much harder to fund than one that supported 80 people and can demonstrate specific, measurable improvements in their lives.

The shift from output to outcome measurement is a cultural and systems change, not just a reporting change.

Designing an Outcome Framework

An outcome framework is the structured approach to defining, collecting, and reporting outcomes. Getting this right at the design stage saves enormous effort later.

Start With Your Theory of Change

What is your service trying to achieve, for whom, and how?

Beneficiary Group: Who are you working with? What are their presenting circumstances?

Intended Changes: What do you intend to change for them? Be specific — "improved wellbeing" is too vague; "reduced anxiety scores as measured by GAD-7" is measurable.

Mechanisms: How does your service produce those changes? The logic of how your activities lead to outcomes is your theory of change.

Timeframe: When do you expect to see the outcomes? Some outcomes are achievable within a three-month intervention; others emerge over years.

A clear theory of change makes every subsequent measurement decision much easier.

Choose the Right Outcome Domains

Outcome domains are the areas of life in which you are seeking to make a difference.

Common Domains for Charity and Prevention Services:

  • Housing stability
  • Mental health and wellbeing
  • Employment and financial security
  • Family relationships
  • Physical health
  • Education and skills
  • Community connection and social networks
  • Ability to manage independently

Focus and Depth: Trying to measure everything leads to measuring nothing well. Choose the three or four domains most central to your service and measure them properly.

Standardised Tools: Where possible, use standardised validated tools within each domain (such as the Warwick-Edinburgh Mental Wellbeing Scale, Outcomes Star, or PHQ-9 for mental health) so your data is comparable and credible.

Define Data Collection Points

Decide when outcome data will be collected and who is responsible for collecting it.

At Case Opening (Baseline): Recording the service user's situation at the point of entry gives you the baseline against which change will be measured. Without this, you cannot demonstrate improvement.

During Support (Progress): Regular check-ins against outcome domains show direction of travel and help workers identify when progress has stalled.

At Case Closure (Outcome): Closing a case should always include recording the service user's situation at exit, compared against the baseline.

Follow-Up (Long-Term Impact): Where feasible, follow-up contacts 3–6 months after closure can evidence whether change was sustained.

Plinth's case status and lifecycle management supports structured recording at each of these points.

Building Outcome Measurement into Case Practice

The most reliable way to ensure good outcome data is to make measurement part of normal case practice, not a separate administrative exercise.

Integrate Into Case Notes

Workers should be able to record outcome-relevant information within their normal case notes rather than completing separate forms.

Natural Recording: A worker recording "client reported feeling more in control of finances since benefits claim was resolved" is recording an outcome indicator. The system should make it easy to surface and categorise these observations.

Structured Fields: Where specific outcome measures are required (such as validated scales), these can be structured as fields within the case management system rather than separate paper forms.

Minimal Burden: The more outcome recording is integrated into normal workflow, the better the data quality. If outcome measurement feels like an additional burden, compliance will be poor.

Case Opening and Closure Quality

The most important moments for outcome recording are case opening (baseline) and closure (outcome).

Opening Assessment: A structured opening assessment that records the presenting situation across your outcome domains takes 10–15 minutes but is the foundation of all subsequent outcome evidence.

Closure Review: At closure, workers should record what changed since opening, the primary reason for closure, and the service user's situation at exit. This takes a few minutes and generates the core of your impact evidence.

Plinth's case management features support structured recording at both opening and closure, making this a natural part of workflow rather than an afterthought.

Use AI to Surface Outcome Evidence

Plinth's AI analysis tools can help workers and managers identify outcome evidence that is already present in case notes but not yet surfaced.

Pattern Identification: AI analysis can identify recurring themes across case notes — improvements in presentation, changes in language, specific achievements mentioned — that represent outcome evidence.

Case Narrative Generation: AI-generated case summaries can help workers and managers efficiently construct outcome narratives for reporting.

Quality Review: Before closing a case, an AI review of the full case history can identify outcome evidence that should be included in the closure record.

Reporting Outcomes to Funders

Good data is only valuable if it is communicated well.

What Funders Want to See

Most funders are looking for three things:

Numbers: How many people did you support? How many achieved the intended outcomes? What proportion? These numbers need to be accurate and consistent with your case management records.

Stories: What changed for specific individuals? Case studies that bring outcomes to life — with appropriate anonymisation — are often more compelling than statistics alone.

Attribution: Can you demonstrate a plausible link between your service and the outcomes? You don't need a randomised controlled trial, but you need a coherent account of why your service caused the change.

Structuring Your Report

Lead With Outcomes: Put the headline outcome data at the front, not the activities. "85% of families we supported achieved stable housing within three months" is more compelling than "we delivered 420 support sessions."

Compare to Baseline: Show the change. "At case opening, 40% of our service users had stable housing; at case closure, 85% did" is powerful evidence.

Include Complexity: Acknowledge the challenges as well as the successes. Funders trust organisations that are honest about what was hard.

Service User Voice: Where possible, include quotes or case studies that represent the service user's experience of change in their own words.
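The "compare to baseline" arithmetic above is simple but worth getting right, because headline percentages must be reproducible from your case records. A minimal sketch, using invented counts that match the 40%/85% example:

```python
# Illustrative counts only -- replace with figures drawn from your own
# case management records.
cases_closed = 120
stable_housing_at_opening = 48    # baseline: situation recorded at case opening
stable_housing_at_closure = 102   # outcome: situation recorded at case closure

baseline_rate = stable_housing_at_opening / cases_closed
closure_rate = stable_housing_at_closure / cases_closed

print(f"At opening: {baseline_rate:.0%} had stable housing; "
      f"at closure: {closure_rate:.0%} did.")
# At opening: 40% had stable housing; at closure: 85% did.
```

Note that both percentages use the same denominator (all closed cases), which is what makes the before/after comparison valid.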

Common Outcome Measurement Challenges

Data Quality Problems

Inconsistent Recording: If different workers record opening assessments differently, your data is not comparable. Invest in training and use standardised tools.

Missing Closures: Cases that drift to closure without a proper closure record are the most common gap in outcome data. Make closure recording a non-negotiable part of the case management process.

Survivorship Bias: Organisations often report better outcomes than they actually achieve because they are better at recording closures for cases that ended well. Track all closures, including those where contact was lost or outcomes were not achieved.
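The denominator point above can be made concrete with a small worked example. The counts are invented for illustration, but the contrast is the one that matters: dropping lost-contact cases from the denominator inflates the reported outcome rate.

```python
# Illustrative closure counts -- the honest rate divides by ALL closures,
# including cases where contact was lost.
closures = {
    "outcome_achieved": 60,
    "outcome_not_achieved": 20,
    "contact_lost": 20,
}

achieved = closures["outcome_achieved"]
all_closed = sum(closures.values())

honest_rate = achieved / all_closed  # all 100 closures in the denominator
biased_rate = achieved / (achieved + closures["outcome_not_achieved"])

print(f"All-closures rate: {honest_rate:.0%}")   # 60%
print(f"Recorded-only rate: {biased_rate:.0%}")  # 75% -- flattering but misleading
```

Reporting the 60% figure, with the lost-contact cases acknowledged, is more credible to funders than the 75% figure.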

Attribution Challenges

Confounding Factors: Change in service users' lives is rarely caused entirely by your service. Be honest about the role other factors play.

Contribution vs Attribution: Good reporting frames your role as a contribution to change, not sole cause. "Our support contributed to..." is more honest and more credible than "our service caused..."

Plinth's AI tools can help construct nuanced attribution narratives by drawing on the full case history.

Frequently Asked Questions

What outcome frameworks work best for charity case management?

This depends on your service area and funder requirements.

Widely Used Frameworks: Outcomes Star (suitable for a range of service types), Warwick-Edinburgh Mental Wellbeing Scale (wellbeing), PHQ-9 and GAD-7 (depression and anxiety), Housing Outcomes Framework.

Local Authority Frameworks: Teams commissioned by local authorities may need to use the council's outcome framework, which varies by borough.

Custom Frameworks: Many organisations develop their own outcome frameworks based on their theory of change. Plinth's recording structure can accommodate any framework.

How do we measure outcomes for hard-to-reach service users?

Hard-to-reach populations often have the most complex needs and the hardest-to-measure outcomes.

Lower Threshold Indicators: For very hard-to-reach groups, outcome indicators may need to be proximate measures — increased engagement, reduced crisis contacts — rather than direct measures of change.

Longitudinal Tracking: For populations with long support journeys, outcome measurement needs to be longitudinal, not just point-in-time.

Qualitative Evidence: For populations where quantitative measurement is difficult, rich qualitative case records can provide compelling evidence of impact.

Can outcome data help us improve our service?

Absolutely — this is one of the most valuable uses of outcome data.

Identifying What Works: Analysis of which interventions, workers, or pathways are associated with better outcomes helps organisations learn and improve.

Identifying Gaps: Outcome data can reveal which service user groups are not achieving the expected outcomes — pointing to gaps in the service offer.

Informing Service Design: Regular outcome review should be part of service improvement cycles, not just funder reporting.

Last updated: February 2026

To learn how Plinth supports outcome measurement and impact reporting, book a demo or contact our team.