Outcome Measurement for Youth Charities and Services
How youth charities and services measure outcomes effectively, including common frameworks, specific KPIs, tools, and funder expectations for youth programmes in the UK.
Measuring outcomes in youth work is uniquely challenging because the changes you are trying to capture — confidence, resilience, readiness for employment — are deeply personal, develop over time, and resist simple quantification. Yet funders increasingly expect youth charities to demonstrate measurable impact, and the organisations that do this well secure more funding, design better programmes, and serve young people more effectively.
TL;DR: Youth charities in the UK typically measure outcomes across four domains: employability, educational attainment, wellbeing, and social development. The Youth Endowment Fund and Outcomes Star frameworks are the most widely adopted. Effective measurement combines validated tools (such as WEMWBS or the Rickter Scale) with routine data capture built into programme delivery. Funders increasingly require quantitative outcome data alongside narrative reporting, making structured measurement essential rather than optional.
What Makes Youth Outcome Measurement Different?
Youth outcome measurement differs from adult services because the changes being tracked are developmental, highly variable between individuals, and influenced by factors outside the programme's control. A young person's progress is shaped by family circumstances, school environment, peer relationships, and neurological development — none of which your programme controls, but all of which affect the outcomes you are trying to measure.
A significant proportion of the 170,862 registered charities in England and Wales (Charity Commission, March 2025) work directly with children and young people. These organisations collectively reach millions of young people each year, yet the sector has historically struggled with consistent outcome measurement.
The specific challenges include:
- Developmental variability — a 13-year-old and a 19-year-old respond to the same intervention very differently.
- Attribution difficulty — when a young person improves their school attendance, how much is down to your mentoring programme versus a new teacher or a change in home circumstances?
- Engagement barriers — young people are often reluctant to complete surveys or self-assessments, especially when these feel like tests.
- Ethical sensitivity — measuring outcomes for vulnerable young people requires careful consent processes and age-appropriate methods.
- Long time horizons — the most meaningful outcomes of youth work (sustained employment, stable housing, positive relationships) may not materialise until years after the programme ends.
The best youth outcome measurement does not feel like measurement to the young person. It feels like a conversation about how they are doing. The data should be a by-product of good practice, not a separate burden layered on top.
Which Frameworks Are Used for Youth Outcome Measurement?
The Youth Endowment Fund (YEF) toolkit and the Outcomes Star (Young Person variant) are the two most widely adopted frameworks in the UK youth sector. Other common frameworks include the Theory of Change approach, the Rickter Scale, and the Warwick-Edinburgh Mental Wellbeing Scale (WEMWBS).
Definition: Youth Outcome Measurement
Youth outcome measurement is the systematic process of defining, collecting, and analysing data to determine what changes occur for young people as a result of participating in a programme or service. It encompasses both quantitative metrics (such as NEET reduction rates or wellbeing scores) and qualitative evidence (such as case studies and young people's self-reported experiences). Effective youth outcome measurement distinguishes between outputs (what was delivered), outcomes (what changed), and impact (the long-term difference made).
Youth Endowment Fund (YEF) Toolkit
The YEF Toolkit is an evidence-based resource that synthesises research on what works to prevent youth violence and improve outcomes for young people. It rates interventions by their evidence strength and provides standardised outcome measures. Youth charities working in crime prevention, early intervention, or anti-social behaviour increasingly use YEF-aligned metrics to demonstrate evidence-based practice.
Outcomes Star — Young Person Variant
The Outcomes Star is a suite of evidence-based tools developed by Triangle Consulting Social Enterprise. The Young Person variant covers seven domains: accommodation, living skills, mental health and wellbeing, friends and community, education and learning, positive use of time, and a sense of the future. Over 4,000 organisations use Outcomes Star tools across the UK, making it one of the most recognised frameworks in the sector.
WEMWBS (Warwick-Edinburgh Mental Wellbeing Scale)
WEMWBS is a validated 14-item scale (or 7-item short version, SWEMWBS) that measures mental wellbeing. It is widely used across NHS, local authority, and charity settings and is particularly valuable because it provides a standardised, comparable measure that funders recognise. Research from the University of Warwick indicates it is valid for use with young people aged 13 and above.
The Rickter Scale
The Rickter Scale is an interactive assessment tool that uses a visual, interview-based approach rather than written questionnaires. It is particularly effective with young people who have low literacy, learning difficulties, or disengagement from formal assessment methods. It measures distance travelled across customisable life domains and produces quantitative data from a qualitative process.
What KPIs Should Youth Charities Track?
The specific KPIs depend on your programme's objectives, but most youth funders expect data across four core domains: employability, educational attainment, wellbeing, and social development. The table below maps common KPIs to each domain.
| Domain | Common KPIs | Measurement method |
|---|---|---|
| Employability | NEET reduction rate; job starts; sustained employment (6+ months); apprenticeship completions; interview skills score | Destination tracking; employer confirmation; self-report follow-up surveys |
| Educational attainment | School attendance rate; qualification completions; grade improvements; exclusion reduction | School data sharing; certificate verification; attendance registers |
| Wellbeing | WEMWBS score change; self-reported confidence; anxiety reduction; emotional resilience | Pre/post validated scales; regular check-ins; Outcomes Star assessments |
| Social development | Positive peer relationships; community participation; reduced offending; pro-social behaviour | Self-assessment tools; practitioner observation; case notes; partner feedback |
Reducing the number of young people not in education, employment, or training (NEET) remains a key policy priority. According to the ONS, an estimated 946,000 young people aged 16–24 were NEET in the UK in July to September 2025 — 12.7% of the age group and the highest proportion since 2014. Youth charities working in this space typically track NEET status at programme entry, exit, and at three, six, and twelve months post-programme.
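To make that tracking concrete, here is a minimal sketch of how NEET rates at entry and exit might be calculated from participant records. The records, field names, and figures are invented for illustration; they are not part of any standard schema.

```python
# Hypothetical participant records: NEET status (True = NEET) captured
# at programme entry, exit, and a six-month follow-up. Field names are
# invented for this sketch.
participants = [
    {"entry": True,  "exit": False, "six_months": False},
    {"entry": True,  "exit": True,  "six_months": False},
    {"entry": True,  "exit": False, "six_months": True},
    {"entry": False, "exit": False, "six_months": False},
]

def neet_rate(records, point):
    """Share of participants recorded as NEET at the given tracking point."""
    tracked = [r for r in records if point in r]
    return sum(r[point] for r in tracked) / len(tracked)

entry_rate = neet_rate(participants, "entry")      # 0.75
exit_rate = neet_rate(participants, "exit")        # 0.25
reduction_points = (entry_rate - exit_rate) * 100  # 50.0 percentage points
```

The same function applied at the three, six, and twelve-month points shows whether the reduction is sustained, which is the figure funders care about most.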
For wellbeing programmes, the standard expectation is a statistically significant improvement in WEMWBS scores between programme start and end. A change of three or more points on the 14-item scale is generally considered meaningful. Research suggests that effective programmes see average improvements of 3-5 points.
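The pre/post comparison itself is simple arithmetic. A rough sketch, using invented scores for six participants:

```python
from statistics import mean

# Hypothetical paired WEMWBS scores (14-item scale, range 14-70) for six
# participants at programme start (pre) and end (post). Figures invented.
pre  = [42, 38, 45, 50, 36, 41]
post = [47, 41, 46, 55, 42, 43]

changes = [after - before for before, after in zip(pre, post)]
avg_change = mean(changes)                 # mean improvement in points
meaningful = sum(c >= 3 for c in changes)  # participants improving by 3+ points

print(f"Average change: {avg_change:.1f} points")
print(f"Meaningful improvement: {meaningful} of {len(changes)} participants")
```

Note that this sketch only describes the change; before calling an improvement statistically significant, you would need a paired significance test (for example, scipy.stats.ttest_rel) on an adequate sample.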
Sports and activity-based youth programmes often track attendance frequency (regular attendance being a proxy for engagement), alongside self-reported outcomes on confidence, teamwork, and physical health. The Sport England outcomes framework provides a useful starting point for these programmes.
How Do You Collect Youth Outcome Data in Practice?
The most effective approach embeds data collection into programme delivery rather than treating it as a separate administrative task. Methods should be age-appropriate, engaging, and proportionate to the programme's scale.
The Charity Digital Skills Report 2025 found that 68% of small charities remain in the early stages of digital adoption, with 67% citing squeezed finances as the top barrier to progress. As a result, many youth organisations still rely on paper-based methods. However, digital tools significantly improve data quality and reduce the burden on both staff and young people.
Practical collection methods
Pre and post assessments — administer a validated tool (WEMWBS, Outcomes Star, or a bespoke questionnaire) at programme start and end. Keep assessments short — 10 minutes maximum for young people under 16.
Session-level data capture — record attendance, participation, and brief observations after each session. This is where Plinth's case management adds particular value, allowing youth workers to log session notes that automatically feed into outcome dashboards.
Milestone tracking — define key milestones for each young person (completed a CV, attended a job interview, achieved a qualification) and record when they are reached. Milestones provide concrete evidence of progress that funders find compelling.
Follow-up surveys — contact young people at intervals after programme completion to track sustained outcomes. Response rates for follow-up surveys in the youth sector typically range from 25% to 40%, so plan for lower sample sizes and consider incentives.
Participatory methods — involve young people in defining and assessing their own outcomes. Techniques like Most Significant Change, creative journals, and video diaries capture rich qualitative data that brings numbers to life.
Plinth's Surveys feature lets you build age-appropriate questionnaires, distribute them digitally, and have results flow automatically into your outcome dashboards — reducing the gap between data collection and reporting.
What Do Funders Expect from Youth Outcome Data?
Funders expect youth charities to demonstrate a clear link between their activities and measurable changes for young people, supported by a credible methodology and honest reporting of both successes and challenges.
The expectations have shifted significantly over the past decade. The National Lottery Community Fund, which distributes £600 million or more annually and is one of the largest funders of youth work in the UK, now explicitly requires applicants to describe their outcomes framework and measurement approach in grant applications. Most major funders now expect quantitative outcome data alongside narrative reporting.
Specifically, funders look for:
- A Theory of Change — a clear logic model showing how your activities lead to the outcomes you are claiming.
- Validated tools — use of recognised measurement instruments rather than bespoke, unvalidated questionnaires.
- Baseline data — measurements taken before or at the start of the programme, so change can be demonstrated.
- Proportionate sample sizes — not every participant needs to complete every assessment, but your sample should be large enough to be credible.
- Honest reporting — acknowledging where outcomes were not achieved and explaining why. Funders are more suspicious of perfect results than of honest mixed findings.
- Sustained outcomes — evidence that change persists beyond programme completion, particularly for employability and NEET reduction claims.
Plinth's Impact Reporting generates funder-ready reports that pull outcome data directly from your programme records, presenting it in the format funders expect — with baseline comparisons, aggregated scores, and narrative context.
How Do You Handle Attribution in Youth Work?
You acknowledge it transparently and focus on contribution rather than attribution. No youth programme operates in isolation, and funders increasingly accept contribution claims supported by credible evidence rather than demanding proof of sole causation.
Attribution — proving that your programme caused a specific outcome — is the challenge that demands the most intellectual honesty in youth outcome measurement. When a young person moves from NEET status to employment, multiple factors contributed: your programme, their own motivation, their family, their school, the local job market.
Practical approaches include:
- Counterfactual thinking — compare your participants' outcomes with similar young people who did not access the programme, using local or national benchmark data.
- Dose-response analysis — show that young people who engaged more deeply (attended more sessions, completed more milestones) had better outcomes, suggesting a causal relationship.
- Qualitative attribution — ask young people directly what made the difference. Their testimony, while not statistically rigorous, is powerful evidence of contribution.
- Theory of Change alignment — if your outcomes match the predicted pathway in your Theory of Change, this strengthens the contribution claim.
Frequently Asked Questions
What is the minimum age for using validated outcome tools?
WEMWBS is validated for ages 13 and above. The Outcomes Star Young Person variant is designed for ages 16-25, though adapted versions exist for younger age groups. The Rickter Scale can be used with young people of any age with appropriate facilitation. For children under 13, consider the Stirling Children's Wellbeing Scale or the Strengths and Difficulties Questionnaire (SDQ), both of which are validated for younger age groups.
How do we measure outcomes for short programmes?
For programmes lasting less than eight weeks, focus on immediate outcomes (knowledge gained, skills demonstrated, confidence change) rather than sustained outcomes. Use pre and post assessments with a short, validated tool. Be honest with funders about what can realistically be measured in a short timeframe — claiming long-term outcomes from a four-session intervention is not credible.
What if young people refuse to complete assessments?
Make assessments as engaging and non-threatening as possible. Use visual or conversational tools rather than written questionnaires. Explain clearly why you are collecting the data and how it will be used. Offer choice where possible — let young people choose between a paper form, a digital survey, or a conversation with their youth worker. Never make participation in assessment a condition of accessing the programme.
How do we report negative or mixed outcomes?
Report them honestly, with context and learning. A programme where 60% of participants improved and 40% did not is more credible than one claiming 100% success. Explain what you learned from the participants who did not achieve outcomes, and describe how you are adapting the programme in response. Funders value learning and improvement over perfection.
Can we use AI to help with outcome measurement?
Yes, increasingly so. AI tools can analyse qualitative data (case notes, feedback forms, open-ended survey responses) to identify themes and patterns at scale. Plinth's AI features can summarise case notes and surface trends across your caseload, helping you spot outcome patterns that manual analysis would miss. AI is best used as an analytical tool supporting human interpretation, not as a replacement for validated measurement instruments.
Recommended Next Pages
- What Is Outcome Measurement? — foundational concepts and principles for any charity measuring outcomes.
- How to Design Outcome Surveys — practical guidance on building effective questionnaires.
- Software for Youth Charities — platform comparison for organisations working with young people.
- Charity KPI Examples — a broader library of KPIs across different charity types.
- Measuring Grant Impact — strategies for demonstrating impact to funders.
Last updated: February 2026
Measuring outcomes for your youth programme? Book a demo or contact our team.