How to Design Effective Outcome Surveys for Funded Programmes

A practical guide to designing outcome surveys that capture meaningful data for funded charity programmes. Covers question design, validated scales, survey structure, and best practices for pre-and-post measurement.

By Plinth Team


Effective outcome surveys are the foundation of impact measurement for funded programmes — but poorly designed surveys produce unreliable data that undermines your funder reports and programme decisions. This guide provides a practical, step-by-step approach to designing surveys that capture meaningful outcome data while keeping completion rates high and participant burden low.

TL;DR: Start with your Theory of Change to identify what to measure. Use validated scales where possible (SWEMWBS, ONS4, Rosenberg). Keep surveys under 15 questions and 10 minutes, combining closed questions (70%) with open-ended questions (30%). Pilot with real participants before deployment, and use Plinth to build, distribute, and analyse responses with AI.

Who this is for: Impact officers, programme managers, and charity leaders designing surveys to measure outcomes.


Step 1: Start with Your Outcomes, Not Your Questions

The most common mistake in survey design is starting by writing questions. Instead, start with the outcomes your programme is designed to achieve.

Map your questions to your Theory of Change:

  1. Identify the 3-5 key outcomes from your Theory of Change or logic model
  2. For each outcome, determine what change looks like and how you would recognise it
  3. Then — and only then — write or select questions that measure that specific change

Example mapping:

| Programme Outcome | Observable Change | Survey Question Approach |
| --- | --- | --- |
| Improved mental wellbeing | Higher self-reported wellbeing scores | SWEMWBS validated scale (7 items) |
| Increased confidence | Participants feel more capable | Likert scale: "I feel confident in my ability to..." |
| Better financial management | Changed behaviours around money | Multiple choice: frequency of budgeting, saving |
| Stronger social connections | More regular contact with others | Likert scale + open question about relationships |

Research from the What Works Centre for Wellbeing shows that charities using outcome-mapped survey design collect 60% more relevant data than those using generic satisfaction surveys, while asking 30% fewer questions.


Step 2: Choose Between Validated Scales and Custom Questions

Validated Scales

Validated scales are pre-tested questionnaires with established reliability and normative data. They are the gold standard for outcome measurement because they allow comparison with national benchmarks and other organisations.

Commonly used validated scales for UK charities:

Short Warwick-Edinburgh Mental Wellbeing Scale (SWEMWBS) — 7 items measuring positive mental wellbeing. Free to use. Validated for ages 13+. The most widely used wellbeing measure in the UK charity sector, with over 5,000 organisations using it. Recommended by the National Lottery Community Fund and the Office for Health Improvement and Disparities (OHID).

ONS4 Personal Wellbeing Questions — 4 items covering life satisfaction, worthwhileness, happiness, and anxiety. Used in the UK national wellbeing survey, enabling comparison with population-level data. Free and widely recognised by funders.

Rosenberg Self-Esteem Scale — 10 items measuring global self-esteem. Extensively validated across populations. Particularly useful for programmes targeting confidence and self-worth.

Patient Health Questionnaire (PHQ-9) — 9 items screening for depression severity. Clinically validated and widely used in health-related programmes. Requires appropriate safeguarding protocols if scores indicate clinical levels of depression.

General Self-Efficacy Scale (GSE) — 10 items measuring belief in one's ability to handle challenges. Validated in over 30 countries. Useful for employability, skills, and empowerment programmes.
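To make the scoring concrete, here is a minimal Python sketch of computing raw SWEMWBS scores: each of the 7 items is answered 1-5, and the raw score is simply their sum (range 7-35). The response data is illustrative, and note that formal SWEMWBS reporting also applies the published raw-to-metric conversion table, which is not reproduced here.

```python
# Hypothetical responses: each inner list holds one participant's answers
# to the 7 SWEMWBS items, scored 1 (none of the time) to 5 (all of the time).
responses = [
    [3, 4, 3, 4, 3, 4, 3],
    [2, 2, 3, 3, 2, 3, 2],
    [5, 4, 4, 5, 4, 4, 5],
]

def swemwbs_raw_score(items):
    """Raw SWEMWBS score: the sum of the 7 item scores (range 7-35)."""
    if len(items) != 7 or not all(1 <= i <= 5 for i in items):
        raise ValueError("SWEMWBS requires 7 items scored 1-5")
    return sum(items)

raw_scores = [swemwbs_raw_score(r) for r in responses]
mean_raw = sum(raw_scores) / len(raw_scores)
print(raw_scores)  # [24, 17, 31]
print(mean_raw)    # 24.0
```

The same pattern (sum or average of items, validated range check) applies to most of the scales above; only the item count and response range change.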

Custom Questions

Custom questions are necessary when validated scales do not cover your specific programme outcomes. The key is to follow good survey design principles.

Rules for writing effective custom outcome questions:

  • Ask about change, not satisfaction. "Has your confidence in managing money changed since starting this programme?" is an outcome question. "How satisfied are you with this programme?" is a satisfaction question — useful but different.
  • Be specific and behavioural. "How often do you set a weekly budget?" measures behaviour. "Do you feel better about money?" is too vague to measure reliably.
  • Use consistent scales. If using Likert scales, keep the same number of points (5 or 7) and the same direction (1 = strongly disagree, 5 = strongly agree) throughout. Research shows 5-point scales provide optimal reliability for most charity contexts.
  • Avoid double-barrelled questions. "Has the programme improved your confidence and motivation?" asks about two things. Split into separate questions.
  • Avoid leading questions. "How much has this excellent programme helped you?" assumes a positive answer. Use neutral framing: "To what extent, if at all, has the programme helped you?"
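The last rule above mentions reverse-coded items. If a survey does include a reverse-worded item, it must be recoded before analysis so that a higher score always means the same thing across items. A minimal sketch, assuming a 5-point scale (the item wording in the comment is hypothetical):

```python
SCALE_POINTS = 5

def recode_reversed(score, points=SCALE_POINTS):
    """Flip a reverse-worded Likert item so higher = more positive.
    On a 5-point scale: 1 <-> 5, 2 <-> 4, and 3 stays 3."""
    if not 1 <= score <= points:
        raise ValueError(f"score must be between 1 and {points}")
    return points + 1 - score

# Hypothetical reverse-worded item: "I often feel unsure handling money."
# A respondent answering 4 (agree) is recoded to 2 on the positive direction.
print(recode_reversed(4))  # 2
print(recode_reversed(1))  # 5
```

Recoding at the analysis stage, rather than flipping wording mid-survey, keeps the questionnaire readable while keeping the data consistent.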

Step 3: Structure Your Survey for Maximum Completion

Survey structure directly affects completion rates. A study published in the International Journal of Social Research Methodology found that survey structure accounts for up to 25% of the variance in completion rates, independent of survey length.

Opening Section

Start with easy, non-threatening questions that build momentum. Demographic questions or simple factual questions work well here. Do not start with sensitive or complex items.

Example opening:

  • How long have you been attending the programme?
  • Which sessions have you attended? (checklist)

Core Outcome Section

Place your validated scales and key outcome questions in the middle of the survey, when participants are engaged but not yet fatigued. Group related questions together with clear section headings.

Example structure for a wellbeing programme:

  • Section 1: About you (2-3 demographic questions)
  • Section 2: Your wellbeing (SWEMWBS — 7 items)
  • Section 3: Your confidence (3-4 custom Likert items)
  • Section 4: Your experience (2-3 open-ended questions)

Closing Section

End with open-ended questions that give participants a voice. These are the questions that generate the rich qualitative data Plinth's AI analysis can process. They also leave participants feeling heard.

Effective closing questions:

  • "What, if anything, has changed for you as a result of this programme?"
  • "Is there anything else you would like to tell us about your experience?"
  • "What could we do differently to better support you?"

Step 4: Design for Pre-and-Post Comparison

Outcome measurement typically requires comparing a participant's responses at two or more time points. This pre-and-post design is the most practical approach for demonstrating distance travelled.

Baseline (Pre) Survey Design

Timing: Distribute at or before the first session. Completion rates for baseline surveys are highest when participants complete them as part of their registration or induction process.

Content: Include all outcome measures plus demographic information. The baseline survey can be slightly longer than the endpoint survey (up to 20 questions) because participants are motivated and curious at the start.

Framing: Frame questions in the present tense: "Right now, I feel confident in my ability to manage my finances" (1-5 scale).

Endpoint (Post) Survey Design

Timing: Distribute at or near the final session. Avoid waiting until after the programme ends — response rates drop by 40-60% once participants are no longer attending.

Content: Include the same outcome measures as the baseline (using identical wording and scales) plus programme feedback questions. Keep it under 15 questions to maximise completion.

Framing: Use identical framing to the baseline for outcome questions. Add retrospective questions only as supplements: "Compared to before the programme, my confidence in managing my finances is..." (much lower / lower / about the same / higher / much higher).

Matching Pre and Post Responses

This is where survey platform choice matters enormously. With Google Forms, matching requires manual data work. With Plinth, responses are automatically linked to participant records, enabling instant pre-and-post comparison without manual matching.

A common problem: 30-50% of participants who complete a baseline survey do not complete the endpoint survey. Plan for this attrition by collecting baselines from everyone and actively encouraging endpoint completion. Tools that link surveys to participant records (like Plinth) make it easier to identify who has and has not completed their endpoint survey.
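For teams doing the matching manually (for example from spreadsheet exports), the core logic is a join on a shared participant ID followed by a per-person difference. A minimal Python sketch with illustrative IDs and scores:

```python
# Hypothetical exports keyed on a participant ID collected in both surveys.
pre = {"P01": 2, "P02": 3, "P03": 1, "P04": 2}  # baseline confidence (1-5)
post = {"P01": 4, "P03": 3, "P04": 4}           # endpoint; P02 dropped out

# Keep only participants with both a baseline and an endpoint response.
matched = {pid: (pre[pid], post[pid]) for pid in pre if pid in post}
changes = [after - before for before, after in matched.values()]

mean_change = sum(changes) / len(changes)
attrition = 1 - len(matched) / len(pre)
print(changes)            # [2, 2, 2]
print(mean_change)        # 2.0
print(f"{attrition:.0%}")  # 25%
```

The attrition figure falls straight out of the join: participants in the baseline but not the matched set are exactly the people to chase for endpoint completion.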


Step 5: Optimise Question Types

Likert Scales

Likert scales are the workhorse of outcome measurement. Use them for attitudes, beliefs, confidence, and subjective assessments.

Best practice:

  • Use 5-point scales for simplicity or 7-point scales for greater sensitivity to change
  • Always label all points, not just the endpoints (e.g., Strongly disagree / Disagree / Neither / Agree / Strongly agree)
  • Include a midpoint (neutral option) — research shows that removing the midpoint forces inaccurate responses rather than producing more decisive data
  • Use positive framing where possible to avoid confusion with reverse-coded items

Multiple Choice

Use multiple choice for factual, behavioural, or categorical questions.

Best practice:

  • Ensure response options are mutually exclusive and exhaustive
  • Include "Other (please specify)" when the list might not cover all possibilities
  • Limit to 7-8 options maximum — beyond this, consider a dropdown or ranking question
  • Randomise option order when there is no natural sequence, to reduce order bias
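Most survey platforms offer option randomisation as a built-in setting; where it has to be done by hand, the idea is simply an independent shuffle per respondent. A small sketch (the option labels are illustrative):

```python
import random

options = ["Budgeting workshop", "Savings club", "Debt advice", "Peer support"]

def randomised(options, seed=None):
    """Return a shuffled copy so option order varies between respondents,
    reducing primacy bias toward whatever happens to be listed first."""
    rng = random.Random(seed)
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled

# Seeding per respondent ID makes each person's order stable across page loads.
print(randomised(options, seed=1))
```

Keep "Other (please specify)" and "None of the above" pinned at the bottom rather than shuffled, since respondents expect them last.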

Open-Ended Questions

Open-ended questions capture the nuance that closed questions miss. They are essential for qualitative outcome evidence and are particularly powerful when analysed using Plinth's AI tools.

Best practice:

  • Limit to 2-4 open-ended questions per survey (more creates fatigue)
  • Place them at the end of the survey
  • Ask specific questions rather than general ones — "What has changed for you?" produces richer data than "Any comments?"
  • Provide a text box large enough to encourage substantive responses (at least 3-4 lines)

Matrix Questions

Matrix questions present multiple items with the same response scale in a grid format. They are efficient for measuring multiple related outcomes.

Best practice:

  • Limit to 5-7 items per matrix to avoid visual overwhelm
  • Ensure all items genuinely share the same response scale
  • On mobile devices, matrix questions can be difficult to complete — test thoroughly

Step 6: Pilot and Refine

Never deploy an outcome survey without piloting it first. Testing with real participants catches problems that desk review cannot identify.

Pilot process:

  1. Internal review: Have 2-3 colleagues complete the survey and provide feedback on clarity, flow, and length
  2. Participant pilot: Ask 5-10 programme participants to complete the survey while you observe. Note any questions they hesitate on, ask about, or skip
  3. Timing test: Record how long the pilot takes. If it exceeds 10 minutes, cut questions. Research consistently shows that surveys over 10 minutes suffer significant completion rate drops — each additional minute beyond 10 reduces completion by approximately 5-10%
  4. Revise and finalise: Address issues identified in piloting before full deployment

Common Mistakes to Avoid

Asking too many questions. The optimal survey length for charity outcome measurement is 10-15 questions, completable in 5-8 minutes. Every additional question reduces completion rates without proportionally increasing data value.

Measuring satisfaction instead of outcomes. "How satisfied were you with the programme?" tells you about service quality, not outcomes. Include satisfaction questions if you wish, but do not confuse them with outcome measures.

Changing questions between pre and post surveys. Outcome comparison requires identical questions at both time points. If you change the wording, scale, or question order, you cannot validly compare the responses.

Ignoring mobile completion. Over 60% of charity survey respondents now complete surveys on mobile devices. Test your survey on a phone before deployment. Complex matrix questions and long scales can be particularly problematic on small screens.

Not explaining why. Participants complete surveys at higher rates when they understand why their data matters. Include a brief introduction explaining how the data will be used to improve services and evidence impact.


Frequently Asked Questions

How many questions should an outcome survey have?

Aim for 10-15 questions for most charity outcome surveys. This typically allows for a validated wellbeing scale (4-7 items), 3-5 custom outcome questions, and 2-3 open-ended questions. Research shows this range optimises the balance between data richness and completion rates. If you need more data, consider splitting into multiple shorter surveys distributed at different points rather than one long survey.

Should I use the same survey for all programmes?

Use a core set of questions across all programmes (e.g., a standard wellbeing measure) supplemented with programme-specific outcome questions. This gives you organisation-wide data for strategic planning while capturing the specific outcomes each programme is designed to achieve. Plinth makes it easy to create survey templates with core questions and customisable programme-specific sections.

How do I handle participants who cannot complete surveys independently?

Some participants may face barriers to survey completion — literacy challenges, language barriers, cognitive difficulties, or digital exclusion. Options include: worker-assisted completion (the practitioner reads questions and records answers), translated versions, Easy Read versions, or alternative data collection methods (structured interviews, observation tools). Always offer alternatives rather than excluding people from your outcome data.

What response rate should I aim for?

Aim for at least 60% completion for both pre and post surveys, though 80%+ is ideal. Response rates below 50% introduce significant bias — the people who do not complete surveys may have systematically different outcomes from those who do. In-session completion (distributing surveys during programme sessions) consistently achieves the highest rates, typically 80-95%.

Can I use retrospective pre-post questions instead of actual baseline surveys?

Retrospective questions ("Looking back to before the programme, how would you rate your confidence?") are sometimes used as a shortcut when baseline data was not collected. While better than no comparison data, they are less reliable than actual pre-post measurement because participants' recall is influenced by their current state. Use retrospective questions as a supplement to, not a replacement for, genuine baseline measurement.


Conclusion

Effective outcome survey design is part science, part craft. By starting with your outcomes, choosing appropriate measurement tools, structuring your survey for maximum completion, and piloting with real participants, you can create surveys that produce the evidence your funders need and the insights your programmes need to improve.

Ready to build your outcome surveys? Plinth provides the survey builder, participant linking, and AI analysis to make outcome measurement practical for charities of all sizes. Book a demo to see it in action.



Last updated: February 2026

For more information about designing and deploying outcome surveys with Plinth, contact our team or schedule a demo.