If You Work in Programmes

A practical role guide for charity programme and delivery staff covering theory of change, outcomes measurement, safeguarding, funder reporting, and organisational tensions.

By Plinth Team

TL;DR: You do the work the charity exists to do. Everything else — the fundraising, the governance, the finance — is infrastructure that supports your delivery. But that does not mean the rest of the organisation exists to serve you. Understanding how your work connects to income, accountability, and strategy is what separates effective programme staff from people who resent every reporting request. The best programme workers are deeply skilled practitioners who also understand the organisational machine they operate within.

What this role optimises for

Programme delivery optimises for beneficiary outcomes. You are trying to make the biggest positive difference to the people or communities you serve, within the constraints of your funding, your capacity, and your evidence base.

Your secondary optimisation is learning and adaptation. The best programmes evolve based on what is actually happening, not just what the original proposal said would happen. This requires honest data collection, genuine reflection, and the confidence to tell funders when things are not going to plan.

This is harder than it sounds. Funders want good news. Boards want assurance. Fundraisers want compelling stories. Being honest about what is and is not working, while maintaining confidence in the programme's direction, is a skill that takes years to develop. The pressure to overstate success and understate problems is constant and corrosive.

The jargon you need to know

  • Theory of change: A structured model showing how your activities lead to short-term outputs, medium-term outcomes, and long-term impact. Not a one-off document — it should be a living framework that guides programme design and evaluation. If your theory of change sits in a drawer and only comes out for funding applications, it is not working.
  • Outputs vs outcomes: Outputs are what you do (sessions delivered, people seen, reports published). Outcomes are the changes that result (improved wellbeing, increased employment, reduced reoffending). Funders and evaluators care about the difference. So should you — because a programme that generates lots of outputs without outcomes is busy, not effective.
  • Case management: The process of supporting individual beneficiaries through a structured journey — assessment, planning, intervention, review. Often supported by a case management system (CMS or database). The quality of your case management data determines your ability to demonstrate impact, track safeguarding concerns, and report to funders.
  • Beneficiary data: Personal information about the people you work with. Subject to GDPR, safeguarding obligations, and ethical considerations about consent and power. Collecting it is necessary; being cavalier with it is unforgivable. Remember that for many beneficiaries, sharing personal information with an organisation involves real trust and real risk.
  • Safeguarding: The policies and practices that protect children and vulnerable adults from harm. In programme delivery, this is not a compliance checkbox — it is a daily operational reality. You need to know your organisation's procedures cold: who your designated safeguarding lead is, what triggers a referral, how to record concerns, what to do if the concern involves a colleague.
  • Restricted funding: Money tied to a specific project or activity. If your programme is funded by a restricted grant, you can only spend that money on the things specified in the grant agreement. This constrains how you adapt. If you realise halfway through that the beneficiaries need something different from what you proposed, you may need funder permission to change course — and not all funders are flexible.
  • Monitoring and evaluation (M&E): The systems by which you track what you are doing and assess whether it is working. Monitoring is ongoing data collection; evaluation is periodic assessment of effectiveness. Both are necessary and both take time that competes with delivery.
  • Logic model: A visual representation of your programme's theory of change, typically showing inputs, activities, outputs, outcomes, and impact in a linear or nested diagram. Useful for planning and communication, dangerous if it makes a complex programme look like a simple conveyor belt. For a sense of the underlying structure, see the sketch after this list.
  • Partnership working: Delivering services in collaboration with other organisations — referral partners, subcontractors, consortium members, informal allies. Sounds simple; involves navigating different organisational cultures, IT systems, reporting requirements, professional frameworks, and egos. Good partnerships take longer to set up than people expect and deliver more than people expect once they work.
  • Fidelity: The degree to which a programme is delivered as designed. Important for evidence-based interventions where the evidence depends on consistent delivery. If you are running a programme based on a specific model, drifting from that model means your outcomes data may not reflect what the evidence base predicted.
  • Informed consent: The process of ensuring beneficiaries understand what data you are collecting, why, and how it will be used — and that they have genuinely agreed to participate. In contexts involving vulnerability, power imbalance, limited literacy, or limited English, this requires real thought about accessibility, not just a form with a signature box.
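
To make the shape concrete, here is a logic model written out as plain data. This is a minimal sketch: the programme, inputs, and outcomes are invented for illustration, and real models are rarely this tidy.

```python
from dataclasses import dataclass


@dataclass
class LogicModel:
    """A logic model as plain data: each stage should plausibly feed the next."""
    inputs: list[str]       # resources you put in
    activities: list[str]   # what you do with them
    outputs: list[str]      # what that directly produces
    outcomes: list[str]     # what changes for participants
    impact: str             # the long-term change you are aiming at


# A hypothetical youth employability programme, invented for illustration.
model = LogicModel(
    inputs=["2 FTE youth workers", "restricted grant", "venue"],
    activities=["weekly group sessions", "1:1 mentoring", "employer visits"],
    outputs=["40 young people engaged", "200 sessions delivered"],
    outcomes=["improved confidence", "increased job readiness"],
    impact="sustained employment for young people facing barriers to work",
)

for stage in ("inputs", "activities", "outputs", "outcomes"):
    print(f"{stage}: {', '.join(getattr(model, stage))}")
print(f"impact: {model.impact}")
```

Writing it out this flatly has the same value as the diagram: if you cannot name a plausible outcome for an activity, that activity may be producing outputs and nothing more.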

The metrics that matter

The metrics depend on your programme, but the categories are consistent. Track reach (how many people you are working with), dosage (how much support each person receives), completion (how many finish the programme), outcomes (what changes for participants), and satisfaction (what participants think of the service).
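
To make the categories concrete, here is a minimal sketch computed from a handful of case records. The field names and data are invented for illustration; in practice the figures would come from your case management system.

```python
# Minimal sketch: the five metric categories computed from case records.
# Field names and values are invented for illustration.
participants = [
    {"id": 1, "sessions": 12, "completed": True,  "outcome_achieved": True,  "satisfaction": 4},
    {"id": 2, "sessions": 3,  "completed": False, "outcome_achieved": False, "satisfaction": 3},
    {"id": 3, "sessions": 9,  "completed": True,  "outcome_achieved": False, "satisfaction": 5},
]

reach = len(participants)                                          # how many people
dosage = sum(p["sessions"] for p in participants) / reach          # support per person
completion = sum(p["completed"] for p in participants) / reach     # who finished
outcomes = sum(p["outcome_achieved"] for p in participants) / reach  # what changed
satisfaction = sum(p["satisfaction"] for p in participants) / reach  # what they thought

print(f"reach={reach}, dosage={dosage:.1f} sessions, "
      f"completion={completion:.0%}, outcomes={outcomes:.0%}, "
      f"satisfaction={satisfaction:.1f}/5")
```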

Do not confuse activity with impact. Running 200 sessions is an output. Whether those sessions changed anything for anyone is the question that matters. At the same time, be realistic about what you can measure. Not every programme can run a randomised controlled trial, and claiming outcomes you cannot evidence is worse than being honest about uncertainty. "We believe this is working based on participant feedback, practitioner observation, and our professional judgement, but we have not yet been able to measure long-term outcomes rigorously" is a more credible statement than inflated claims backed by weak data.

Track your unit costs — cost per beneficiary, cost per outcome achieved. Your finance team and funders will want these numbers, and understanding them yourself gives you leverage in budget discussions. If you do not know your unit costs, someone else will calculate them for you, and they may not be kind about the result.
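
The arithmetic is simple, and worth doing yourself before anyone else does. A worked sketch with invented figures:

```python
# Worked example with invented figures: a programme costing £120,000 a year.
total_cost = 120_000       # full programme cost for the year
beneficiaries = 150        # people supported in the year
outcomes_achieved = 90     # people who achieved the target outcome

cost_per_beneficiary = total_cost / beneficiaries     # £800
cost_per_outcome = total_cost / outcomes_achieved     # about £1,333

print(f"cost per beneficiary: £{cost_per_beneficiary:,.0f}")
print(f"cost per outcome:     £{cost_per_outcome:,.0f}")
```

Which costs you include (direct delivery only, or a share of overheads) changes the result substantially, so agree the basis with your finance team before quoting a number.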

Watch for perverse incentives in your metrics. If you are measured on the number of people completing a programme, you have an incentive to screen out people who are harder to support. If you are measured on employment outcomes, you have an incentive to cherry-pick participants most likely to find work anyway. If you are measured on session attendance, you have an incentive to count heads rather than assess engagement. Good metrics acknowledge these tensions and build in protections against gaming.
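
One practical protection is to report paired metrics, so that gaming one number shows up in another. A minimal sketch, again with invented figures:

```python
# Invented figures: pairing completion rate with referral acceptance rate,
# so that screening out harder-to-support people shows up in the data.
referrals_received = 80
referrals_accepted = 60
completions = 48

acceptance_rate = referrals_accepted / referrals_received   # 75%
completion_rate = completions / referrals_accepted          # 80%

# A completion rate that rises while the acceptance rate falls suggests
# cream-skimming rather than a genuinely improving programme.
print(f"acceptance: {acceptance_rate:.0%}, completion: {completion_rate:.0%}")
```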

What you will spend your time on

Direct delivery — working with beneficiaries, running sessions, managing cases, facilitating groups, conducting assessments, making referrals, following up. This is the core of the role and should take the majority of your time, but it almost certainly will not take as much of your time as you would like.

Around that: recording data and writing case notes (thorough case notes are not bureaucracy — they are professional practice and safeguarding protection), preparing funder reports, attending team meetings and supervision, contributing to monitoring and evaluation processes, managing partnerships and referral relationships.

If you are a programme manager, add budget management, staff supervision and development, strategic planning, managing funder relationships, contributing to organisational strategy, and a significant amount of time spent in meetings about things that feel tangential to delivery but determine whether delivery can happen.

You will also spend time explaining your work to people who do not do it. Fundraisers need stories and data for proposals. Trustees need assurance that the money is being well spent. Finance needs evidence that restricted funds are being used correctly. The CEO needs headline numbers for the annual report. Treating these requests as distractions from "real work" is understandable but counterproductive. The fundraiser who cannot tell your story will not raise the money for your next grant. The trustee who does not understand your outcomes cannot defend your programme when budgets are cut.

Safeguarding takes time that does not show up in any work plan. Concerns arise unpredictably. Responses must be immediate. Recording must be thorough. If you are a designated safeguarding lead, this responsibility sits on top of everything else and takes priority over all of it when it is triggered.

What people in this role often misunderstand about the rest of the organisation

Fundraisers need case studies and data because that is how they secure the money that pays your salary. When a fundraiser asks for a beneficiary quote or an outcomes summary, they are not extracting value from your work for their own purposes — they are building the case that keeps your programme funded. Make it easy for them. Agree a data-sharing process at the start of the year, not in a panic before a deadline. Set up a system for collecting case studies (with proper consent) throughout the year, so you have a bank to draw from.

Better yet, involve fundraisers in your work. Invite them to a session (with appropriate consent from participants). Help them understand what the programme looks like in practice, what the challenges are, and what success actually feels like. The proposals they write will be better for it, and so will the funding they secure.

Finance tracks restricted income so carefully because misusing it is illegal. If a funder gave £30,000 for your youth programme and you spend some of it on an unrelated staff training day, that is a breach of trust law. It does not matter that the training was useful. It does not matter that the youth programme is going well. The money was given for a specified purpose and must be spent on that purpose. When the finance team asks you to code expenses correctly or queries a purchase, they are protecting the organisation. The form-filling is irritating; the alternative — a Charity Commission investigation, repayment demands from funders, reputational damage — is worse.
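
The coding discipline finance asks for boils down to a simple check. A hypothetical sketch: the fund code, allowed categories, and rule here are invented for illustration, and real finance systems are considerably more involved.

```python
# Hypothetical sketch of why expense coding matters for restricted funds.
# Fund codes and allowed categories are invented for illustration.
RESTRICTED_FUNDS = {
    "YOUTH-2025": {"youth sessions", "youth worker salary", "session venue hire"},
}

def check_expense(fund_code: str, category: str) -> bool:
    """Return True if the expense category is allowed under the fund's terms."""
    allowed = RESTRICTED_FUNDS.get(fund_code)
    return allowed is None or category in allowed  # unknown code: treated as unrestricted

print(check_expense("YOUTH-2025", "youth worker salary"))  # True
print(check_expense("YOUTH-2025", "staff training day"))   # False: a breach
```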

Governance decisions affect your delivery more than you might realise. When the board sets the risk appetite, approves the strategy, or signs off a reserves policy, those decisions cascade into what programmes get funded, how much flexibility you have, how many staff you can employ, and what happens if income drops. A board decision to build reserves might mean your programme loses a post. A board decision to diversify income might mean your programme gets expanded. Understanding how governance works helps you influence it constructively — through well-written programme reports, through invitations to site visits, through clear articulation of what your programme needs — rather than being surprised by decisions that feel arbitrary.

The debates that affect your work

The debate about impact measurement burden is directly relevant to your daily life. The sector is grappling with whether the cumulative weight of monitoring, evaluation, and reporting requirements from multiple funders is proportionate to the accountability it provides — or whether it diverts staff time from the delivery it is supposed to be measuring.

This connects to broader questions about what counts as evidence, who decides, and whether the push for quantified outcomes marginalises the kinds of change that are hardest to measure — shifts in confidence, strengthened relationships, a sense of belonging, the prevention of something that did not happen. These are not excuses for avoiding measurement. They are genuine methodological challenges that deserve honest engagement.

What to read next