How to Standardise Impact Reporting Across Programmes

Practical approaches for funders to collect consistent impact data across multiple grant programmes with different outcomes frameworks, without sacrificing flexibility.

By Plinth Team

Standardising impact reporting across multiple grant programmes is one of the most persistently difficult challenges in the grantmaking sector. Each programme has its own theory of change, its own funder requirements, and its own grantee population — and yet funders need consistent data to compare performance, demonstrate portfolio-level impact, and avoid imposing contradictory or duplicative demands on grantees. Doing it well requires a distinction that many organisations miss: standardising the architecture of reporting without standardising the content.

The distinction matters because no two programmes have identical outcomes. A mental health grant programme and a community sports programme will never share the same primary indicators. But they can share the same data structure — the same questions about how data was collected, the same categories for beneficiary demographics, the same format for narrative evidence — and that structural consistency is what makes aggregation and comparison possible.

This guide sets out a practical framework for achieving that balance: common enough to enable portfolio-level analysis, flexible enough to capture what actually matters in each programme.

Why Standardisation Is Hard (and Why It Matters Anyway)

The grantmaking sector has been trying to standardise impact reporting for decades without fully solving the problem. The reason is a genuine tension between two legitimate needs.

On the funder side: boards, trustees, and senior leadership need to understand what their grants portfolio is collectively achieving. When ten different programmes all produce differently structured reports, no one can add up the totals or compare performance across them. A grants director cannot tell a board that the portfolio collectively reached 50,000 beneficiaries if each programme counted and categorised beneficiaries differently.

On the grantee side: organisations working in very different contexts resist being forced into reporting templates that do not fit their work. A homelessness charity and a youth employment programme measure fundamentally different things. Forcing them into a common indicator set either produces meaningless data (if indicators do not reflect the actual programme) or distorts programme design (if organisations change what they do to generate reportable numbers).

The solution is not to choose one need over the other — it is to separate what should be standardised from what should remain flexible.

According to research from Social Value UK, impact reporting in the UK charity sector varies enormously in quality and format, with many organisations reporting against funder requirements rather than against their own theory of change, a misalignment that reduces the quality of evidence for everyone.

The Three-Layer Model for Standardised Reporting

A workable approach to cross-programme standardisation involves three layers, each with a different level of flexibility.

Layer 1: Universal Standards (Non-Negotiable)

These apply to every grant in every programme and make portfolio-level aggregation possible. They include:

  • Beneficiary demographics: consistent categories for age group, gender, ethnicity, disability status, and postcode or area. These categories should align with ONS standards so that funder data can be compared against population data.
  • Delivery data: consistent definitions for "session," "contact," "beneficiary," and "unique beneficiary." Without agreed definitions, one programme's 1,000 beneficiaries might represent 1,000 unique individuals while another's represents 200 people attending five sessions each.
  • Financial reporting format: a common structure for reporting expenditure against budget, even if budget categories vary.
  • Reporting cadence: a consistent rhythm (quarterly, six-monthly, annual) for each tier of grant, regardless of programme content.
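The "session" versus "unique beneficiary" distinction above is the kind of definitional problem that breaks aggregation silently. A minimal sketch (with hypothetical attendance records) shows how the two counts diverge:

```python
# Illustrative only: why agreed definitions matter when counting beneficiaries.
# The records and field names here are hypothetical.

attendance = [
    {"person_id": "p1", "session": "s1"},
    {"person_id": "p1", "session": "s2"},  # same person, second session
    {"person_id": "p2", "session": "s1"},
]

# "Contacts": every attendance is counted, repeat visitors included.
total_contacts = len(attendance)

# "Unique beneficiaries": each individual is counted once, however
# many sessions they attend.
unique_beneficiaries = len({record["person_id"] for record in attendance})

print(total_contacts)        # 3
print(unique_beneficiaries)  # 2
```

Two programmes reporting "beneficiaries" without agreeing which of these two numbers they mean cannot be meaningfully added together.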

Layer 2: Programme-Level Common Indicators

Within each programme or thematic area, a set of three to six indicators that all grantees in that programme report against. These are agreed at programme design stage and reflect the shared theory of change. For example, a financial inclusion programme might require all grantees to report: number of individuals receiving advice, number of debt cases resolved, and change in financial wellbeing score (using a validated tool). Different grantees within the programme will deliver differently, but these shared indicators allow performance comparison within the programme cohort.

Layer 3: Organisation-Specific Indicators

Each grantee agrees two to four additional indicators that reflect their particular approach, context, or innovation. These are not aggregated across the programme but form part of the learning record for that specific grant. They allow organisations to report on what is distinctive about their work without being constrained by the programme framework.
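The three layers can be sketched as a simple data structure. This is a hypothetical illustration of the model described above, not any real platform's schema; the indicator names are invented examples from a financial inclusion programme:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    layer: str  # "universal", "programme", or "organisation"

@dataclass
class GrantReportTemplate:
    """One grant's reporting template, assembled from the three layers."""
    universal: list      # Layer 1: identical across the whole portfolio
    programme: list      # Layer 2: shared by all grantees in the programme
    organisation: list   # Layer 3: agreed per grantee at award stage

    def all_indicators(self):
        return self.universal + self.programme + self.organisation

universal = [Indicator("beneficiary_demographics", "universal"),
             Indicator("unique_beneficiaries", "universal")]
programme = [Indicator("debt_cases_resolved", "programme"),
             Indicator("financial_wellbeing_score", "programme")]
organisation = [Indicator("peer_mentor_hours", "organisation")]

template = GrantReportTemplate(universal, programme, organisation)
print(len(template.all_indicators()))  # 5
```

Only Layer 1 fields are aggregated across the portfolio; Layer 2 fields are compared within the programme cohort; Layer 3 fields feed the learning record for the individual grant.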

Comparison: Approaches to Cross-Programme Standardisation

  • Single universal indicator set. Advantages: easy to aggregate; simple for grantees. Disadvantages: forces inappropriate indicators on diverse programmes; poor data quality. Best for: highly homogeneous programme portfolios.
  • Programme-level indicator sets. Advantages: good within-programme comparability; grantee buy-in. Disadvantages: difficult to compare across programmes. Best for: funders with a few large programmes.
  • Three-layer model (universal + programme + org). Advantages: cross-programme aggregation plus programme-level comparison; respects diversity. Disadvantages: requires more upfront design; technology support needed. Best for: most funders with multiple programmes.
  • Fully grantee-led reporting. Advantages: maximum grantee autonomy; high-quality contextual data. Disadvantages: impossible to aggregate; no cross-programme comparison. Best for: trust-based funders with a learning focus only.
  • Shared outcomes vocabulary (e.g. National TOMs). Advantages: common language across the sector; benchmarking possible. Disadvantages: may not fit all programme areas; external dependency. Best for: funders aligned with specific sector frameworks.

The Role of Shared Outcomes Vocabularies

Several UK-wide outcomes frameworks have attempted to provide a common vocabulary that multiple funders can use, reducing the burden on grantees who report to multiple funders using different frameworks. The most widely used in the UK include:

National TOMs (Themes, Outcomes and Measures): Originally developed for social value measurement in public procurement, TOMs has been adopted by some grant funders, particularly in housing and local authority contexts. It provides a standardised set of outcomes with pre-defined measures.

NCVO's outcomes bank: NCVO has historically maintained a library of outcome statements that funders and charities can use to describe their intended changes, providing a common vocabulary without prescribing specific measurement approaches.

Esmée Fairbairn Foundation's approach: Esmée asks grantees to report against up to three outcomes they believe they can achieve by end of grant, together with indicators. This grantee-led approach within a structured framework produces consistent data structure while allowing flexibility in content.

The challenge with all shared vocabularies is that they reflect the priorities of whoever designed them. A vocabulary developed for health and social care may not translate well to arts and culture or environmental programmes. The most pragmatic approach for most funders is to adopt a shared vocabulary within each thematic area while maintaining consistent structural standards across all themes.

SORP 2026 and the New Tiered Reporting Framework

The revised Charities Statement of Recommended Practice (SORP 2026), effective for accounting periods beginning on or after 1 January 2026, introduces a tiered approach to impact and outcome reporting for charities (ICAEW, 2025):

  • Tier 1 (income up to £500,000): summary of main achievements during the year.
  • Tier 2 (income £500,000–£15 million): explanation of impact, including long-term effects on beneficiaries and society; summary of measures or indicators used; outputs achieved.
  • Tier 3 (income over £15 million): Tier 2 requirements plus review of fundraising performance and assessment of return on investment.
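The tier thresholds described above reduce to a simple lookup. This sketch encodes the bands as stated in this guide; the SORP itself remains the authoritative source for the exact rules:

```python
def sorp_tier(annual_income_gbp: float) -> int:
    """Return the SORP 2026 reporting tier for a given annual income,
    using the thresholds as described in the accompanying text."""
    if annual_income_gbp <= 500_000:
        return 1  # summary of main achievements
    if annual_income_gbp <= 15_000_000:
        return 2  # impact explanation, measures used, outputs achieved
    return 3      # Tier 2 plus fundraising review and ROI assessment

print(sorp_tier(300_000))     # 1
print(sorp_tier(2_000_000))   # 2
print(sorp_tier(20_000_000))  # 3
```

A funder calibrating reporting asks to grantee size could use the same banding to decide which evidence tier a given grantee should default to.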

For funders, this tiered structure is directly relevant to proportionate reporting design. It provides a regulatory rationale for asking smaller grantees for less detailed evidence, and a sector-wide standard against which funder reporting requirements can be calibrated. Funders whose requirements exceed what SORP 2026 demands of an organisation at a given income level should be able to justify the additional ask clearly.

Practical Steps to Implement Standardised Reporting

Step 1: Audit your current reporting landscape. Before designing a new framework, document what each of your current programmes asks grantees to report. Map the indicators, reporting formats, and frequencies side by side. This audit almost always reveals significant duplication, contradiction, and unnecessary variation.

Step 2: Identify universal standards. From the audit, extract the elements that every programme already collects (or should collect): beneficiary demographics, contact and session counts, financial reporting structure. Agree common definitions for these across all programmes.

Step 3: Work with programme staff to agree programme-level indicators. For each thematic area or programme, convene programme officers and a sample of grantees to agree a shared indicator set. Three to six indicators is usually sufficient. Test these against the programme theory of change: do they measure what actually matters?

Step 4: Build flexibility in for grantee-specific indicators. Include space in every report for two to four additional indicators that the grantee proposes and that the programme officer approves at award stage.

Step 5: Invest in technology that separates structure from content. Spreadsheets and generic form tools are poor at implementing layered reporting frameworks because they conflate structure and content. A dedicated grant management platform allows a funder to configure universal fields that appear on every form, programme-specific fields that appear only on relevant forms, and grantee-specific fields that vary by award — all within a single system that aggregates data consistently across the portfolio.

Technology Approaches: What to Look For

The technical implementation of cross-programme standardisation is where many initiatives fail. Common failure modes include:

  • Building separate reporting tools for each programme, making cross-programme aggregation impossible without manual work.
  • Using a single generic form for all programmes, losing programme-specific nuance.
  • Relying on narrative reports that cannot be systematically analysed.
  • Collecting data in formats that cannot be exported or integrated with other systems.

Effective platforms allow funders to define a common data schema (universal fields) and then extend it with programme-specific and grantee-specific fields — so that all data is collected in a consistent structure even when the specific questions vary. This is the approach taken by Plinth's reporting module, which enables funders to configure workplan and KPI frameworks at grant level while collecting data in a consistent structure across the portfolio, making cross-programme reporting possible without additional manual processing. You can explore how this works at Plinth's AI grant management page.
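"Separating structure from content" can be made concrete with a small sketch: a report form assembled from layered field definitions. The field names and merge logic here are hypothetical illustrations of the general approach, not Plinth's actual implementation:

```python
# Layer 1: universal fields that appear on every report form.
UNIVERSAL_FIELDS = ["reporting_period", "demographics", "expenditure"]

# Layer 2: fields that appear only on forms for the relevant programme.
PROGRAMME_FIELDS = {
    "financial_inclusion": ["individuals_advised", "debt_cases_resolved"],
    "community_sports": ["sessions_delivered", "weekly_active_participants"],
}

def build_form(programme: str, grantee_fields: list) -> list:
    """Assemble one grant's form: shared structure, variable content.
    Layer 3 (grantee_fields) is agreed per award."""
    return UNIVERSAL_FIELDS + PROGRAMME_FIELDS.get(programme, []) + grantee_fields

form = build_form("financial_inclusion", ["peer_mentor_hours"])
print(form[:3])  # the universal fields lead every form, enabling aggregation
```

Because every form begins with the same universal fields, portfolio-level queries can run across all programmes without manual reconciliation, while the programme- and grantee-specific tails preserve the content flexibility the earlier sections describe.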

See also Real-Time Impact Dashboards for Funders for how standardised data feeds into portfolio-level monitoring.

Managing Grantee Resistance

Standardisation initiatives frequently generate resistance from grantees, and often for good reason. Grantees who have invested in their own monitoring and evaluation systems resent being asked to translate their data into a funder's format. Those working in areas with well-established measurement frameworks (mental health, housing) may find a generic funder framework a poor fit with sector-specific tools they already use.

The most effective way to manage this is through early co-design and explicit explanation. Tell grantees why the framework exists, what the data will be used for, and how their input shaped it. Where possible, align funder indicators with tools grantees already use — the Outcomes Star, validated wellbeing scales, standardised distance-travelled measures — rather than inventing new ones.

NCVO's Power of Small research found that grantees are far more likely to accept proportionate, well-explained reporting requirements when funders demonstrate that they actually use and act on the data they collect (NCVO, 2025). The corollary is that requirements which appear disconnected from funder learning generate the most resistance.

Cross-Programme Learning and the Value of Consistent Data

The strongest argument for standardisation is not administrative efficiency but learning. When data from ten programmes is in a consistent format, a funder can ask questions that would be impossible otherwise: Which beneficiary groups are being reached across the portfolio? Are outcomes better in programmes that use a particular delivery model? Is there a relationship between grant size and outcomes achieved per pound?
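The first of those questions, which beneficiary groups the portfolio reaches, is a straightforward aggregation once every programme reports the same fields. A minimal sketch with invented numbers:

```python
from collections import defaultdict

# Hypothetical report rows. Aggregation is only possible because every
# programme reports the same universal fields with the same definitions.
reports = [
    {"programme": "mental_health",    "age_group": "18-24", "unique_beneficiaries": 120},
    {"programme": "community_sports", "age_group": "18-24", "unique_beneficiaries": 300},
    {"programme": "community_sports", "age_group": "65+",   "unique_beneficiaries": 80},
]

# Portfolio-level reach by age group, across all programmes.
reach_by_age = defaultdict(int)
for row in reports:
    reach_by_age[row["age_group"]] += row["unique_beneficiaries"]

print(dict(reach_by_age))  # {'18-24': 420, '65+': 80}
```

If one programme had counted contacts rather than unique beneficiaries, the same sum would run without error but produce a misleading total, which is why the universal-definitions layer matters more than any analysis tooling.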

This kind of cross-programme analysis is increasingly expected by foundations' own boards and donors, and by the sector more broadly. The new SORP 2026 framework implicitly endorses it by requiring larger charities to assess "the actual or intended return on the funds invested." Funders who collect consistent data across programmes are better positioned to demonstrate that return — and to improve it over time.

For more on impact data collection, see How Charities Struggle to Collect Impact Data and What Is Impact Measurement.

Frequently Asked Questions

Why is standardising impact reporting across programmes so difficult?

The core difficulty is the tension between two legitimate needs: funders need consistent data to aggregate across programmes, and grantees need reporting that reflects the reality of their particular work. Programmes in different thematic areas have genuinely different outcomes that cannot all be captured by a single indicator set. The solution is to standardise data architecture (structure, demographics, definitions) while allowing flexibility in content (specific indicators and measures).

What is a common outcomes framework?

A common outcomes framework is an agreed set of outcome categories, indicators, and definitions that multiple grantees or programmes report against. It makes cross-grantee and cross-programme comparison possible. Examples in the UK include National TOMs and NCVO's outcomes bank. Most large funders develop their own frameworks for their core programme areas rather than adopting off-the-shelf approaches wholesale.

How many indicators should grantees report against?

Research and practitioner guidance consistently suggest that three to six indicators per programme is sufficient for most grant sizes. More indicators rarely produce better evidence — they produce more data points, many of which are never analysed. A small number of well-chosen indicators is worth more than a large number of poorly understood ones.

How does SORP 2026 affect reporting requirements?

The revised Charities SORP, effective from January 2026, introduces tiered outcome reporting for charities based on income. Tier 1 organisations need only a summary of achievements; Tier 2 organisations must explain their impact and the measures used; Tier 3 organisations face additional requirements including return-on-investment assessment. Funders whose requirements exceed SORP obligations for smaller grantees should be able to justify the additional ask explicitly.

Can grantees use their existing monitoring tools rather than funder-specific forms?

Where possible, yes — and funder frameworks should be designed to accommodate data from standard sector tools (Outcomes Star, WEMWBS, PHQ-9, etc.) rather than requiring grantees to collect the same information twice. Where a grantee already collects relevant data in a compatible format, funders should accept data exports rather than requiring separate form completion.

What is the three-layer model for standardised reporting?

The three-layer model separates universal standards (non-negotiable cross-portfolio requirements like consistent demographic categories and delivery definitions), programme-level common indicators (three to six shared indicators for all grantees within a programme), and organisation-specific indicators (two to four custom indicators agreed at award stage for each grantee). This allows cross-programme aggregation at the universal level, within-programme comparison at the programme level, and individual learning at the grantee level.

How should funders handle grantees who resist standardised reporting?

Through early co-design, transparent explanation of why data is needed and how it will be used, and calibrating requirements to grant size. Resistance is often a signal that a requirement is poorly explained or genuinely disproportionate, not that grantees are unwilling to be accountable. See Proportionate Evaluation for Small Grants for approaches to right-sizing requirements.

What technology is needed to support cross-programme standardisation?

A grant management platform that supports configurable, layered reporting forms — with universal fields shared across all programmes, programme-specific fields, and grantee-specific fields — and that aggregates data consistently regardless of which fields are populated. Spreadsheets and generic form tools are poorly suited to this because they conflate structure and content.

Last updated: February 2026