How to Collect Impact Data Without Overburdening Charities

Practical approaches to proportionate impact monitoring that reduce admin for charities while keeping funders accountable. Includes templates, methods, and AI tools.

By Plinth Team

Impact data is the evidence that tells funders, trustees, and the public whether charitable work is making a difference. Collecting it is not optional. But the way most of the sector collects it is broken. Too many funders ask for too much information, in bespoke formats, at arbitrary intervals, from organisations that are already stretched to breaking point.

According to IVAR's research on open and trusting grantmaking, charities are routinely asked to repackage similar information for different funders, on different dates, with different word counts and in different formats (IVAR, Better Reporting). The result is that frontline staff spend hours on paperwork instead of delivering services, and the data that emerges is often rushed, incomplete, or shaped to tell funders what they want to hear rather than what is actually happening.

The irony is that overburdening charities with reporting requirements produces worse data, not better. When monitoring feels like a compliance exercise, organisations treat it as one. When it is designed to be proportionate, useful, and embedded in the work itself, the data improves and so does the programme.

This guide is for funders who want rigorous impact data without creating unnecessary admin, and for charities looking for practical ways to streamline their own data collection. The two goals are not in conflict. With the right approach, less burden produces better evidence.

What you will learn:

  • Why excessive monitoring requirements harm both data quality and charity capacity
  • How to design proportionate monitoring that matches grant size and risk
  • What "collect once, report many times" looks like in practice
  • Which data collection methods reduce burden on frontline staff
  • How AI tools can automate the routine while improving the evidence

Who this is for: Grantmakers designing monitoring frameworks, charity managers implementing data collection systems, programme officers overseeing grant portfolios, and monitoring and evaluation staff looking for practical ways to improve data quality without increasing workload.


Why Does Overburdening Charities Produce Worse Data?

The assumption behind heavy reporting requirements is straightforward: more questions, more data, better accountability. In practice, the opposite is true. When reporting becomes a burden, the quality of the information drops.

There are over 170,000 registered charities in England and Wales (Charity Commission register, 2024). The vast majority are small organisations. According to the NCVO UK Civil Society Almanac 2024, 80% of voluntary organisations have an income below 100,000 pounds, yet these organisations account for just 3% of the sector's total income (NCVO Almanac 2024). They are the organisations least able to absorb complex monitoring requirements, and the ones most likely to be damaged by them.

When frontline staff are asked to fill in lengthy monitoring forms after every session, several things happen. First, the forms get completed in a rush, often at the end of a long day, with less care and accuracy than anyone would like. Second, staff begin to see monitoring as an obstacle rather than a useful part of their work, which means compliance drops over time. Third, the data that does arrive is often shaped by what staff think the funder wants to hear, rather than an honest account of what happened.

The Foundation Practice Rating 2024 assessed 100 UK foundations on transparency, accountability, and diversity. Only 11 received an A overall, and accountability scores were notably lower than transparency scores, with 29% of foundations scoring a D for accountability (Foundation Practice Rating 2024). Accountability includes how funders gather, use, and respond to evidence from grantees, so better monitoring design is part of the solution.

What Does Proportionate Monitoring Look Like?

Proportionate monitoring is not about lowering standards. It is about right-sizing the ask to the grant. IVAR's Open and Trusting initiative, now supported by over 170 UK funders making grants worth over 1 billion pounds annually, commits signatories to ensuring that reporting requirements are "well understood, proportionate and meaningful" (IVAR, Open and Trusting).

In practice, proportionate monitoring means applying different reporting expectations based on grant size, risk, and complexity. The table below illustrates one common framework.

| Grant size | Reporting frequency | Typical format | Depth of evidence |
| --- | --- | --- | --- |
| Under 5,000 pounds | End-of-grant only | Short narrative (500 words) plus 2-3 key numbers | Outputs only (e.g. people reached, sessions delivered) |
| 5,000 to 25,000 pounds | Mid-point check-in plus final report | Structured template (1-2 pages) | Outputs plus short outcome narrative |
| 25,000 to 100,000 pounds | Quarterly or six-monthly updates plus final report | Standardised monitoring form plus financial summary | Outputs, outcomes, and 1-2 case studies |
| Over 100,000 pounds | Quarterly updates, annual review, final evaluation | Detailed report with data tables | Full outcome framework, case studies, and evaluation |
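
To make the tiers concrete, here is a minimal sketch of how a grants team might encode them in its own systems. The thresholds mirror the table above; the function and field names are illustrative, not a standard.

```python
from dataclasses import dataclass

@dataclass
class ReportingTier:
    frequency: str
    report_format: str
    evidence: str

def reporting_tier(grant_size_gbp: float) -> ReportingTier:
    """Map a grant size to a proportionate reporting tier.

    Thresholds mirror the table above; adjust to your own framework.
    """
    if grant_size_gbp < 5_000:
        return ReportingTier(
            "End of grant only",
            "Short narrative (500 words) plus 2-3 key numbers",
            "Outputs only",
        )
    if grant_size_gbp <= 25_000:
        return ReportingTier(
            "Mid-point check-in plus final report",
            "Structured template (1-2 pages)",
            "Outputs plus short outcome narrative",
        )
    if grant_size_gbp <= 100_000:
        return ReportingTier(
            "Quarterly or six-monthly updates plus final report",
            "Standardised monitoring form plus financial summary",
            "Outputs, outcomes, and 1-2 case studies",
        )
    return ReportingTier(
        "Quarterly updates, annual review, final evaluation",
        "Detailed report with data tables",
        "Full outcome framework, case studies, and evaluation",
    )

print(reporting_tier(12_000).frequency)  # Mid-point check-in plus final report
```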

The key principle is that every piece of information you ask for should serve a clear purpose. If you are not going to read it, do not ask for it. If you already have it from the application, do not ask again. If it duplicates what another funder is collecting, consider whether you can accept the same report.

IVAR's Better Reporting research found that charities are often asked to produce bespoke reports for individual funders when the underlying information is substantially the same. The six principles developed by IVAR and the Esmée Fairbairn Foundation include a core commitment that funders should only ask for information they will actually need and use (IVAR, Better Reporting).

How Can Funders Reduce Duplication Across Grants?

One of the largest sources of unnecessary burden is duplication. A charity with five active grants may need to produce five separate monitoring reports covering largely the same work, each in a different format, at different intervals, measuring slightly different things. The administrative cost is significant.

The 360Giving Data Standard offers one route to reducing this duplication. Over 300 UK grantmakers now publish their grants data using the standard, covering more than 1 million grants worth over 300 billion pounds (360Giving, 2025). When funders share data openly, it becomes easier to see what other funding an organisation holds and what has already been reported elsewhere.

Practical steps funders can take to reduce duplication:

  1. Accept existing reports. If a charity has already produced a monitoring report for another funder covering the same programme, accept it. Ask a supplementary question or two if needed, but do not require a fresh document.

  2. Align reporting timelines. If several funders are supporting the same organisation, coordinate reporting dates so that the charity can produce one round of data and share it with all relevant funders.

  3. Use standardised outcome measures. Where sector-wide outcome frameworks exist (such as the Outcomes Star), use them. Standardisation reduces the cognitive burden on frontline staff and makes data comparable across grants.

  4. Pre-populate monitoring forms. Pull forward information from the original application so that grantees are only asked to report on what has changed, not re-enter what you already know. A sketch of this appears after the list.

  5. Share monitoring reports with other funders. With the grantee's consent, share completed reports with co-funders so that the charity does not have to produce multiple versions.
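
As a sketch of what pre-population (step 4) can look like in practice, the snippet below fills a monitoring form from application data so the grantee only completes what is new. The field names are illustrative, not a standard.

```python
# Fields carried forward from the application; the grantee confirms
# or updates them rather than re-entering everything.
CARRY_FORWARD = ["organisation_name", "project_title", "intended_outcomes"]

def prepopulate_form(application: dict, form_fields: list[str]) -> dict:
    """Build a monitoring form with known values already filled in."""
    return {
        field: application.get(field, "") if field in CARRY_FORWARD else ""
        for field in form_fields
    }

application = {
    "organisation_name": "Example Community Trust",
    "project_title": "Youth mentoring programme",
    "intended_outcomes": "Improved confidence and school attendance",
}
form = prepopulate_form(
    application,
    ["organisation_name", "project_title", "intended_outcomes",
     "what_has_changed", "key_numbers"],
)
# Only the last two fields arrive blank for the grantee to complete.
```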

These are not radical suggestions. They are basic operational hygiene. Yet many funders have not implemented them because their internal processes were designed around individual grants rather than the grantee's experience.

What Data Collection Methods Reduce Burden on Frontline Staff?

The most effective way to reduce burden is to change how data is captured. Traditional monitoring relies on manual data entry: a staff member opens a spreadsheet, types in numbers, writes a narrative, and submits. This is slow, error-prone, and feels like admin.

NPC's guidance on impact measurement for small charities recommends prioritising a small number of meaningful outcomes rather than trying to measure everything, and using lightweight collection methods that fit naturally into existing workflows (NPC, Keeping It In Proportion). The Charity Digital Skills Report 2024 found that 31% of charities describe themselves as poor at or not engaging with collecting, managing, and using data (Charity Digital Skills Report 2024).

Methods that genuinely reduce frontline burden include:

  • Short SMS or messaging check-ins. A three-question survey sent by text message after a session takes 30 seconds to complete, compared to 10-15 minutes for a web form (a sketch follows this list).
  • Photo-based evidence. Staff take a photo of an activity or event, which serves as both evidence and a record. No typing required.
  • Paper registers with digital capture. Many frontline settings still use paper sign-in sheets because they are fast and familiar. The bottleneck is not the paper; it is the manual transcription into a database afterwards. Photographing the sheet for digital capture keeps the speed and removes the typing.
  • Voice-recorded reflections. A support worker records a two-minute voice note about a session rather than writing a 500-word summary. The recording captures nuance and context that written notes often miss.
  • Routine outcome measures embedded in sessions. Tools like the Outcomes Star or Warwick-Edinburgh Mental Wellbeing Scale are designed to be completed by or with beneficiaries as part of a session, not as homework afterwards.
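
To show how light the SMS check-in from the first bullet can be, here is a sketch using Twilio's Python client. The library choice, phone numbers, and question wording are all assumptions; any messaging provider would work.

```python
# pip install twilio
from twilio.rest import Client

# Credentials and numbers below are placeholders.
client = Client("ACCOUNT_SID", "AUTH_TOKEN")

QUESTIONS = (
    "Thanks for coming today! Quick check-in, reply with three answers:\n"
    "1) How useful was the session, 1-5?\n"
    "2) Do you feel more confident than last week? (yes/no)\n"
    "3) Anything we should change?"
)

def send_checkin(participant_number: str) -> None:
    """Send the three-question check-in as a single text message."""
    client.messages.create(
        to=participant_number,
        from_="+441234567890",  # your service's number
        body=QUESTIONS,
    )
```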

The common thread is that data collection happens inside the work, not after it. When capturing evidence is part of running a session rather than a separate administrative step, compliance increases and data quality improves.

How Can AI Remove the Data Entry Bottleneck?

The methods above reduce burden but do not eliminate the underlying problem: someone still has to turn raw information into structured data. A photo of a sign-in sheet is evidence, but it is not attendance data until someone types the names into a system. A voice note is rich in detail, but it is not a case study until someone transcribes and edits it.

This is where AI makes a practical difference. Not in the speculative, futuristic sense, but in solving specific, mundane bottlenecks that waste hours of staff time every week.

Photograph a paper register, get structured attendance data. Many charities use paper sign-in sheets because they are fast and accessible. The problem comes afterwards, when someone has to manually enter those names into a database. AI-powered optical character recognition can extract handwritten names from a photographed register, match them against a membership list, and flag any uncertain readings for human review. The data entry step, which might take 20 minutes per session, is reduced to a 30-second photograph and a quick confirmation.
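
The matching and flagging step is simpler than it sounds. Here is a minimal sketch using Python's standard-library difflib; in a real pipeline the input names would come from an OCR or vision model, and the membership list and confidence threshold below are illustrative.

```python
from difflib import SequenceMatcher

MEMBERS = ["Amina Khan", "John O'Brien", "Priya Patel", "Sam Whittaker"]
CONFIDENCE_THRESHOLD = 0.8  # below this, a human confirms the match

def match_name(ocr_name: str, members: list[str]) -> tuple[str | None, float]:
    """Match one OCR'd name against the membership list.

    Returns the best candidate and its similarity score; callers flag
    anything under the threshold for human review.
    """
    best, best_score = None, 0.0
    for member in members:
        score = SequenceMatcher(None, ocr_name.lower(), member.lower()).ratio()
        if score > best_score:
            best, best_score = member, score
    return best, best_score

for raw in ["Amina Kahn", "J. OBrien", "Priya Patel"]:
    candidate, score = match_name(raw, MEMBERS)
    flag = "" if score >= CONFIDENCE_THRESHOLD else "  <- needs review"
    print(f"{raw!r} -> {candidate!r} ({score:.2f}){flag}")
```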

Voice-record a conversation, get a structured case study. A support worker can record a two-minute reflection after a session or, with consent, record a conversation with a beneficiary. AI transcription converts the audio to text, and AI summarisation extracts the key themes, outcomes, and quotes into a structured case study format. What previously required 30-45 minutes of writing can be completed in under 5 minutes.
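
As one possible shape for that pipeline, the sketch below uses the OpenAI Python client for transcription and summarisation. The model names and prompt are assumptions, and this is an illustration rather than any particular product's implementation.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def voice_note_to_case_study(audio_path: str) -> str:
    """Transcribe a recorded reflection, then draft a structured case study."""
    with open(audio_path, "rb") as audio:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio
        )
    draft = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": ("Turn this support worker's reflection into a case "
                         "study with sections: starting point, support "
                         "received, change observed, and direct quotes.")},
            {"role": "user", "content": transcript.text},
        ],
    )
    return draft.choices[0].message.content

# The worker reviews and edits the draft; AI removes the typing, not the judgement.
```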

Generate funder reports from existing data. Rather than writing a fresh narrative report from scratch for each funder, AI can draft a report based on the monitoring data, case studies, and outcome measurements already held in the system. The staff member reviews and edits the draft rather than starting from a blank page. This is particularly valuable for charities reporting to multiple funders, where the same underlying data needs to be presented in different formats.

Tools like Plinth implement all three of these capabilities. The platform's scan paper register feature uses AI to extract attendance data from photographs of handwritten sign-in sheets. Its live recording studio transcribes and summarises voice recordings into structured case notes and case studies. And its AI report writer generates draft funder reports from the charity's actual programme data. Plinth offers a free tier, making these tools accessible to small organisations with limited budgets.

The important point is that AI does not replace human judgement. It replaces data entry. The support worker still decides what is significant. The programme manager still reviews the report. The funder still reads the evidence and makes decisions. AI simply removes the tedious, time-consuming step of turning raw observations into structured data.

What Should a Good Monitoring Framework Include?

A well-designed monitoring framework balances four concerns: accountability to funders, usefulness to the delivery organisation, respect for beneficiary time, and proportionality to the grant. Here is a practical framework that addresses all four.

Outputs

Outputs are the direct, countable products of activity: sessions delivered, people reached, hours of support provided. They are easy to collect, hard to dispute, and should be the foundation of any monitoring system. For small grants, outputs alone may be sufficient.

Outcomes

Outcomes describe change: skills gained, wellbeing improved, employment secured. They are harder to measure but more meaningful. For grants above 5,000 pounds, including at least one or two outcome measures makes reporting significantly more useful. Use validated tools where they exist, such as the Warwick-Edinburgh Mental Wellbeing Scale for mental health, or the Outcomes Star for holistic support.
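
As a concrete example of how lightweight a validated measure can be: the full WEMWBS presents 14 statements, each rated 1 to 5, and the score is the simple sum, giving a range of 14 to 70. A minimal scoring sketch, with illustrative responses:

```python
def wemwbs_score(responses: list[int]) -> int:
    """Score the 14-item Warwick-Edinburgh Mental Wellbeing Scale.

    Each item is rated 1 (none of the time) to 5 (all of the time);
    the total is a simple sum, so scores range from 14 to 70.
    """
    if len(responses) != 14 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("Expected 14 responses, each between 1 and 5")
    return sum(responses)

before = wemwbs_score([2, 3, 2, 3, 2, 3, 2, 3, 2, 3, 2, 3, 2, 3])  # 35
after = wemwbs_score([4, 4, 3, 4, 3, 4, 3, 4, 3, 4, 3, 4, 3, 4])   # 50
print(f"Change over the programme: {after - before:+d} points")
```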

Case studies

A single well-told story of change can be more compelling than a table of numbers. Case studies should capture the beneficiary's starting point, the support received, and the change that resulted. They are particularly effective when generated from real interactions rather than written retrospectively from memory.

Beneficiary feedback

Short, anonymous feedback from the people using the service provides a check on whether the data tells the full story. A three-question survey at the end of a programme is enough: was the service useful, would you recommend it, what could be improved?

Learning and reflection

The most useful monitoring reports include a section where the delivery team reflects on what worked, what did not, and what they would change. This is often the section funders find most valuable, yet it is frequently omitted because staff run out of time filling in the quantitative sections.
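
Taken together, these five components suggest a simple shape for a monitoring record. The sketch below is illustrative, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class MonitoringReport:
    """One reporting period, covering the five components above."""
    # Outputs: countable and hard to dispute
    sessions_delivered: int
    people_reached: int
    # Outcomes: one or two validated measures, recorded as (name, score)
    outcome_measures: list[tuple[str, float]] = field(default_factory=list)
    # Case studies: short stories of change
    case_studies: list[str] = field(default_factory=list)
    # Beneficiary feedback: the three-question check
    feedback_summary: str = ""
    # Learning and reflection: often the section funders value most
    what_worked: str = ""
    what_we_would_change: str = ""
```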

How Should Funders Handle Different Grant Types?

Not all grants are the same, and monitoring should reflect that. The biggest mistake funders make is applying a one-size-fits-all framework across their entire portfolio. A micro-grants programme, a multi-year strategic partnership, and a capital build project each need fundamentally different approaches.

| Grant type | Monitoring approach | Evidence focus | Reporting effort |
| --- | --- | --- | --- |
| Micro-grants (under 5,000 pounds) | End-of-grant snapshot | Outputs, photos, short narrative | 30 minutes to 1 hour |
| Project grants (5,000 to 50,000 pounds) | Mid-point plus final report | Outputs, 1-2 outcome measures, 1 case study | Half a day per report |
| Strategic partnerships (over 50,000 pounds) | Quarterly updates, annual review, final evaluation | Full outcome framework, multiple case studies, learning | 1-2 days per quarter |
| Capital grants | Milestone-based reporting | Progress against plan, photos, financial reconciliation | Varies by milestone |
| Emergency or crisis grants | Light-touch narrative at completion | What happened, who was reached, what was the immediate result | 1 hour maximum |

IVAR's research on statutory funding found that monitoring requirements increase significantly for grants over 50,000 pounds, and that prescribed outcomes are often treated as the minimum expectation for large-scale grants (IVAR, Reconsidering Risk). The challenge is to ensure that the increase in reporting for larger grants is genuinely proportionate rather than simply a legacy of risk-averse institutional habits.

For micro-grants in particular, the administrative cost of monitoring can easily exceed the cost of the grant itself. A proportionate approach for a 1,000-pound community grant might be a single page: what did you do, who benefited, what did you learn. That is enough.

How Can Charities Reuse Data Across Multiple Funders?

The "collect once, report many times" principle is one of the most effective ways to reduce burden. If a charity collects high-quality data as part of its normal operations, it should be able to use that data to satisfy multiple funders without starting from scratch each time.

In practice, this requires three things:

  1. A single source of truth. All programme data (attendance records, outcome measurements, case studies, and financial information) should live in one place. When data is scattered across spreadsheets, email attachments, and paper files, every funder report requires a manual assembly process.

  2. Flexible reporting formats. The underlying data stays the same, but the presentation changes to meet each funder's requirements. An AI-powered reporting tool can take the same attendance data, outcome scores, and case studies and present them in different formats, word counts, and structures depending on the funder (see the sketch after this list).

  3. Good data from the start. If the original data collection is sloppy, no amount of clever reporting will fix it. Investing in simple, low-burden data capture at the point of delivery pays dividends when report writing begins.
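
Here is a minimal sketch of the flexible-formats idea from step 2: one set of collected-once data rendered into two funder templates. The funder formats are invented for illustration; a real reporting tool would do the same with richer templates.

```python
# The same underlying data, rendered to two invented funder templates.
DATA = {
    "people_reached": 142,
    "sessions": 36,
    "headline_outcome": "average wellbeing score rose 6 points",
}

TEMPLATES = {
    "funder_a": ("Summary (max 100 words): We delivered {sessions} sessions "
                 "to {people_reached} people; {headline_outcome}."),
    "funder_b": ("Q1. Reach: {people_reached}\n"
                 "Q2. Activity: {sessions} sessions\n"
                 "Q3. Outcomes: {headline_outcome}"),
}

def render_report(funder: str, data: dict) -> str:
    """Present the same collected-once data in a funder-specific format."""
    return TEMPLATES[funder].format(**data)

for funder in TEMPLATES:
    print(f"--- {funder} ---")
    print(render_report(funder, DATA))
```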

Platforms like Plinth are designed around this principle. Data enters the system once, whether through photographed registers, voice recordings, survey responses, or manual entry, and is then available to generate tailored reports for different funders. The AI report writer draws on the same underlying data to produce distinct funder reports, each formatted to the specific requirements. Charities using Plinth's free tier can access the core data collection and reporting features without any cost.

This approach transforms reporting from a dreaded quarterly exercise into a near-automatic process. The charity's main responsibility shifts from writing reports to checking that the AI-generated drafts accurately reflect the work.

How Can Funders Close the Loop and Share Learning?

Collecting data is only half the equation. What funders do with the data they receive determines whether the monitoring process is useful or merely bureaucratic.

The most common complaint from charities about monitoring is not that they have to do it, but that their reports appear to disappear into a void. They submit them and hear nothing back: no feedback, no questions, no evidence that anyone read them. This is demoralising and wasteful.

Effective funders close the loop in several ways:

  • Provide feedback on every monitoring report. Even a brief email acknowledging receipt and highlighting one observation shows that the report was read and valued.
  • Share anonymised insights across the portfolio. If common challenges or successful approaches emerge from multiple grants, share those findings with all grantees. This turns individual monitoring into collective learning.
  • Adjust programme guidance based on evidence. If monitoring data reveals that a particular approach is not working, use that evidence to update guidance for future rounds. This demonstrates that the data has a purpose beyond compliance.
  • Publish aggregate findings. Publishing portfolio-level findings, anonymised where necessary, contributes to the sector's evidence base and demonstrates that monitoring serves a wider purpose.

IVAR's Better Reporting principles include a commitment that funders should give feedback on every grant report they receive (IVAR, Better Reporting). This is a low-cost action that significantly improves the grantee experience and the quality of future reporting.

When charities see that their data leads to better programmes, better funding decisions, and genuine learning, reporting stops feeling like a burden and starts feeling like a contribution. That shift in perception is worth more than any redesigned template or shortened form.

Frequently Asked Questions

Do small grants really need monitoring reports?

Yes, but the reports should be proportionate to the grant size. For grants under 5,000 pounds, a brief end-of-grant snapshot covering what was delivered, who benefited, and one or two key numbers is usually sufficient. IVAR's guidance suggests that light-touch reporting for small grants reduces burden while still providing adequate accountability.

Can funders accept voice recordings or photographs as evidence?

Yes. Voice recordings, photographs, and video clips are all valid forms of evidence, particularly for qualitative outcomes that are difficult to capture in a form. AI tools can transcribe voice recordings into structured text and extract data from photographs of paper registers. Plinth's live recording studio converts voice-recorded conversations into structured case notes automatically.

How do we avoid asking charities for information we already have?

Pre-populate monitoring forms with data from the original application, including organisational details, project descriptions, and intended outcomes. Only ask grantees to report on what has changed since the application was submitted. Over 300 UK funders now publish grants data through the 360Giving Data Standard, which helps identify what information is already available publicly (360Giving, 2025).

What if different funders require different outcome measures?

Where possible, use standardised outcome frameworks that multiple funders recognise. When that is not possible, collect the data once in a detailed format and then extract the specific measures each funder needs. AI-powered reporting tools can generate tailored reports from the same underlying data set, saving charities from duplicating their data collection for each funder.

How often should funders request monitoring reports?

Frequency should match the grant period and size. For grants under 12 months, a single end-of-grant report is usually proportionate. For multi-year grants, six-monthly or quarterly updates are reasonable for larger awards. IVAR recommends that funders consider phone calls or site visits as alternatives to written reports for mid-grant check-ins, reducing the administrative load while maintaining a genuine relationship.

Is it worth investing in a monitoring platform for a small charity?

It can be, provided the platform does not create more work than it saves. The key is to choose a tool that simplifies data entry rather than just digitising paper forms. Platforms like Plinth offer a free tier specifically designed for smaller organisations, with features such as AI-powered register scanning and voice-to-case-study that reduce manual data entry rather than just moving it online.

How can we ensure beneficiary data is collected ethically?

Follow GDPR principles: collect only what is necessary, explain clearly how data will be used, obtain consent, and anonymise where possible. Beneficiary feedback should be voluntary, not coerced. When using voice recording, ensure informed consent is obtained from all participants. The ICO's guidance on data protection for charities provides a practical starting point.

What is the single most effective change a funder can make?

Accept existing reports. If a charity has already produced a monitoring report for another funder that covers the same programme, accepting that report, perhaps with one or two supplementary questions, eliminates the single largest source of unnecessary duplication in the sector. It costs the funder nothing and saves the charity hours of work.

Last updated: February 2026