Impact Reporting in the AI Era

How AI is transforming impact reporting for charities and funders — from automated data aggregation and AI-generated narratives to real-time dashboards and personalised donor reports.

By Plinth Team

Impact reporting has long been one of the most time-consuming obligations in the charity sector. Charities collectively spend an estimated 15.8 million hours every year filling out reports for funders, with the average grant taking 40 hours to report on (Plinth/Timetospare analysis). That figure does not include the hours spent writing annual impact reports, donor updates, or board papers. For most organisations, impact reporting is not a strategic function — it is a survival task, squeezed between service delivery and fundraising.

AI is beginning to change that. Not by replacing the human judgement and lived experience that sit at the heart of meaningful impact communication, but by handling the mechanical work: aggregating data from multiple sources, structuring narratives from raw programme records, generating first-draft report sections, and making it possible to personalise communications at a scale that was previously impossible for small teams.

This guide explains how AI is transforming impact reporting for both charities and funders — what the technology can genuinely do now, where the limits lie, and how to build a reporting process that is faster, more credible, and more useful to the audiences who matter.

Why Impact Reporting Is Broken

The core problem with impact reporting is structural. Data lives in multiple places — spreadsheets, case management systems, paper records, survey tools, finance software — and bringing it together for each reporting deadline is done manually by staff who have no dedicated time for it.

According to the Plinth/Timetospare analysis of UK grantmaking, charities with enough income to employ at least one full-time member of staff typically have between five and ten active funders simultaneously, each requiring a different report format, different metrics, and different terminology. Even when two funders want the same information, their question structures rarely align. The result is that charities end up with five to ten report templates they fill in from scratch each cycle, duplicating effort at every turn.

This problem compounds at the sector level. UK grantmakers distributed over £20 billion in grants in 2022-23 (UKGrantmaking, 2024), and the volume of reporting that generates runs into tens of millions of documents annually. Yet research suggests that the primary purpose of much of this data is to demonstrate rigour to funders rather than to inform programme improvement. Academic analysis of charity impact reporting — including Julia Morley's widely cited work on "business washing" — found little evidence that charities were using the data they collected to deliver better results.

AI cannot fix the structural mismatch between different funders' reporting requirements. But it can dramatically reduce the cost of meeting those requirements — and, crucially, it can unlock the data that is already being collected so it becomes useful beyond the report it was gathered for.

What AI Can Actually Do in Impact Reporting

The capabilities of AI in impact reporting fall into four broad categories, each at a different stage of maturity:

Data aggregation and standardisation. AI tools can connect to multiple data sources — programme databases, CRM systems, finance software, survey platforms — and pull together a unified dataset without manual export and re-entry. This is the most mature and reliable application; it is essentially structured data processing, well within the capabilities of current systems.
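To make the aggregation step concrete, here is a minimal sketch in Python of mapping exports from two different systems onto one shared schema. The field names (`client_ref`, `respondent`, and so on) are illustrative assumptions, not the API of any real case management or survey tool.

```python
# Hypothetical sketch: merging exports from a case management tool and
# a survey platform into one unified dataset. Field names and schemas
# are assumptions for illustration only.

def normalise_case_record(rec):
    """Map a case-management export row onto the shared schema."""
    return {
        "beneficiary_id": rec["client_ref"],
        "date": rec["session_date"],
        "metric": "sessions_attended",
        "value": 1,
    }

def normalise_survey_record(rec):
    """Map a survey-tool export row onto the shared schema."""
    return {
        "beneficiary_id": rec["respondent"],
        "date": rec["submitted_at"],
        "metric": "wellbeing_score",
        "value": rec["score"],
    }

def aggregate(case_rows, survey_rows):
    """Pull both sources into a single list of uniform records."""
    unified = [normalise_case_record(r) for r in case_rows]
    unified += [normalise_survey_record(r) for r in survey_rows]
    return unified

cases = [{"client_ref": "B001", "session_date": "2026-01-06"}]
surveys = [{"respondent": "B001", "submitted_at": "2026-01-08", "score": 7}]
dataset = aggregate(cases, surveys)
print(len(dataset))  # 2 unified records
```

Once every source lands in the same shape, each downstream report or dashboard reads from one dataset rather than re-querying each system.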

Narrative generation from structured data. AI can take a dataset of outcomes — number of people supported, improvement scores, case study summaries — and generate a structured narrative section. According to Raisely's 2025 Fundraising Benchmarks research, 47% of fundraisers see AI as their biggest opportunity for digital fundraising, and 48% of charities are now using AI to draft documents and reports, up from 28% the previous year (Charity Digital Skills Report, 2025). This is the fastest-growing application, though it requires careful human review.

Real-time dashboards for funders and boards. Rather than producing periodic reports, AI-powered dashboard tools allow funders and trustees to view live programme data whenever they need it. This transforms the relationship between funders and grantees from periodic accountability to continuous transparency.

Personalised donor reports. AI can generate individualised impact updates for donors — telling each person specifically what their contribution funded, with relevant outcomes data. This was previously viable only for major donors; AI makes it possible at any scale.

The State of AI Adoption in the Charity Sector

Adoption of AI across the charity sector has accelerated sharply. According to Civil Society News, three-quarters of UK charities are now using AI tools, up from 61% the previous year. The proportion using AI strategically (rather than experimentally) has risen from 11% to 25% in a single year (Charity Digital Skills Report, 2025).

However, adoption is uneven. Larger, well-resourced organisations are pulling ahead rapidly while smaller charities struggle to keep pace. The same research found that 68% of small charities remain in the early stages of digital adoption. This matters for impact reporting specifically because smaller charities often have the most fragmented data and the least capacity to produce polished reports — which is precisely the context where AI assistance could have the greatest effect.

There are also legitimate concerns about AI accuracy in this domain. According to Nonprofit Tech for Good's 2026 AI statistics report, 63% of nonprofit professionals worry about accuracy in generative AI outputs. For impact reporting — where overstating outcomes carries real reputational and ethical risk — this concern is well-founded. Charities have long been criticised for "publication bias," cherry-picking positive results while burying neutral or negative findings. AI that is trained on existing reports could inadvertently amplify this tendency if not properly governed.

Real-Time Dashboards: From Periodic Reports to Continuous Transparency

One of the most significant shifts AI enables is the move from periodic impact reporting to continuous transparency. Traditional impact reports are snapshots — a summary of what happened over six or twelve months, produced weeks after the period ends and often read months after that. By the time a funder reads a mid-year report, the programme it describes may have already adapted significantly.

Real-time dashboards change this dynamic. Platform teams that integrate qualitative and quantitative data into live dashboards report that programme managers can identify problems and adapt within days, rather than waiting for the next quarterly report. Funders who have access to live data spend less time chasing progress updates and can have more substantive conversations about strategy and support.

The practical requirements for a live dashboard are not as demanding as they might seem. The data does not need to be complex — a weekly count of beneficiaries served, a running tally of outcomes achieved against targets, and a few case study summaries updated monthly will serve the majority of funder needs. What matters is that data entry happens consistently in real time, rather than being reconstructed retrospectively for each report cycle.
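The minimal dashboard feed described above can be sketched in a few lines of Python. The record format is an assumption for the example; the point is that weekly counts and a running tally against target are simple computations once data is entered consistently.

```python
from collections import Counter
from datetime import date

# Illustrative records: one row per beneficiary interaction.
records = [
    {"beneficiary": "B001", "date": date(2026, 1, 5), "outcome_met": True},
    {"beneficiary": "B002", "date": date(2026, 1, 7), "outcome_met": False},
    {"beneficiary": "B003", "date": date(2026, 1, 14), "outcome_met": True},
]

def weekly_counts(records):
    """Count beneficiaries served per ISO (year, week)."""
    weeks = Counter()
    for r in records:
        iso = r["date"].isocalendar()
        weeks[(iso[0], iso[1])] += 1
    return dict(weeks)

def progress_against_target(records, target):
    """Running tally of outcomes achieved against an annual target."""
    achieved = sum(1 for r in records if r["outcome_met"])
    return {"achieved": achieved, "target": target,
            "pct": round(100 * achieved / target, 1)}

print(weekly_counts(records))                # counts per ISO week
print(progress_against_target(records, 50))  # running tally vs target
```

The hard part is not the computation but the discipline of recording interactions as they happen rather than reconstructing them at reporting time.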

This is where integration between programme delivery systems and reporting platforms becomes essential. When case workers record interactions in a case management system, when survey responses are collected through a connected tool, and when activity logs are updated in real time, a dashboard can surface meaningful intelligence without any additional data entry. For charities using Plinth, programme data collected through the platform feeds directly into funder reports and dashboard views — eliminating the manual aggregation step that consumes so much staff time.

AI-Generated Narratives: Opportunity and Responsibility

The ability of AI to generate narrative text from structured data is both the most exciting and the most contested capability in this space. Done well, it can transform dry tables of numbers into a coherent story about what a programme achieved and why it mattered. Done poorly — or without adequate human oversight — it can produce plausible-sounding text that misrepresents what the data actually shows.

The appropriate framework for AI-generated narrative in impact reporting is human-in-the-loop production: AI generates a structured draft; a human with programme knowledge reviews, corrects, and enriches it; the final version reflects both the efficiency of AI and the judgement of someone who actually knows the work.

What AI is particularly good at is structure. Given a set of data points, it can produce a report that covers all the required elements in a logical order, uses consistent terminology, and matches the format a particular funder expects. What it cannot do is supply the contextual knowledge that explains why outcomes were what they were, what the team learned, and what it means for future delivery. Those elements must come from humans.
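The division of labour above can be sketched with a deterministic template standing in for the language model: the structural step assembles the funder's required sections in order from verified data, and leaves an explicit placeholder where human context is required. Section names and data fields are assumptions for illustration.

```python
# Sketch of the structural step only. A deterministic template stands
# in for an AI model; the human-written context is a placeholder.

outcomes = {
    "people_supported": 214,
    "target": 200,
    "avg_improvement": "18% on the wellbeing scale",
}

def draft_report(data):
    """Assemble required sections in the funder's expected order."""
    parts = []
    parts.append("Reach\nWe supported {people_supported} people against "
                 "a target of {target}.".format(**data))
    parts.append("Outcomes\nParticipants improved by an average of "
                 "{avg_improvement}.".format(**data))
    parts.append("Learning\n[HUMAN INPUT REQUIRED: what the team "
                 "learned and what changes next cycle]")
    return "\n\n".join(parts)

print(draft_report(outcomes))
```

Keeping the human-only sections as explicit placeholders, rather than letting a model fill them, is one way to enforce the human-in-the-loop boundary in the workflow itself.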

For charities producing multiple reports for multiple funders simultaneously, AI-assisted drafting can reduce the production time for each report from ten to fifteen hours to two or three — not because the AI does all the work, but because starting from a structured draft is far faster than starting from a blank page.

Personalised Donor Reports at Scale

Personalisation in donor communications has long been associated with major donor fundraising — bespoke updates sent by relationship managers to individuals who give £10,000 or more. AI is making it possible to deliver a version of this personalisation to mid-level and even small-scale donors at no additional marginal cost.

The principle is straightforward: rather than sending all donors the same generic impact newsletter, AI tools can generate individualised messages that reference each donor's specific giving history, connect it to relevant programme outcomes, and frame the impact in terms that match what the donor originally said they cared about. Research from Raisely suggests that personalised impact communication can significantly increase donor retention and upgrade rates, though charities should be cautious about attributing causation without robust testing.

The data required to do this well — donor giving history, stated preferences, programme outcome data segmented by project or geography — is typically already held by charities in their CRM and programme systems. The barrier has not been data availability but the staff time required to manually personalise communications. AI removes that barrier.
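A minimal sketch of that join, assuming hypothetical CRM and outcomes records (all field names and figures here are illustrative):

```python
# Hedged sketch: personalising a donor update by joining CRM giving
# history to programme outcome data. Fields and figures are assumptions.

donors = {
    "D042": {"name": "Sam", "gave": 250, "project": "youth-mentoring"},
}
project_outcomes = {
    "youth-mentoring": {
        "cost_per_place": 50,
        "outcome": "completed a six-month mentoring programme",
    },
}

def donor_update(donor_id):
    """Tie one donor's gift to the specific outcomes it funded."""
    d = donors[donor_id]
    p = project_outcomes[d["project"]]
    places = d["gave"] // p["cost_per_place"]
    return (f"Dear {d['name']}, your £{d['gave']} funded {places} "
            f"places for young people who {p['outcome']}.")

print(donor_update("D042"))
```

In practice the generation step would use a language model rather than an f-string, but the grounding step is the same: every personalised claim traces back to a record the charity already holds.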

The ethical considerations here are important. Donors should understand when communications have been AI-assisted, and the personalisation should be grounded in genuine programme data rather than generated fiction. The same principles of accuracy and transparency that apply to funder reporting apply equally to donor communications.

Comparison: Traditional vs AI-Assisted Impact Reporting

Dimension | Traditional Approach | AI-Assisted Approach
Data aggregation | Manual export from multiple systems | Automated aggregation from connected platforms
Report drafting | Written from scratch each cycle | Structured draft generated from data; human review
Funder report format | Manually reformatted for each funder | Template-matched automatically
Reporting frequency | Periodic (quarterly, annual) | Continuous (live dashboards available)
Donor updates | Generic newsletter or major donor only | Personalised at scale
Staff time per report | 10–40 hours | 2–8 hours (with human review)
Risk of error | High (manual data transfer) | Moderate (requires human accuracy check)
Learning value | Low (retrospective snapshot) | Higher (live data enables faster adaptation)

Governance and Quality Assurance

The shift to AI-assisted reporting does not reduce the need for governance — it changes its character. Organisations need to establish clear policies about what AI can and cannot do in their reporting processes, who is responsible for reviewing AI-generated content before it is published or submitted, and how discrepancies between AI output and actual programme experience should be handled.

Several practical safeguards are essential. First, any statistic in an AI-generated report must be traceable to a verified data source — AI should never be left to estimate or infer figures. Second, beneficiary stories and case studies included in reports must be drawn from genuine, consented accounts, not generated by AI. Third, AI-generated reports should go through the same sign-off process as manually produced ones — a senior staff member or trustee should review the final version, not just the AI draft.
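The first safeguard, that every statistic must be traceable to a verified source, can be partially automated. Here is a simple sketch that flags any number in a draft that does not appear in the verified dataset; the draft text and figures are illustrative, and a production check would also handle dates, percentages, and currency formats.

```python
import re

# Sketch: flag numbers in an AI draft that are absent from the set of
# verified figures. Figures and draft text are illustrative assumptions.

verified_figures = {214, 200, 18}

def unverified_numbers(draft):
    """Return numbers in the draft not found in the verified set."""
    found = {int(n) for n in re.findall(r"\b\d+\b", draft)}
    return found - verified_figures

draft = "We supported 214 people against a target of 200, a rise of 21%."
print(unverified_numbers(draft))  # the unverified figure, flagged for review
```

A check like this does not replace human review, but it narrows the reviewer's attention to exactly the claims that lack a data trail.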

These safeguards are not burdensome if built into the production workflow from the start. The issue arises when AI is used to accelerate a process that was already poorly governed — in that case, AI speeds up the production of unreliable reports rather than reducing the burden of producing reliable ones.

What This Means for Funders

The AI era creates new expectations for funders as well as grantees. Funders who continue to require bespoke reports in proprietary formats with inconsistent terminology are effectively taxing their grantees' capacity — and the AI tools that could otherwise reduce that burden will instead be used to automate the process of jumping through the same hoops slightly faster.

Forward-looking funders are redesigning their reporting requirements around shared data standards, accepting live dashboard access in lieu of periodic narrative reports, and using the time freed up for more substantive conversations with grantees about strategy and learning. The Association of Charitable Foundations and others have published principles for proportionate reporting that point in this direction.

For grantees, the practical message is to invest in platforms and processes that separate data collection from report production — so that outcome data captured during programme delivery can feed multiple reports and dashboards without being re-entered each time. Platforms like Plinth are designed with this architecture in mind, connecting programme data directly to funder reporting workflows.

FAQ

Is AI-generated impact reporting accurate enough to trust?

AI-generated reports are only as accurate as the data they draw on. If programme data is collected consistently and is stored in a connected system, AI can aggregate and structure it reliably. The narrative layer requires human review — AI can draft text that is coherent and well-structured but may misrepresent nuance or context. Treat AI output as a rigorous first draft, not a finished product.

Do funders accept AI-generated reports?

Most funders do not have explicit policies about AI-generated content in reports, and a growing share of grant reports submitted today is produced with some form of AI assistance. What funders care about is accuracy and authenticity — a report that accurately reflects what happened and is signed off by accountable humans is acceptable regardless of how it was drafted.

Will AI replace the staff who currently produce impact reports?

No. AI reduces the mechanical work of impact reporting — data aggregation, structural drafting, format matching — but the substantive judgements about what the data means, what the organisation learned, and what it plans to do differently require human expertise that cannot be automated. AI is most usefully understood as a tool that frees up staff time for higher-value work, not one that eliminates the need for skilled communicators.

How much does AI impact reporting software cost?

Costs vary significantly. Some AI reporting features are embedded in grant management platforms like Plinth, which has a free tier available. Standalone AI report-generation tools typically cost between £50 and £500 per month. The ROI case is usually straightforward: if AI saves ten hours of staff time per report and a charity produces ten reports per year, saving 100 staff hours at £20–£30 per hour yields £2,000–£3,000 annually in time savings alone.

What data do I need to have in place before using AI for impact reporting?

The minimum requirement is that programme outcome data is stored in a digital system rather than on paper or in formats that cannot be read by software. Ideally, that system is connected to the reporting tool. Before adopting AI-assisted reporting, most organisations benefit from a data audit — understanding what data they hold, where it lives, and whether it is consistent and reliable enough to serve as the basis for AI-generated reports.

Can AI help with Charity Commission annual returns as well as funder reports?

AI tools can assist with the narrative sections of statutory annual reports and accounts, though the financial and governance disclosures require specialist input and should not be AI-generated without qualified review. The AI-assisted approach works best for the qualitative sections — the activities for public benefit statement, the overview of significant activities, and the future plans section.

Is there a risk that AI makes impact reports less honest?

There is a genuine risk if AI is used to generate reports without adequate human oversight. AI trained on existing charity reports will tend to reproduce the optimistic framing that has characterised much sector reporting. The safeguard is not to avoid AI but to ensure that human reviewers are explicitly responsible for accuracy and that negative results — things that did not work as planned — are included alongside positive ones.

Last updated: February 2026