Predictive Analytics for Funders: How Data Can Improve Grant Outcomes

How funders can use predictive analytics to forecast grant outcomes, allocate resources effectively, and improve grantmaking decisions responsibly.

By Plinth Team

Predictive analytics is the practice of using historical data, statistical models, and — increasingly — machine learning to estimate what is likely to happen next. For funders, this means moving beyond gut feeling and backward-looking reports towards evidence-based forecasts: which grants are most likely to achieve their intended outcomes, which are at risk of underperformance, and where resources could be targeted more effectively.

This is not a futuristic concept. Open Philanthropy launched a dedicated Forecasting programme in 2024, with grantmaking staff routinely making and tracking probabilistic predictions about grant outcomes (Open Philanthropy, 2024). A 2025 report from Bonterra found that 91% of funders believe AI will positively transform philanthropy and grantmaking within three years (Bonterra, 2025). Yet for most UK foundations, the practical application of predictive analytics remains in its early stages.

The opportunity is significant. Over 14,000 UK grantmakers distributed more than £23 billion in 2023-24, according to the UKGrantmaking project (UKGrantmaking, 2024). With grant volumes of that scale, even modest improvements in allocation accuracy or risk identification could unlock substantial additional impact. The question is not whether data-driven approaches have a role in grantmaking — it is how to implement them responsibly and proportionately.


What Is Predictive Analytics in a Grantmaking Context?

Predictive analytics in grantmaking means using patterns from past grants to estimate future outcomes. At its simplest, this could involve analysing which characteristics of previously funded projects — grant size, organisation type, geography, programme area — correlated with successful delivery. At its most advanced, it uses machine learning models trained on thousands of historical grant records to flag risks and forecast results.

The distinction from standard reporting is one of direction. Traditional grant reporting — descriptive analytics — tells you what happened. Diagnostic analytics tells you why it happened. Predictive analytics estimates what is likely to happen if you make a particular decision. A fourth category, prescriptive analytics, goes further by recommending specific actions.

For most funders, the practical starting point sits somewhere between descriptive and predictive: segmenting your portfolio by risk level, identifying early warning indicators of delivery problems, and forecasting demand for future funding rounds.

| Analytics type | Question it answers | Grantmaking example |
| --- | --- | --- |
| Descriptive | What happened? | 73% of grants in our youth programme met targets last year |
| Diagnostic | Why did it happen? | Grants to organisations with prior delivery experience performed better |
| Predictive | What is likely to happen? | This applicant has an 82% probability of meeting milestones based on similar past grants |
| Prescriptive | What should we do? | Increase monitoring frequency for grants scoring below the risk threshold |

PEAK Grantmaking notes that predictive analytics "takes historical information and combines it with data science and math to comment on future events," making it well suited for funders seeking to increase the impact of every pound allocated (PEAK Grantmaking, 2024).


Why Should Funders Care About Predictive Analytics Now?

Three converging trends make predictive analytics increasingly relevant for UK funders.

Rising application volumes are straining capacity. ACF's Foundations in Focus 2025 report found that many UK foundations experienced application increases of 50-60%, with some seeing rises of 100-400%, partly driven by AI tools lowering barriers to applying (ACF/UKGrantmaking, 2025). When application volumes surge, the ability to triage effectively — identifying which proposals are most likely to succeed — becomes a practical necessity rather than a luxury.

Data infrastructure is improving. Over 300 funders now publish their grants data using the 360Giving Data Standard, with more than one million grants searchable through GrantNav (360Giving, 2024). This growing pool of structured, open data creates new possibilities for cross-sector analysis that individual funders could never achieve alone.

The sector is maturing on AI and data use. The Bonterra survey found that while 91% of funders are optimistic about AI in philanthropy, 92% simultaneously expressed concern about data ethics and equity implications (Bonterra, 2025). This healthy scepticism suggests the sector is approaching these tools with appropriate caution — exactly the mindset needed for responsible adoption.

For UK foundations specifically, the Charity Commission's register of over 170,000 charities in England and Wales (Charity Commission, 2025) represents a rich data ecosystem when combined with open grants data, accounts filings, and outcome reporting. The raw material for predictive analytics already exists; the challenge is organising and interpreting it.


What Can Predictive Analytics Actually Do for Funders?

Predictive analytics has several practical applications across the grant lifecycle. The key is starting with questions that matter, not with technology for its own sake.

Forecasting grant completion and outcomes. By analysing which factors correlated with successful delivery in previous rounds, funders can estimate the likelihood that new grants will achieve their milestones. Variables might include the applicant organisation's track record, financial stability, team capacity, and the complexity of what they are proposing.
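
As a minimal sketch of what such a forecast could look like in practice, the snippet below fits a logistic regression to a handful of entirely synthetic grant records. The three features (prior grants delivered, grant size, assessor-rated complexity) and all figures are invented for illustration; a real model would need at least 100-200 completed grants with consistent outcome data.

```python
# Sketch: forecasting milestone completion with logistic regression.
# All data below is synthetic and illustrative only.
from sklearn.linear_model import LogisticRegression

# Features per past grant: [prior grants delivered, grant size in £10k, complexity 1-5]
X = [
    [5, 2, 1], [0, 8, 4], [3, 3, 2], [1, 6, 5],
    [6, 1, 1], [0, 9, 5], [4, 4, 2], [2, 5, 3],
]
# Outcome: 1 = met milestones, 0 = did not
y = [1, 0, 1, 0, 1, 0, 1, 1]

model = LogisticRegression().fit(X, y)

# Estimate the probability that a new applicant meets milestones
new_applicant = [[3, 4, 2]]  # experienced org, mid-size grant, moderate complexity
prob = model.predict_proba(new_applicant)[0][1]
print(f"Estimated probability of meeting milestones: {prob:.0%}")
```

A logistic regression is chosen here deliberately: its coefficients can be read directly, which matters for the transparency concerns discussed later in this article.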

Identifying risk early. Rather than waiting for a final report to discover that a grant underperformed, predictive models can flag early warning signs — missed interim milestones, changes in key personnel, or financial indicators that suggest cash flow difficulties. This allows funders to offer proactive support rather than retrospective judgement.

Optimising portfolio allocation. Segmenting a portfolio by predicted risk or impact helps funders allocate monitoring resources proportionately. High-confidence grants need lighter oversight; higher-risk grants benefit from closer engagement. IVAR's work on proportionate reporting confirms that right-sizing monitoring to grant risk improves both efficiency and the funder-grantee relationship (IVAR, 2024).

Improving programme design. Analysing patterns across completed grants can reveal whether certain programme structures, grant sizes, or delivery models consistently produce better results. This feeds into strategy: should you fund fewer, larger grants or more, smaller ones? Should you require match funding, or does it create barriers without improving outcomes?

Demand forecasting. Predicting application volumes and characteristics for upcoming funding rounds helps with operational planning — staffing assessments, budget allocation, and communications timing.


What Data Do You Actually Need?

The most common obstacle to predictive analytics is not the lack of sophisticated models — it is the quality and consistency of the underlying data. A survey by EveryAction found that 90% of nonprofits collect data, but almost half say they are not fully aware of how data can improve their work (EveryAction). On the funder side, PEAK Grantmaking notes that 80% or more of what funders collect from grant applications is unstructured text, which is difficult to analyse systematically (PEAK Grantmaking, 2024).

To build useful predictive models, funders need four categories of data:

1. Application data. Organisation characteristics (size, age, sector, geography), project details (budget, duration, activities), and assessment scores from reviewers.

2. Monitoring data. Progress reports, milestone completion dates, financial returns, and any mid-grant assessments or site visit notes.

3. Outcome data. Whether the grant achieved its stated objectives, using consistent measures applied across a programme or portfolio. This is where standardised outcome frameworks — such as shared outcomes agreed between funders and grantees — become essential.

4. Contextual data. External factors that may influence outcomes: local deprivation indices (from ONS), sector trends, or policy changes affecting the issue area.

The critical requirement is consistency. A model trained on data where "success" is defined differently for every grant will produce unreliable predictions. Funders who use standardised application forms, shared outcome measures, and structured reporting templates are far better positioned for analytics than those relying on bespoke narratives for each grant.


How Do You Start Without a Data Science Team?

Most UK foundations do not have in-house data scientists, and they do not need them to begin. The most impactful first steps are organisational, not technical.

Step 1: Standardise your data collection. Ensure every grant in your portfolio uses the same fields, outcome categories, and reporting formats. If you currently accept free-text reports in varying formats, this is the single highest-value change you can make. Consistent data is a prerequisite for any analytics.

Step 2: Define what "success" means. Before you can predict outcomes, you need a clear, measurable definition of what a good outcome looks like for each programme. This might be milestone completion, beneficiary numbers, evidenced outcome change, or a combination.

Step 3: Start with descriptive analytics. Before predicting the future, understand the past. What percentage of grants in each programme met their targets? Are there patterns by organisation size, geography, or grant amount? Simple cross-tabulations often reveal insights that no one had noticed.
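
The cross-tabulations described above need nothing more than a spreadsheet, but a few lines of code do the same job. The size bands and figures below are invented for the sketch:

```python
# Sketch: success rate by organisation size band (synthetic records).
from collections import defaultdict

grants = [
    {"size_band": "small", "met_targets": True},
    {"size_band": "small", "met_targets": True},
    {"size_band": "small", "met_targets": False},
    {"size_band": "large", "met_targets": True},
    {"size_band": "large", "met_targets": False},
    {"size_band": "large", "met_targets": False},
]

totals, successes = defaultdict(int), defaultdict(int)
for g in grants:
    totals[g["size_band"]] += 1
    successes[g["size_band"]] += g["met_targets"]  # True counts as 1

rates = {band: successes[band] / totals[band] for band in totals}
for band, rate in rates.items():
    print(f"{band}: {rate:.0%} of grants met targets")
```

Even a two-way breakdown like this can surface patterns — say, a gap between small and large grantees — worth investigating before any modelling begins.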

Step 4: Build simple risk models. A basic risk scoring model — even one built in a spreadsheet — that combines a few key indicators (organisation financial health, prior delivery track record, project complexity) can meaningfully improve monitoring decisions without any machine learning.
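
A spreadsheet-style score of this kind might be sketched as follows. The weights, thresholds, and tier boundaries are illustrative assumptions, not sector benchmarks — each funder would calibrate them against their own portfolio:

```python
# Sketch: a simple additive risk score combining three indicators.
# Weights and thresholds are illustrative assumptions only.
def risk_score(months_reserves: float, grants_delivered: int, complexity: int) -> int:
    """Return a 0-100 score; higher means higher delivery risk."""
    score = 0
    if months_reserves < 3:        # thin financial reserves
        score += 40
    if grants_delivered == 0:      # no prior delivery track record
        score += 35
    score += complexity * 5        # project complexity rated 1-5 by assessors
    return min(score, 100)

def monitoring_tier(score: int) -> str:
    """Translate the score into a proportionate monitoring level."""
    return "enhanced" if score >= 60 else "standard" if score >= 30 else "light"

s = risk_score(months_reserves=1.5, grants_delivered=0, complexity=4)
print(s, monitoring_tier(s))  # a first-time, low-reserve, complex project scores high
```

The point is not the specific numbers but the discipline: making monitoring decisions against explicit, recorded criteria rather than ad hoc judgement.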

Step 5: Use technology that does the heavy lifting. Tools like Plinth centralise grant data, standardise reporting, and provide funder dashboards with configurable KPIs and outcome tracking across a portfolio. When your data is already structured and consistent, analytics become a natural next step rather than a separate project. Plinth's funder measures feature lets you define shared outcomes and track them across all funded organisations in real time, giving you the clean, standardised data that predictive models require.

| Maturity level | What it looks like | Typical tools |
| --- | --- | --- |
| Level 1: Ad hoc | Data in spreadsheets, inconsistent formats, manual aggregation | Excel, email |
| Level 2: Structured | Standardised forms and reporting, centralised database | Grant management software (e.g. Plinth) |
| Level 3: Descriptive | Portfolio dashboards, trend analysis, outcome summaries | BI tools, platform dashboards |
| Level 4: Predictive | Risk scoring, outcome forecasting, demand modelling | Statistical tools, ML models |
| Level 5: Prescriptive | Automated recommendations, adaptive monitoring, strategic simulation | Advanced analytics platforms |

Most funders are at Level 1 or 2. Moving from Level 1 to Level 2 — getting your data standardised and centralised — delivers more value than jumping straight to machine learning.


What Are the Risks and Ethical Considerations?

Predictive analytics in grantmaking carries genuine ethical risks that must be managed actively, not treated as theoretical.

Bias amplification. If historical data reflects biased decision-making — for example, if certain types of organisations or communities were systematically underfunded in the past — a model trained on that data will perpetuate and potentially amplify those patterns. IBM's research on algorithmic bias confirms that models trained on skewed data can entrench discrimination (IBM, 2024). Funders must audit their historical data for systematic biases before using it to train predictive models.

False precision. A model that assigns an "82% success probability" to a grant application creates an illusion of precision that the underlying data may not support. Grant outcomes are influenced by factors that no model can capture: leadership changes, community dynamics, policy shifts, or simple luck. Predictions should always be presented as ranges or risk categories rather than exact figures.
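
One simple guard against false precision is to translate raw probabilities into coarse bands before anyone sees them. The band boundaries and labels below are illustrative assumptions; a funder would set them to suit their own risk appetite:

```python
# Sketch: presenting model output as a risk band rather than a point estimate.
# Boundaries and labels are illustrative assumptions only.
def risk_category(probability: float) -> str:
    """Map a predicted success probability onto a coarse band."""
    if probability >= 0.75:
        return "likely to deliver"
    if probability >= 0.50:
        return "moderate risk"
    return "higher risk — review with programme officer"

print(risk_category(0.82))  # reported as a band, never as "82%"
```

Three bands is usually enough: finer gradations imply a precision the underlying data rarely supports.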

Accountability gaps. When a funder declines an application or intensifies monitoring based on a predictive score, who is accountable for that decision? The answer must always be a person, not an algorithm. The Centre for Data Ethics and Innovation's guidance is clear: AI tools should support human decision-makers, not replace them.

Data burden on grantees. Feeding predictive models requires data, and that data ultimately comes from the organisations you fund. IVAR's Better Reporting principles emphasise that funders should only collect information they will actually use and ensure reporting requirements are proportionate to the grant size (IVAR, 2024). Predictive analytics should reduce reporting burden — by enabling risk-based monitoring — not increase it.

Transparency. If you use predictive scores in your decision-making, applicants and grantees have a right to understand how those scores work. Explainable models (such as logistic regression or decision trees) are preferable to black-box approaches, especially in a sector built on trust.


How Does Predictive Analytics Fit Into Trust-Based Philanthropy?

A legitimate concern is that predictive analytics might pull funders in the opposite direction from trust-based philanthropy, which emphasises relational grantmaking, reduced power imbalances, and lighter-touch reporting.

The two approaches are not inherently contradictory. Done well, predictive analytics can actually support trust-based principles:

  • Reducing reporting burden. If a predictive model identifies a grant as low-risk based on the organisation's track record and early indicators, the funder can justify lighter monitoring — exactly what trust-based approaches advocate. Risk-proportionate monitoring means fewer reporting requirements for organisations that have demonstrated reliability.

  • Improving equity. Analytics can reveal patterns that intuition misses. If data shows that smaller, community-led organisations actually deliver comparable or better outcomes than larger charities, that evidence can challenge unconscious biases in decision-making and redirect resources towards overlooked groups.

  • Supporting, not gatekeeping. The most valuable application of predictive analytics is not in application screening (which risks excluding promising but unconventional proposals) but in post-award monitoring — identifying which grants might benefit from additional support before problems escalate.

The key distinction is purpose. Using predictions to exclude organisations from funding is gatekeeping. Using predictions to allocate support and adjust monitoring intensity is stewardship. Funders should be explicit about which approach they are taking.


What Can We Learn from Funders Already Using Predictive Approaches?

Several organisations offer instructive examples, though full-scale predictive analytics in UK grantmaking remains relatively uncommon.

Open Philanthropy's forecasting programme. Since 2024, Open Philanthropy staff make probabilistic predictions about grant outcomes — for example, "I am 70% confident the grantee will achieve milestone one within one year" — and track their accuracy over time. This approach treats prediction as a skill that improves with practice and feedback, rather than a one-off technical exercise (Open Philanthropy, 2024).

360Giving and cross-sector analysis. With over 300 publishers and one million grants in the open data ecosystem, 360Giving enables analysis that no individual funder could achieve. Researchers can examine which grant characteristics correlate with outcomes across the entire UK philanthropic sector, creating benchmarks that individual funders can use to assess their own portfolios.

Community foundations and local data. Several UK community foundations combine grants data with local deprivation indices, demographic data from ONS, and beneficiary outcome measures to identify underserved areas and allocate resources accordingly. This is a form of predictive analytics — using data to forecast where need is greatest and where interventions are most likely to have impact.

Plinth's funder portfolio tools. Platforms like Plinth give funders real-time dashboards that track KPIs, outcome measures, and programme delivery data across their entire portfolio. The system's shared outcome measures and configurable reporting allow funders to compare performance across funded organisations using consistent data — the foundation on which any predictive approach depends. Plinth also includes AI-powered report generation that can synthesise portfolio data into narrative insights, turning raw numbers into actionable intelligence for boards and trustees.


How Should You Govern Predictive Analytics in Your Organisation?

Adopting predictive analytics requires governance structures that most foundations do not yet have. The following framework provides a starting point.

Establish data ownership and quality standards. Designate someone responsible for data quality across your grant management system. Define which fields are mandatory, how outcomes are categorised, and how data is validated. Without this, any model you build will be unreliable.

Create an ethics review process. Before deploying any predictive model in decision-making, review it for bias, transparency, and proportionality. This does not require a formal ethics committee — a standing agenda item at programme team meetings may suffice — but it does require someone asking the right questions.

Document model assumptions and limitations. Every predictive model makes assumptions. Document what they are, what data the model was trained on, and what it cannot account for. Share this documentation with anyone who uses model outputs in their work.

Set review cycles. Models degrade over time as the world changes. A model trained on pre-pandemic data may not predict post-pandemic outcomes accurately. Set a regular review cycle — annually at minimum — to check whether your model's predictions are still calibrated.
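
A calibration check of this kind can be very simple: bucket past predictions by band and compare the average predicted probability with the observed success rate in each bucket. The figures below are synthetic, and the band boundaries are an assumption for the sketch:

```python
# Sketch: an annual calibration check comparing predicted probabilities
# with observed outcomes, bucketed by prediction band. Synthetic figures.
def calibration_table(predictions, outcomes,
                      bands=((0.0, 0.5), (0.5, 0.75), (0.75, 1.01))):
    """For each band, compare mean predicted probability with the observed rate."""
    rows = []
    for lo, hi in bands:
        in_band = [(p, o) for p, o in zip(predictions, outcomes) if lo <= p < hi]
        if not in_band:
            continue
        mean_pred = sum(p for p, _ in in_band) / len(in_band)
        observed = sum(o for _, o in in_band) / len(in_band)
        rows.append((f"{lo:.2f}-{hi:.2f}", len(in_band), mean_pred, observed))
    return rows

preds = [0.9, 0.8, 0.85, 0.6, 0.55, 0.3, 0.2]   # past model predictions
actuals = [1, 1, 0, 1, 0, 0, 1]                  # 1 = grant met its outcomes
for band, n, mean_pred, observed in calibration_table(preds, actuals):
    print(f"{band}: n={n}, predicted {mean_pred:.0%}, observed {observed:.0%}")
```

If the observed rate in a band drifts well away from the mean prediction, the model needs retraining — or retiring.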

Maintain human override. No predictive score should be the sole basis for a funding decision. Ensure that programme officers can override model recommendations, and record the reasons when they do. These overrides are valuable data in themselves — they tell you where the model is missing something.


Frequently Asked Questions

Do we need a data scientist to use predictive analytics?

Not to start. The most impactful first steps — standardising data collection, defining outcome measures, and conducting basic portfolio analysis — are organisational tasks, not technical ones. Grant management platforms like Plinth handle data structuring and dashboard analytics without requiring specialist skills. If you later want to build custom machine learning models, you may need data science expertise, but most funders can extract substantial value from structured, consistent data analysis alone.

How much historical data do we need before predictive models are useful?

There is no fixed threshold, but generally you need at least 100-200 completed grants with consistent outcome data for a simple model to identify meaningful patterns. Quality matters more than quantity: 150 grants with standardised outcome measures and reliable data will produce better predictions than 1,000 grants where "success" was defined differently each time.

Will predictive analytics replace human judgement in grantmaking?

No. Predictive models identify patterns in historical data, but grant decisions involve contextual factors — community need, strategic priorities, relationship history, innovation potential — that no model can fully capture. The purpose is to inform human decision-makers, not to automate decisions. Open Philanthropy's approach is illustrative: their staff make predictions themselves, treating forecasting as a skill rather than delegating it to algorithms.

What about bias in historical grantmaking data?

This is a genuine and important concern. If your past funding patterns reflected biases — geographical, organisational type, demographic — a model trained on that data will reproduce those biases. Mitigation strategies include auditing training data for representation, testing model outputs across different groups, and using demographic-aware fairness checks. Importantly, awareness of bias in your historical data is itself a valuable insight that should inform your strategy.

Can predictive analytics help with due diligence?

Yes. Predictive risk scoring can complement traditional due diligence checks by flagging applications or grants that warrant closer examination. For example, a model might identify that grants to organisations with certain financial characteristics are more likely to encounter delivery problems, prompting more detailed financial review at the application stage.

Is this relevant for small funders with limited portfolios?

Even small funders can benefit from the principles behind predictive analytics — standardising data, defining outcomes clearly, and looking for patterns in past performance — without needing formal models. If you manage fewer than 50 grants, a structured review of your completed grants using a simple spreadsheet analysis may be more practical and just as insightful as a predictive model.

How does predictive analytics relate to impact measurement?

Predictive analytics and impact measurement are complementary. Impact measurement tells you whether grants achieved their outcomes. Predictive analytics uses that outcome data to forecast which future grants are likely to succeed. Without robust impact measurement producing reliable outcome data, predictive models have nothing to learn from.

Does Plinth support predictive analytics for funders?

Plinth provides the data infrastructure that makes analytics possible: standardised grant tracking, shared outcome measures, real-time KPI dashboards, and AI-powered reporting across your entire portfolio. While Plinth is not a standalone predictive modelling tool, its structured data and funder dashboards give you the clean, consistent dataset that any predictive approach requires. Plinth has a free tier, making it accessible for funders at any stage of their data journey.



Last updated: February 2026