How AI Can Support Trust-Based Philanthropy Without Undermining It
Practical ways funders can use AI to reduce burden, improve equity and strengthen grantee relationships while keeping trust, transparency and power-sharing central.
Trust-based philanthropy asks funders to share power, reduce burden, and centre relationships. AI, by contrast, is often associated with automation, efficiency, and scale. On the surface, these two movements in the funding sector look like they are pulling in opposite directions. One says "trust people more"; the other says "let machines do more." The tension is real, but it is also resolvable — and funders who navigate it well will find that AI can become one of the most practical tools for delivering on trust-based commitments.
The Trust-Based Philanthropy Project, founded by the Whitman Institute in 2014, identifies six core practices: give multi-year unrestricted funding, do the homework, simplify and streamline paperwork, be transparent and responsive, solicit and act on feedback, and support beyond the cheque. In the UK, IVAR's Open and Trusting initiative has translated similar principles into eight commitments that over 140 funders have now signed up to, collectively making grants worth over £1 billion in 2023-24 (IVAR, 2025). The direction of travel is clear: the sector wants less burden, more flexibility, and deeper relationships.
The question is not whether AI belongs in this picture, but how to introduce it without recreating the very problems trust-based philanthropy was designed to solve. This guide explores where AI genuinely helps, where it risks doing harm, and what practical safeguards funders should put in place.
What is trust-based philanthropy?
Trust-based philanthropy is an approach to funding that seeks to redistribute power from funders to the organisations they support. Rather than treating grant recipients as vendors who must prove their worth through extensive applications and reports, it positions them as partners whose expertise and judgement should be trusted.
The concept was formalised by the Whitman Institute in San Francisco and has since become a global movement. The Trust-Based Philanthropy Project defines it as "a set of values that help advance equity, shift power and build mutually accountable relationships" (Trust-Based Philanthropy Project). In practice, it translates into six grantmaking practices: providing multi-year unrestricted funding, doing homework before asking applicants to do theirs, simplifying paperwork, being transparent, soliciting feedback, and offering non-financial support.
In the UK, the movement has taken a distinctive shape through IVAR's Open and Trusting initiative. Launched in the wake of COVID-19 — when many funders temporarily relaxed their requirements and discovered that the sky did not fall — it asks funders to commit to eight principles including transparent communication, flexible funding, light-touch reporting, and proportionate processes. Over 1,200 charities report that an open and trusting approach enables them to deliver better for the communities and causes they serve (IVAR Funding Experience Survey).
The underlying logic is straightforward. Charities know their communities better than funders do. Excessive compliance requirements consume resources that should reach beneficiaries. And the power imbalance inherent in the funder-grantee relationship can distort behaviour, encouraging organisations to tell funders what they want to hear rather than what is actually happening.
Why the tension between AI and trust matters
The concern that AI might undermine trust-based philanthropy is not theoretical. It reflects genuine risks that funders need to take seriously before introducing any technology into their grantmaking processes.
The first risk is surveillance. Trust-based philanthropy explicitly calls for lighter monitoring. If AI is used to scrape grantee websites, analyse social media, or cross-reference data from multiple sources, it can create a form of monitoring that is more comprehensive and more intrusive than any paper report — even if no additional reporting burden is placed on the grantee. The grantee may not even know it is happening, which violates the transparency principle at the heart of trust-based practice.
The second risk is bias amplification. AI systems trained on historical grantmaking data will inevitably reflect historical patterns — including the systemic under-funding of organisations led by people from marginalised communities. Research published in Synthese in 2025 examined the methodological and ethical challenges of using AI in grant review, warning that AI tools "reflect and reinforce existing biases in datasets" (Synthese, 2025). If an AI model learns from a dataset where larger, more established organisations have historically received more funding, it will tend to rate those types of organisations more favourably — precisely the pattern trust-based philanthropy seeks to break.
The third risk is depersonalisation. Trust-based approaches depend on genuine human relationships. If a funder replaces programme officer conversations with AI-generated feedback, or uses an algorithm to decide which grantees warrant closer engagement, the relational dimension is lost. The efficiency gain comes at the cost of the connection that makes trust possible.
These risks are real, but they are not inherent to AI itself. They are design choices. And the alternative — refusing to use any technology — has its own costs, as the next section explores.
Where AI genuinely reduces burden on grantees
The strongest case for AI in trust-based philanthropy is also the simplest: it can reduce the administrative burden that trust-based approaches have long argued is too high. Research published by Plinth found that UK charities spend an estimated 15.8 million hours per year on funder reporting alone. Small charities with income under £100,000 spend 38% of their total grant income on fundraising applications (UK Fundraising, 2022). One in eight charities report spending three or more working days per week on grant applications (UK Fundraising, 2020).
AI can address this burden in several concrete ways:
Auto-filling applications from existing documents. Rather than requiring an applicant to retype information they have already written for another funder, AI can extract relevant content from previous applications, impact reports, or organisational documents and suggest answers for a new form. The applicant reviews and edits the suggestions rather than starting from scratch. This directly aligns with the trust-based principle of "doing the homework" — the technology does some of the work so the applicant does not have to. A minimal sketch of this matching step appears after this list.
Providing feedback before submission. AI-powered feedback tools can review draft application answers and suggest improvements using a Socratic approach — asking guiding questions like "Could you give a specific example of impact you have achieved with this group?" rather than rewriting the answer. This helps smaller organisations without professional bid writers compete on a more level footing with those that have specialist fundraising staff.
Generating reports from existing data. Instead of requiring grantees to compile bespoke reports, AI can pull together programme data, outcomes, and narrative into a structured report. The grantee reviews and approves the output rather than spending days assembling it from scratch.
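To make the auto-fill idea concrete, here is a minimal sketch of the matching step, assuming previous applications are stored as question-and-answer pairs. It uses simple string similarity from Python's standard library; a production tool would likely use semantic matching instead, and names like `suggest_answers` are illustrative assumptions rather than any particular product's API. The point is the pattern: the tool drafts, the applicant reviews.

```python
from difflib import SequenceMatcher

# Previous applications stored as question -> answer pairs. In practice
# these might be parsed from documents; hard-coded here for illustration.
previous_answers = {
    "What is your organisation's mission?":
        "We support young carers in Leeds through peer groups.",
    "How many people did you reach last year?":
        "Around 450 young people across 12 schools.",
}

def suggest_answers(new_questions, prior, threshold=0.6):
    """Suggest draft answers for a new form by matching each question
    to the most similar previously answered question. Every suggestion
    is a draft for the applicant to review, never a final answer."""
    suggestions = {}
    for question in new_questions:
        best_score, best_answer = 0.0, None
        for prior_q, prior_a in prior.items():
            score = SequenceMatcher(None, question.lower(),
                                    prior_q.lower()).ratio()
            if score > best_score:
                best_score, best_answer = score, prior_a
        # Only suggest when the match is strong; leaving a field blank
        # is better than guessing on an applicant's behalf.
        if best_score >= threshold:
            suggestions[question] = {"draft": best_answer,
                                     "confidence": round(best_score, 2),
                                     "status": "needs applicant review"}
        else:
            suggestions[question] = None
    return suggestions

new_form = ["What is your organisation's mission?",
            "Describe your safeguarding policy."]
for q, s in suggest_answers(new_form, previous_answers).items():
    print(q, "->", s)
```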
The key principle is that AI should serve the grantee, not just the funder. When the technology makes life easier for the people applying for and managing grants, it directly supports trust-based practice. When it only makes life easier for the funder's back office, it may be efficient but it is not trust-based.
How AI can make assessment fairer, not less fair
One of the strongest arguments for using AI in grant assessment is consistency. Human reviewers are subject to fatigue, anchoring bias, halo effects, and unconscious preferences. A reviewer reading the fiftieth application of the day will not give it the same attention as the first. An AI system, by contrast, can apply the same criteria to every application with the same level of attention.
This matters for equity. Trust-based philanthropy aims to level the playing field for smaller and minority-led organisations. But if the assessment process itself is inconsistent — if the quality of review depends on which staff member happens to read the application, or what time of day they read it — then the process is unfair regardless of the values behind it.
AI-assisted assessment works best when it is explicitly designed as a support tool, not a decision-maker. The funder defines the criteria. The AI analyses each application against those criteria and provides a structured summary. The programme officer reviews the summary, adds their own judgement, and makes the final decision. At no point does the AI approve or reject an application.
| Approach | Who decides | Role of AI | Risk of bias | Burden on grantee |
|---|---|---|---|---|
| Traditional manual review | Staff or panel | None | Varies with reviewer fatigue, unconscious bias | High — extensive applications |
| AI-only scoring | Algorithm | Final decision | High — encodes historical patterns | Low but impersonal |
| AI-assisted human review | Staff or panel, informed by AI | Summarise, flag, consistency-check | Lower — consistent criteria, human override | Reduced — smarter forms, auto-fill |
| Trust-based with no AI | Staff through conversation | None | Varies — depends on relationship quality | Lowest on paper, highest on staff time |
The third option — AI-assisted human review — is the approach most compatible with trust-based values. It uses technology to handle the mechanical work (reading, summarising, checking for completeness) while preserving human judgement for the relational and contextual work that matters most.
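As a minimal sketch of that human-in-the-loop pattern, the record below separates advisory AI output from the decision itself, which only a named human can write, with a rationale. The field and method names are illustrative assumptions, not a reference to any specific grant-management system.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ApplicationReview:
    applicant: str
    # Advisory outputs from the AI step: a structured summary and any
    # flags for the human reviewer to check. These inform, never decide.
    ai_summary: str = ""
    ai_flags: list = field(default_factory=list)
    # Decision fields can only be set by a named human reviewer.
    decision: Optional[str] = None
    decided_by: Optional[str] = None
    rationale: str = ""

    def record_decision(self, reviewer: str, decision: str, rationale: str):
        """The only way a decision enters the record: a named human,
        with a written rationale. The AI has no equivalent method."""
        if decision not in ("approve", "decline", "refer to panel"):
            raise ValueError("unknown decision")
        if not rationale.strip():
            raise ValueError("a human rationale is required")
        self.decided_by = reviewer
        self.decision = decision
        self.rationale = rationale

review = ApplicationReview(
    applicant="Leeds Young Carers",
    ai_summary="Meets all eligibility criteria; budget is clear.",
    ai_flags=["No safeguarding policy attached"],
)
review.record_decision(
    reviewer="A. Programme Officer",
    decision="refer to panel",
    rationale="Strong fit, but the safeguarding document needs follow-up.",
)
print(review.decision, "-", review.decided_by)
```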
Practical safeguards for ethical AI use in grantmaking
Introducing AI into a trust-based grantmaking process requires deliberate safeguards. The following principles draw on both the Trust-Based Philanthropy Project's values and emerging best practice from funders already using AI tools.
Be transparent about what AI does. Tell applicants and grantees which parts of your process use AI, what data the AI sees, and how its outputs are used. This is not optional in a trust-based framework. Transparency is a core commitment, and it extends to technology. If your AI system reviews applications, say so in your guidance notes. If it generates feedback, label it clearly.
Never let AI make final funding decisions. AI should inform decisions, not make them. The human-in-the-loop principle is essential. Programme officers, panels, and trustees should retain full authority to accept, modify, or reject AI recommendations. This is consistent with IVAR's commitment to meaningful engagement and ACF's position on accountability.
Audit for bias regularly. Run your AI assessments against historical data and check whether they would have systematically disadvantaged particular types of organisations. Look specifically at organisation size, geography, ethnicity of leadership, and topic area. If patterns emerge, investigate and adjust. The Foundation Practice Rating, which assesses 100 UK foundations on diversity, accountability and transparency (Foundation Practice Rating), provides a useful framework for this kind of self-examination. A minimal audit sketch follows these safeguards.
Give grantees control. AI features should be optional for applicants. Some organisations may prefer to complete forms without AI assistance. Others may not trust it. Respecting that choice is itself a trust-based practice. Similarly, if AI is used to summarise or analyse grantee reports, the grantee should be able to see and respond to the AI's interpretation.
Protect data. AI processing should not involve storing applicant data beyond what is needed for the specific task. Data should not be used to train models. Transmissions should be encrypted. These are baseline data protection requirements, but they are especially important in a trust-based context where grantees need to feel confident that sharing information will not be used against them.
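Returning to the bias-audit safeguard above: a first-pass audit can be as simple as comparing AI-assisted scores across organisation size bands. The sketch below uses invented scores purely for illustration; the same grouping can be repeated for geography, leadership demographics, or topic area.

```python
from statistics import mean

# Invented example data: (annual income band, AI-assisted score).
scored = [
    ("under £100k", 62), ("under £100k", 58), ("under £100k", 71),
    ("£100k-£1m", 74), ("£100k-£1m", 69), ("£100k-£1m", 77),
    ("over £1m", 81), ("over £1m", 76), ("over £1m", 79),
]

def score_gap_by_band(rows):
    """Group scores by income band and report the gap between the
    highest- and lowest-scoring bands. A persistent gap is a prompt
    to investigate, not proof of bias on its own."""
    bands = {}
    for band, score in rows:
        bands.setdefault(band, []).append(score)
    averages = {band: mean(scores) for band, scores in bands.items()}
    gap = max(averages.values()) - min(averages.values())
    return averages, gap

averages, gap = score_gap_by_band(scored)
for band, avg in averages.items():
    print(f"{band}: mean score {avg:.1f}")
print(f"gap between bands: {gap:.1f} points")
```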
What trust-based funders are actually doing with technology
The movement toward trust-based practice in the UK is accelerating. ACF's Foundations in Focus report shows that UK charitable foundations increased their grantmaking to a record £8.24 billion in 2023-24, with spending growing by over 6% in real terms (ACF, 2025). But the same report found that application volumes at many foundations surged by 50-60%, and in some cases by 100-400% — partly driven by AI tools lowering the barrier to applying.
This creates a paradox. Trust-based practice says: make it easy to apply. But if making it easy leads to a flood of applications, the funder's capacity to build relationships with applicants is overwhelmed. The solution is not to make applications harder again, but to use technology on the funder side to manage volume without sacrificing quality of engagement.
Leading trust-based funders are using technology in several ways:
Simplified forms with conditional logic. Rather than asking every applicant every question, forms branch based on grant size, organisational type, or programme area. This is a design decision, not an AI feature, but it reduces burden proportionately.
AI-assisted assessment to handle volume. When application volumes double, funders face a choice: take longer to respond, hire more staff, or use technology to help. AI can summarise applications, flag areas needing follow-up, and provide structured assessments that reviewers can work from rather than reading every application from scratch.
AI-generated feedback to build capacity. Instead of a generic rejection letter, AI can help generate specific, constructive feedback for unsuccessful applicants — explaining what was strong, what was missing, and suggesting how to strengthen future applications. This aligns directly with the trust-based principle of transparency and responsiveness. A sketch of this pattern follows the list.
Light-touch reporting with AI analysis. Rather than requiring lengthy structured reports, some funders now accept short narrative updates and use AI to extract themes, outcomes, and questions for follow-up conversation. The reporting burden falls but the quality of insight rises.
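As a sketch of the feedback pattern described above (and of the Socratic pre-submission feedback mentioned earlier), the snippet below builds a constrained prompt and labels the output as a draft for human review. The `call_llm` function is a hypothetical stand-in, not a real API; swap in whichever model client you actually use.

```python
FEEDBACK_PROMPT = """You are helping a charitable funder give constructive
feedback to an unsuccessful grant applicant. Using the assessment notes
below, write feedback that: names specific strengths, explains what was
missing against the published criteria, and asks one or two Socratic
questions the applicant could address next time. Do not invent facts
that are not in the notes.

Assessment notes:
{notes}
"""

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever language-model API a funder
    uses. This stub just returns a placeholder so the sketch runs."""
    return "(model response would appear here)"

def draft_feedback(notes: str) -> str:
    draft = call_llm(FEEDBACK_PROMPT.format(notes=notes))
    # Trust-based guardrail: the draft is clearly labelled and must be
    # reviewed and edited by a programme officer before it is sent.
    return "DRAFT - AI-assisted, requires human review\n\n" + draft

print(draft_feedback(
    "Clear community need; budget realistic; no evidence of "
    "beneficiary involvement in project design."
))
```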
Tools like Plinth support this approach from both sides. On the applicant side, AI-powered auto-fill lets organisations populate application forms from existing documents, and smart feedback helps them strengthen answers before submission. On the funder side, AI generates structured assessment summaries, checks due diligence documents, and suggests follow-up questions — all of which free programme officers to spend more time on the conversations and relationships that trust-based practice depends on. Plinth's AI assistant, Pippin, provides transparent reasoning with every suggestion, and all AI-generated content is clearly labelled and fully editable. Crucially, funders can enable or disable AI features per form, giving them full control over where automation is and is not appropriate. Plinth also offers a free tier, making these tools accessible to smaller funders beginning their trust-based journey.
How to align AI adoption with trust-based values
For funders who want to introduce AI without compromising their trust-based commitments, the following framework offers a practical path forward.
Step 1: Start with burden reduction for grantees, not efficiency for your team. The first AI features you introduce should make life easier for the people you fund. Auto-fill, smart feedback, and simplified reporting all pass the benefit directly to the grantee. Only after those are in place should you consider AI for your own internal processes.
Step 2: Involve grantees in the design. Ask grantees what parts of your process they find most burdensome. Use that feedback to decide where AI can help. This mirrors the trust-based principle of soliciting and acting on feedback. If grantees say your application form is too long, AI-assisted form design might help you ask fewer, better questions. If they say reporting takes too long, AI-generated reports from existing data might be the answer.
Step 3: Pilot with transparency. When you introduce a new AI feature, tell your grantees what you are doing and why. Invite feedback. Be prepared to adjust. This is not just good practice — it models the vulnerability and power-consciousness that the Trust-Based Philanthropy Project identifies as essential to the approach.
Step 4: Measure burden, not just outcomes. Add "burden on grantee" as a metric alongside your impact data. Track how long applications take to complete, how many questions your forms ask, how much time grantees spend on reporting. If your AI adoption is not reducing these numbers, it is not serving trust-based goals. A sketch of this kind of tracking follows these steps.
Step 5: Review and audit. Set a schedule for reviewing how AI is being used and whether it is producing equitable outcomes. Check whether smaller organisations, newer organisations, or organisations led by people from underrepresented communities are being assessed differently. Adjust your approach based on what you find.
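A minimal sketch of the burden tracking in Step 4, using invented before-and-after numbers: the exact metrics will differ by funder, but the comparison logic is the same.

```python
# Invented sample data: grantee burden before and after an
# AI-assisted process redesign.
before = {"questions_on_form": 42,
          "median_completion_hours": 9.0,
          "reporting_hours_per_year": 30.0}
after = {"questions_on_form": 18,
         "median_completion_hours": 3.5,
         "reporting_hours_per_year": 8.0}

def burden_report(before, after):
    """Print each burden metric with its percentage change. If these
    numbers are not falling, the AI adoption is not serving
    trust-based goals."""
    for metric, b in before.items():
        a = after[metric]
        change = (a - b) / b * 100
        print(f"{metric}: {b} -> {a} ({change:+.0f}%)")

burden_report(before, after)
```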
The role of process design — why it matters more than algorithms
It is tempting to focus on the technology when discussing AI in grantmaking. But the most important decisions in trust-based philanthropy are process decisions, not technology decisions. No AI system can compensate for a badly designed grant process.
If your application form asks 40 questions for a £5,000 grant, AI might help applicants fill it in faster — but the proportionality problem remains. If your reporting template demands quarterly data broken down by ward, AI might help grantees compile it — but you are still asking for more than you need. If your assessment criteria implicitly favour organisations with professional fundraising staff, AI will apply those criteria more consistently but the bias is still baked in.
Trust-based process design means asking hard questions before any technology is introduced:
- Do we need this information at application stage, or could we collect it after the grant is awarded?
- Is this question for our benefit or the applicant's?
- Could we accept information the applicant already has, rather than requiring them to create something new?
- Are our criteria genuinely relevant to the outcomes we want, or are they proxies for organisational maturity?
- Are we asking for this data because we will use it, or because we have always asked for it?
The most impactful thing a funder can do is simplify and redesign their process. AI then becomes a tool that makes a good process work even better, rather than a sticking plaster over a bad one. As IVAR's commitments emphasise, testing application forms for clarity, relevance and avoiding repetition is a human design task that technology can support but not replace.
FAQs
Does using AI in grantmaking contradict trust-based philanthropy?
Not necessarily. AI contradicts trust-based values when it is used to increase surveillance, automate decisions, or add complexity. It supports trust-based values when it reduces burden on grantees, improves consistency, and frees staff time for relationship-building. The design choices matter far more than the technology itself.
Can AI help smaller organisations compete for grants?
Yes. AI-powered tools like auto-fill from existing documents and smart feedback on draft answers help level the playing field between organisations with professional bid writers and those without. When all applicants have access to the same AI support, the advantage shifts toward the quality of the work rather than the quality of the application writing.
How do funders ensure AI does not introduce bias into grant assessment?
By using AI to apply clearly defined criteria consistently rather than to make independent judgements, and by auditing outcomes regularly. Funders should check whether AI-assisted assessments produce different outcomes for different types of organisations and investigate any patterns. Human reviewers should always have the final decision.
Should funders tell applicants they are using AI?
Yes. Transparency is a core principle of trust-based philanthropy. Funders should be clear about which parts of their process use AI, what data the AI sees, and how its outputs influence decisions. This builds trust and allows applicants to make informed choices about how they engage.
What is the difference between IVAR's Open and Trusting approach and the Trust-Based Philanthropy Project?
Both share similar values around reducing burden, increasing transparency, and shifting power. The Trust-Based Philanthropy Project originated in the US and focuses on six grantmaking practices. IVAR's Open and Trusting initiative is UK-based, built on eight commitments developed in partnership with charities, and has been adopted by over 140 UK funders. The frameworks are complementary and mutually reinforcing.
Can AI replace the need for programme officer relationships?
No. AI can handle administrative and analytical tasks, but it cannot build trust, understand context, or exercise the kind of nuanced judgement that relationships require. The goal is for AI to reduce the time programme officers spend on paperwork so they can spend more time on conversations.
Is it possible to use AI for reporting without increasing surveillance?
Yes. The key is to use AI to analyse data that grantees choose to share, not to scrape or collect data independently. AI should make reporting easier for the grantee — for example, by generating draft reports from programme data — rather than creating new forms of monitoring that operate without the grantee's knowledge or consent.
What should a small funder do first if they want to adopt trust-based AI practices?
Start with the process, not the technology. Review your application form and reporting requirements for proportionality. Simplify where you can. Then consider AI tools that reduce burden for applicants — auto-fill and smart feedback are good starting points. Platforms like Plinth offer a free tier that makes this accessible even for small foundations.
Recommended next pages
- Reducing the Burden on Grant Applicants — Practical strategies for proportionate grantmaking that complements trust-based approaches
- AI for Funders: The Future of Grantmaking — A broader look at how AI is changing grantmaking across the sector
- Ethical Considerations in AI Grantmaking — Deeper exploration of bias, fairness, and responsible AI use in funding decisions
- How Charities Experience the Application Process — The grantee perspective on what works and what does not
- Human-in-the-Loop Grantmaking — Why keeping humans at the centre of AI-assisted decisions matters
Last updated: February 2026