AI Strategy for Charity Trustees: A Balanced Guide to Opportunities, Risks, and Governance
A practical guide for charity trustees on AI strategy, governance, risks, and responsible adoption. What boards need to know in 2026.
AI has moved from a peripheral topic to a standing item on many charity board agendas. Yet for most trustees, the conversation still feels uncertain. For the first time, the 2025 refresh of the Charity Governance Code explicitly addressed the need for boards to consider AI and technology risks, reflecting growing awareness that many trustees feel unclear about their governance responsibilities. The gap between the pace of AI adoption in the sector and trustee readiness to oversee it is significant — and growing.
This is not a technology problem. It is a governance problem. Trustees do not need to understand how large language models work or what a neural network does. They need to understand what AI means for their organisation's risk profile, their data responsibilities, their staff, and their beneficiaries. They need to ask the right questions, set appropriate guardrails, and ensure that any AI adoption aligns with the charity's mission and values.
The good news is that this is not uncharted territory. The principles of good governance — duty of care, prudent management, informed decision-making — apply to AI just as they apply to any other strategic decision. What trustees need is a framework for applying those principles to a technology that most of them have not personally used in a professional context.
This guide provides that framework. It covers the opportunities AI creates for charities, the risks trustees must manage, what a proportionate AI governance approach looks like, and how to move from uncertainty to informed confidence — without needing a computer science degree.
What you will learn
- What trustees actually need to know about AI (and what they can safely ignore)
- The key risks AI presents for charities and how to govern them proportionately
- How to develop an AI strategy and policy appropriate to your organisation's size and complexity
- What questions trustees should be asking their executive teams right now
- How other charities are adopting AI in practice
Who this is for
- Charity trustees and board members hearing about AI and wanting a balanced perspective
- Chairs who need to lead board-level conversations about AI strategy
- CEOs and directors preparing AI briefings or proposals for their boards
- Governance professionals advising charity boards on technology adoption
What Do Charity Trustees Actually Need to Know About AI?
Trustees do not need to understand the technical mechanics of AI. They need to understand four things.
First, what AI is being used for in your organisation. This includes both formal tools adopted by the charity and informal use by staff (such as employees using ChatGPT for drafting). The Charity Digital Skills Report 2025 found that 76% of UK charities were using AI tools in their day-to-day work, though many were still in the piloting and exploring stage rather than using AI strategically.
Second, what data is involved. AI systems process data. If that data includes beneficiary information, personal details, safeguarding records, or financial data, trustees need to understand where that data goes, who has access, and whether the charity's data processing complies with UK GDPR and the Data Protection Act 2018. The Information Commissioner's Office (ICO) issued updated AI guidance in 2025 that specifically addresses the voluntary sector.
Third, what decisions AI is influencing. There is a significant difference between using AI to draft a grant application (low risk, human-reviewed output) and using AI to prioritise which beneficiaries receive a service (high risk, potentially affecting vulnerable people). Trustees must understand where AI sits on this spectrum within their organisation.
Fourth, what the staff and beneficiary experience is. AI adoption affects people. Staff may feel threatened, excited, confused, or all three. Beneficiaries may have concerns about how their data is used. Trustees have a duty of care that extends to understanding these human dimensions.
Everything else — transformer architectures, training data pipelines, fine-tuning parameters — is interesting but not essential for governance purposes.
What Are the Real Opportunities AI Creates for Charities?
The opportunities are genuine and increasingly well-evidenced. Research from New Philanthropy Capital (NPC) and the Charity Digital Skills Report paints a consistent picture: AI's primary value in the charity sector is in reducing administrative burden so that staff can focus on mission delivery.
Here are the five most substantiated opportunities:
1. Reducing reporting burden. UK charities that receive grants spend a significant proportion of programme staff time on funder reporting. Plinth's 2025 research estimated that the sector spends 15.8 million hours per year on funder reporting alone. AI tools can generate report drafts from existing data, produce tailored outputs for different funders, and create impact summaries automatically. This does not eliminate reporting — it eliminates the blank-page problem.
2. Improving evidence capture. Frontline workers often struggle to document the impact of their work because they are busy delivering it. AI-assisted tools can turn voice recordings into structured case notes, digitise paper records, and extract data from unstructured sources. This means better evidence with less effort.
3. Strengthening fundraising. AI can help charities draft stronger grant applications by incorporating actual outcome data and programme evidence, personalise donor communications, and identify funding opportunities that match their work. Many small charities lack capacity to find and apply for all relevant funding opportunities, meaning available grants go unclaimed each year.
4. Making services more accessible. AI translation, transcription, and summarisation tools can help charities communicate with beneficiaries in multiple languages, produce easy-read versions of documents, and improve accessibility for people with disabilities. Early evidence from charities using AI accessibility tools suggests they can meaningfully increase service uptake among disabled beneficiaries.
5. Enabling better strategic decisions. AI can analyse patterns in programme data, identify trends in beneficiary needs, and flag emerging issues before they become crises. For larger charities managing complex programmes, this analytical capability supports more informed resource allocation.
The critical point for trustees is that these are not speculative possibilities. They are happening now, in charities across the UK. The question is not whether AI has a role but how to adopt it thoughtfully.
What Are the Risks Trustees Must Understand?
Balanced governance requires understanding risks alongside opportunities. AI introduces several categories of risk that trustees should actively monitor.
| Risk category | Description | Trustee question |
|---|---|---|
| Data protection | AI tools may process personal data outside the charity's direct control, potentially breaching UK GDPR | "Where does our data go when we use this tool, and is our processing lawful?" |
| Bias and fairness | AI systems can reproduce or amplify biases present in training data, potentially leading to unfair treatment | "Could this tool disadvantage any group of beneficiaries or applicants?" |
| Accuracy and hallucination | Generative AI can produce plausible but incorrect information | "What review processes ensure AI outputs are checked before use?" |
| Staff displacement | AI adoption may change roles or reduce the need for certain tasks, creating anxiety or actual job losses | "How are we supporting staff through this transition?" |
| Dependency and vendor lock-in | Over-reliance on a single AI provider creates vulnerability | "What happens if this tool becomes unavailable or changes its terms?" |
| Reputational risk | Public or media perception of a charity using AI, particularly with vulnerable beneficiaries | "How would we explain our use of AI if asked by the media or a beneficiary?" |
| Mission drift | Adopting AI because it is available rather than because it serves the mission | "Does this technology help us achieve our charitable purposes more effectively?" |
None of these risks are reasons to avoid AI. They are reasons to govern it properly. The Charity Commission's guidance on trustee duties (CC3) is clear: trustees must act with reasonable care and skill, and make informed decisions. With AI, this means understanding the risk profile of any tools the charity uses and putting proportionate mitigations in place.
"It's about raising standards in the simplest way possible and encouraging the right kinds of conversations about good governance." — Radojka Miljevic, Chair of the Charity Governance Code steering group
How Should Trustees Approach Data Protection and AI?
Data protection is the risk area where trustees have the clearest legal responsibilities. Under UK GDPR, the charity is the data controller and is responsible for how personal data is processed — even when that processing is done by a third-party AI tool.
The ICO's 2025 guidance on AI and data protection identifies three key areas for organisations to address:
Lawful basis for processing. If your charity feeds beneficiary data into an AI system, you need a lawful basis under Article 6 of UK GDPR. For most charity AI use cases, this will be either legitimate interests (with a documented balancing test) or consent. The ICO has been clear that "it is useful" is not a lawful basis.
Data minimisation. Only the personal data necessary for the specific purpose should be processed by AI tools. If you are using AI to generate a case study summary, do you need to include the beneficiary's full name and address, or would anonymised data serve the purpose? Data minimisation is a core principle of UK GDPR: you are expected to use the minimum data necessary.
International transfers. Many AI tools process data on servers outside the UK. Since the UK's departure from the EU, data transfers must comply with the UK's international transfer framework. Trustees should ensure that any AI tool the charity uses either processes data within the UK, or has appropriate safeguards (such as standard contractual clauses) for international transfers.
Practical advice for trustees: ask your CEO or data protection lead to produce a simple register of all AI tools the charity uses (including informal staff use), what data each tool processes, where that data goes, and what the lawful basis is. This is not bureaucracy — it is basic compliance. The ICO has the power to fine charities for data protection breaches, and ignorance of how AI tools handle data is not a defence.
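Such a register can be as simple as a one-page table or spreadsheet. The sketch below is purely illustrative — the tools, columns, and entries are hypothetical examples, not recommendations:

```
Tool                 | Used for          | Personal data?      | Where processed | Lawful basis         | Owner
ChatGPT (staff use)  | Drafting text     | None permitted      | Outside UK      | n/a                  | Comms lead
Transcription tool   | Case note capture | Names, case details | UK              | Legitimate interests | Ops director
Reporting platform   | Funder reports    | Anonymised only     | UK              | Legitimate interests | Data lead
```

The value is not the format but the conversation it forces: each row requires someone to know where the data goes and why the processing is lawful.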
What Does a Proportionate AI Policy Look Like?
Every charity using AI should have a written AI policy. This does not need to be a 50-page document. For most charities, 2-4 pages is sufficient. The purpose is to set clear expectations for staff and provide the board with a governance framework.
The Plinth blog has published a useful guide on how to write an AI policy for your charity that covers the practical steps in detail. The essential elements are:
1. Scope. Which AI tools are approved for use, and which are not. This should cover both organisation-provided tools and personal tools used for work purposes.
2. Data rules. What types of data can be input into AI tools, and what cannot. At minimum, safeguarding data, sensitive personal data, and confidential financial information should be restricted.
3. Review requirements. All AI-generated content must be reviewed by a human before use. Specify what "review" means in practice — is it a quick sense-check or a detailed accuracy verification?
4. Transparency. When and how the charity will disclose its use of AI to beneficiaries, funders, and the public.
5. Accountability. Who is responsible for overseeing AI use, monitoring compliance with the policy, and reporting to the board.
6. Review cycle. The policy should be reviewed at least annually, given how quickly the technology and regulatory landscape are evolving. The Charity Commission recommended in its 2025 annual report that boards review technology-related policies at least once a year.
The Charity Digital Skills Report 2025 found that the proportion of charities developing an AI policy had tripled since the previous year, from 16% to 48%. Having a clear policy helps staff feel more confident about using AI tools appropriately. The policy is not about restricting use — it is about enabling responsible use.
How Are Other Charities Approaching AI Governance?
Looking at how peer organisations handle AI governance can be more useful than abstract frameworks. Here are three approaches at different scales:
Small charity (income under £500,000). A youth charity in the Midlands with 8 staff uses AI for three specific tasks: drafting grant applications, generating case study summaries from voice recordings, and producing quarterly funder reports. Their AI policy is one page. It lists the approved tools, states that no safeguarding or sensitive personal data may be input into AI systems, and requires all AI-generated content to be reviewed by a manager before submission. The board receives a brief AI update every six months.
Medium charity (income £500,000 to £5 million). A homelessness charity in London uses AI across multiple operational areas. They have a three-page AI policy, a named AI lead (their operations director), and AI is a standing agenda item at quarterly board meetings. They conducted a data protection impact assessment (DPIA) before adopting their main AI platform. Staff received two hours of AI training, focused on practical use and data protection boundaries. The board reviews the AI policy annually and receives a risk report that includes AI-specific risks.
Large charity (income over £5 million). A national mental health charity has an AI steering group that includes two trustees, the CEO, the head of IT, and the data protection officer. They have a detailed AI ethics framework alongside their operational AI policy. New AI tools require a formal business case, a DPIA, and steering group approval before adoption. They have invested in external AI literacy training for all trustees and publish an annual transparency statement about their AI use.
The common thread across all three is proportionality. The governance should match the scale and complexity of AI use. A charity using AI for two administrative tasks does not need a steering group. A charity embedding AI into service delivery decisions absolutely does.
What Questions Should Trustees Be Asking Right Now?
If you are a trustee who has not yet engaged with AI at board level, the following questions provide a practical starting point. You do not need to ask all of them at once — but you should have answers to most of them within the next 12 months.
About current use:
- Is anyone in our organisation currently using AI tools, formally or informally?
- What data are they inputting into these tools?
- Do we have a written AI policy? If not, when will we have one?
About risk:
- Have we conducted a data protection impact assessment for any AI tools we use?
- Could any of our AI use adversely affect beneficiaries, particularly vulnerable groups?
- What would happen if our AI tools became unavailable tomorrow?
About opportunity:
- Where are our staff spending the most time on repetitive administrative tasks?
- Could AI help us improve our evidence base or reporting capacity?
- Are there AI tools being adopted by peer organisations that we should evaluate?
About governance:
- Who in our organisation is responsible for overseeing AI use?
- How often does AI appear on our board agenda?
- Are trustees confident they understand the organisation's AI risk profile?
According to the Charity Governance Code (updated 2025), boards should ensure they have "the skills, knowledge, and experience to govern effectively in a changing environment." In 2026, that changing environment explicitly includes AI. Trustees who avoid the topic are not being cautious — they are failing to fulfil their governance duty.
"The voluntary sector has become too inward-looking, too introspective, too worried, too anxious." — Sir Stuart Etherington, former CEO of NCVO, speaking at his farewell event in 2019
The same principle applies to AI. The biggest risk for most charities is not adopting something dangerous — it is failing to engage with the technology at all, and falling further behind organisations that use it to serve their beneficiaries more effectively.
How Should Boards Think About AI and Beneficiary Impact?
This is the area where trustee judgement matters most. AI applied to internal administration — drafting text, generating reports, summarising data — carries relatively low risk. AI applied to decisions that directly affect beneficiaries carries much higher risk and demands much greater scrutiny.
The distinction matters because the charity sector increasingly encounters AI tools that promise to help with triage, prioritisation, risk assessment, and resource allocation. A homelessness charity might be offered an AI tool that predicts which individuals are most at risk of rough sleeping. A mental health service might consider AI-assisted screening. A grant-maker might use AI to score applications.
For any AI application that affects who receives a service, how they receive it, or what decisions are made about them, trustees should insist on:
Human oversight. No decision that materially affects a beneficiary should be made by AI alone. The Equality and Human Rights Commission's 2025 guidance on algorithmic decision-making is clear: automated decisions that affect individuals must include meaningful human review.
Bias testing. Before deploying any AI tool that affects beneficiaries, the charity should test whether it produces different outcomes for different demographic groups. Research by the Alan Turing Institute and the Centre for Data Ethics and Innovation has demonstrated that AI systems deployed across public services can produce statistically significant differences in outcomes based on ethnicity, gender, or age.
Transparency with beneficiaries. If AI is used in any part of the process that affects a beneficiary, they should know. This is both an ethical obligation and, in many cases, a legal one under UK GDPR's provisions on automated decision-making.
Regular review. AI systems can drift over time as the underlying data or the tool's algorithms change. Any beneficiary-affecting AI should be reviewed at least annually for accuracy, fairness, and continued fitness for purpose.
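To make the bias testing point concrete: the simplest first check is to compare outcome rates across demographic groups and flag large gaps for human investigation. The sketch below is purely illustrative — the groups, decisions, and threshold are made up, and a real assessment would use the charity's own decision records, cover every relevant protected characteristic, and take statistical and legal advice.

```python
# Minimal illustrative sketch of a "demographic parity" check.
# All data here is invented for illustration only.
from collections import defaultdict

def outcome_rates(records):
    """Return the positive-outcome rate for each demographic group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, accepted in records:
        totals[group] += 1
        if accepted:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical AI-assisted triage decisions: (group, service_offered)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = outcome_rates(decisions)
# A large gap between groups is a prompt for investigation,
# not proof of bias on its own.
gap = max(rates.values()) - min(rates.values())
```

In this invented example one group receives the service three times as often as the other, which is exactly the kind of disparity a board should ask the executive team to explain before the tool goes anywhere near live decisions.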
Most charities are not yet at the stage of using AI for beneficiary-affecting decisions. But trustees should establish the governance framework now, before the technology arrives on their doorstep.
What Does a Realistic AI Adoption Roadmap Look Like for Charities?
Trustees often ask: "Where should we start?" The answer depends on your charity's size, technical maturity, and risk appetite, but most organisations benefit from a phased approach.
Phase 1: Understand and govern (Months 1-3). Audit current AI use (formal and informal). Develop a written AI policy. Brief the board. Identify one or two low-risk use cases to pilot. This phase costs nothing except time.
Phase 2: Pilot and learn (Months 3-9). Implement 1-2 AI tools for administrative tasks — such as report generation, case note drafting, or grant application support. Measure time savings. Gather staff feedback. Report to the board on outcomes and any issues. Many platforms, including Plinth, offer free tiers that allow piloting without budget commitment.
Phase 3: Embed and expand (Months 9-18). Based on pilot results, expand AI use to additional operational areas. Invest in staff training. Review and update the AI policy. Consider whether AI can improve service delivery (with appropriate safeguards). Formally integrate AI into your digital transformation strategy.
Phase 4: Evaluate and mature (Ongoing). Annual review of AI policy, tools, and outcomes. Board-level AI competency development. Participation in sector learning networks (such as Catalyst's Digital Champions programme or CAST's peer support networks). Share learning with peer organisations.
Experience across the sector consistently shows that charities following a phased approach to technology adoption report better outcomes than those that attempt large-scale implementation without piloting. The same principle applies to AI: start small, learn fast, and scale what works.
Frequently Asked Questions
Do charity trustees have a legal duty regarding AI?
There is no specific AI legislation that applies uniquely to charity trustees (as of early 2026). However, trustees' existing legal duties under the Charities Act 2011 — including the duty to act with reasonable care and skill, to act in the charity's best interests, and to ensure compliance with the law — all apply to decisions about AI adoption. If the charity processes personal data through AI tools, data protection legislation (UK GDPR and the Data Protection Act 2018) creates specific obligations. The Charity Commission's CC3 guidance makes clear that trustees must make informed decisions, which means they cannot simply ignore AI if it is being used in their organisation.
Should our charity ban AI use by staff?
Almost certainly not. Blanket bans are counterproductive for two reasons. First, they are unenforceable — staff will use tools like ChatGPT on personal devices regardless. Second, they prevent the charity from benefiting from genuine productivity improvements. The better approach is a clear policy that specifies which tools are approved, what data can and cannot be input, and what review processes apply. The National Council for Voluntary Organisations (NCVO) advised in 2025 that charities should "enable responsible use rather than prohibit all use."
How much should our charity spend on AI?
For most small and medium charities, the answer is very little — at least initially. Many useful AI tools are available at no cost or on free tiers, so the primary investment is staff time: time to develop a policy, pilot tools, train people, and review outputs. Focus on tools that solve specific, measurable problems rather than buying technology in search of a use case.
What if our staff are worried about AI replacing their jobs?
Take these concerns seriously. The World Economic Forum's 2025 Future of Jobs Report found that 41% of employers globally plan to reduce workforces due to AI automation by 2030, and many charity workers share similar concerns. The evidence in the charity sector, however, suggests that AI is primarily augmenting roles rather than replacing them — automating administrative tasks so staff can focus on relationship-based work. Be transparent with staff about your AI plans, involve them in piloting and evaluation, and invest in training. Charities that frame AI as "freeing you from admin" rather than "replacing what you do" consistently report higher staff engagement with the technology.
How do we evaluate whether an AI tool is safe to use?
Apply the same due diligence you would to any supplier or technology purchase, with additional attention to data protection. Key questions: Where is the data processed and stored? Is the provider compliant with UK GDPR? What happens to the data after processing — is it retained, used for training, or deleted? Does the tool produce outputs that are accurate enough for your use case? What are the terms of service? For any tool processing personal data, conduct a Data Protection Impact Assessment (DPIA). The ICO provides free DPIA templates and guidance.
Should AI be a standing agenda item for the board?
It depends on how extensively your charity uses AI. If AI is embedded in multiple operational areas, yes — it should appear in the regular risk and operations reporting. If your charity is in the early stages of exploring AI, a quarterly or six-monthly update is sufficient. What matters more than frequency is quality: the board should receive clear information about what AI tools are in use, what data they process, what benefits have been observed, and what risks or issues have emerged. As the Charity Governance Code notes, boards should regularly review whether they have the information they need to govern effectively.
What should we do if we discover staff are already using AI without approval?
Do not panic, and do not punish. This is common — the 2025 Charity Digital Skills Report showed that while 76% of charities were using AI, only 25% had moved to strategic AI use, meaning much adoption remains informal or exploratory. The immediate step is to understand what tools are being used, what data has been input, and whether any data protection issues have arisen. Then develop or update your AI policy to bring informal use within a governed framework. Treat it as an opportunity to learn what staff find useful, which will inform your formal AI strategy.
How do we stay up to date on AI governance as a board?
Three practical approaches. First, subscribe to sector briefings from organisations like NCVO, the Charity Commission, and Catalyst, which regularly publish AI guidance for charities. Second, include AI as a topic in your annual trustee training or away day — even a 30-minute briefing can significantly improve confidence. Third, connect with peer trustees through networks like the Association of Chairs or Trustees Unlimited, where shared learning is increasingly focused on digital and AI governance. The landscape is evolving quickly, so building ongoing learning into your governance practice matters more than any single training event.
Recommended Next Pages
- AI for Charities: What Actually Works in 2026 — A comprehensive guide to practical AI use cases in the charity sector
- AI-Powered Grant Applications — How AI is changing the grant writing process for charities
- Digital Transformation for Charities: The Complete Guide — The broader context for technology adoption in the charity sector
- How to Write an AI Policy for Your Charity — Step-by-step guide to developing your organisation's AI policy
Last updated: February 2026