Data Security in AI-Powered Grant Systems

How funders and charities can protect sensitive applicant data in AI-powered grant systems through encryption, access controls, and privacy-by-design.

By Plinth Team

Grant management systems hold some of the most sensitive data in the charity sector. Application forms routinely contain financial details, safeguarding disclosures, personal addresses, and information about vulnerable beneficiaries. When artificial intelligence is layered on top — to summarise applications, score proposals, or generate reports — the security stakes increase further. AI features process, transform, and sometimes retain data in ways that traditional form-based systems do not.

The question is no longer whether funders should use technology to manage grants, but how securely they do it. The UK Cyber Security Breaches Survey 2025, published by the Department for Science, Innovation and Technology, found that 30% of charities experienced a cyber security breach or attack in the past twelve months, equating to approximately 61,000 organisations (GOV.UK, 2025). In that environment, spreadsheets shared via email and password-protected ZIP files are not safer than a well-managed cloud platform. They are less safe.

This guide sets out the practical security measures that funders and grant-receiving charities should expect from any AI-powered grant system — and the questions to ask before trusting a platform with sensitive data.


Why Does Grant Data Need Special Protection?

Grant applications contain a concentration of sensitive personal data that few other charity processes match. A single application may include financial accounts, details of beneficiaries with protected characteristics, safeguarding policies, and staff salary information. When funders manage hundreds or thousands of these applications, the aggregate dataset becomes a high-value target.

The Information Commissioner's Office (ICO) publishes data on self-reported breach cases across sectors, including charities. The Verizon Data Breach Investigations Report consistently finds that around 74% of all breaches involve a human element, whether error, misuse of privileges, or susceptibility to social engineering, a pattern reflected in the ICO's own findings for the charity sector (ICO data security incident trends). This means that the greatest risk is not a sophisticated hacker — it is a staff member accidentally sharing the wrong file, sending an email to the wrong recipient, or leaving a spreadsheet unsecured.

AI features introduce additional considerations. When an AI model summarises an application or generates a funder report, the data passes through a processing pipeline. Organisations must understand where that processing happens, whether any data is retained after processing, and whether it could be used to train models that other organisations can access. These are not theoretical concerns. The European Data Protection Board's Opinion 28/2024 specifically addressed whether AI models trained on personal data can ever be considered truly anonymous, concluding that advanced techniques such as model inversion can re-identify individuals even from aggregated data (EDPB, 2024).

What Encryption Standards Should You Expect?

Every AI-powered grant system should encrypt data both at rest (when stored) and in transit (when moving between servers, browsers, and APIs). These are not optional extras — they are baseline requirements.

The current industry standards are AES-256 for data at rest and TLS 1.3 for data in transit. AES-256 is a symmetric encryption algorithm approved for use by the US National Institute of Standards and Technology (NIST) and is the default for major cloud providers including Google Cloud, AWS, and Microsoft Azure. TLS 1.3, adopted as the standard by the Internet Engineering Task Force in 2018, removes outdated cipher suites and enforces forward secrecy, meaning that even if an encryption key is compromised in the future, previously intercepted data cannot be decrypted.
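Of the two, transit encryption is the easiest to verify in code. As a minimal illustration (not any particular vendor's configuration), a Python client can refuse to negotiate anything older than TLS 1.3 using the standard library:

```python
import ssl

def strict_tls_context() -> ssl.SSLContext:
    """Build a client-side TLS context that rejects anything older than TLS 1.3."""
    ctx = ssl.create_default_context()            # certificate verification stays on
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse TLS 1.2 and below
    return ctx

ctx = strict_tls_context()
```

Connecting through this context to a server that only offers TLS 1.2 fails at the handshake, which is the behaviour to expect from a compliant platform.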

| Security measure | Minimum standard | What to ask your vendor |
| --- | --- | --- |
| Encryption at rest | AES-256 | Is all stored data encrypted, including backups? |
| Encryption in transit | TLS 1.3 | Are all API calls and browser connections encrypted? |
| Key management | Hardware Security Modules (HSMs) | Who controls the encryption keys? |
| Data residency | UK or EEA hosting | Where are servers and backups physically located? |
| Backup encryption | AES-256 with separate keys | Are backups encrypted independently of production data? |
| Database-level encryption | Transparent Data Encryption | Is encryption applied at the database layer? |

Platforms built on Google Cloud infrastructure, for example, benefit from Google's default encryption which applies AES-256 to all data at rest and TLS 1.2+ for data in transit (Google Cloud, 2025). Firebase, the backend used by several grant management platforms including Plinth, has completed ISO 27001, SOC 1, SOC 2, and SOC 3 evaluations (Firebase, 2025).

How Do Access Controls Prevent Unauthorised Data Exposure?

Encryption protects data from external attackers, but the ICO's own data shows that most charity data incidents involve insiders — whether through error or misuse. Access controls address this internal risk by ensuring that each user can only see and do what their role requires. This is known as the principle of least privilege.

A well-designed grant management system should enforce role-based access at multiple levels. A funder administrator should have different permissions from a grant assessor, who should have different permissions from an external reviewer brought in for a specific funding round. The UK Cyber Security Breaches Survey 2025 found that only 9% of charities review the cyber security practices of their immediate suppliers (GOV.UK, 2025), which means that access granted to third parties is rarely scrutinised.

Key access control features to look for include:

  • Organisation-level permissions that tie each user to a specific organisation, preventing cross-organisation data leakage
  • Programme-level scoping that limits assessors to viewing only the applications assigned to them
  • Time-limited accounts for external reviewers that automatically expire after the assessment period
  • Multi-factor authentication (MFA) enforced for all users, not offered as an optional extra
  • API key restrictions that limit what automated integrations can access
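As an illustration, the organisation-level and programme-level checks above reduce to a small, testable function. The `User` shape and role names here are hypothetical, not any platform's actual data model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    user_id: str
    org_id: str
    role: str                               # e.g. "admin", "assessor", "reviewer"
    programme_ids: frozenset = frozenset()  # programmes explicitly assigned to this user

def can_view_application(user: User, app_org_id: str, app_programme_id: str) -> bool:
    """Least-privilege check: organisation isolation first, then programme scoping."""
    if user.org_id != app_org_id:        # never cross organisation boundaries
        return False
    if user.role == "admin":             # admins see everything within their org
        return True
    # assessors and reviewers see only programmes assigned to them
    return app_programme_id in user.programme_ids

assessor = User("u1", "org-a", "assessor", frozenset({"prog-1"}))
```

Running the same check at every API endpoint, rather than only in the user interface, is what makes the scoping enforceable rather than cosmetic.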

Tools like Plinth implement granular authorisation checks at every API endpoint — verifying not only that a user is authenticated, but that they hold the correct organisational role for the specific data they are requesting. External assessors, for example, can only access the grant awards explicitly assigned to them, and funder programme administrators are validated against both their organisation and the specific programme they manage.

What Should an AI Audit Trail Record?

Audit trails are essential for accountability in grant management. When AI is involved, the scope of what needs to be logged expands significantly. It is no longer sufficient to record who logged in and when. Organisations need to know what the AI was asked, what data it processed, and what output it produced.

A comprehensive AI audit trail should capture:

  • Who triggered the AI action — the authenticated user who initiated the request
  • What data was sent to the AI model — the input prompt and any applicant data included
  • What the AI returned — the full output, including scores, summaries, or recommendations
  • Which model was used — the specific AI model and version, to support reproducibility
  • When it happened — timestamps for both the request and response
  • What action was taken — whether the user accepted, modified, or rejected the AI output
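A minimal sketch of such an audit record, with a digest so entries can later be checked for tampering. Field names are illustrative, not any platform's actual schema:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIAuditRecord:
    user_id: str        # who triggered the AI action
    model: str          # model name and version, for reproducibility
    prompt: str         # exactly what was sent to the model
    output: str         # exactly what the model returned
    requested_at: str   # ISO 8601 timestamps for request and response
    responded_at: str
    human_action: str   # "accepted", "modified", or "rejected"

def audit_digest(record: AIAuditRecord) -> str:
    """Canonical JSON plus SHA-256 makes later tampering with an entry detectable."""
    entry = json.dumps(asdict(record), sort_keys=True)
    return hashlib.sha256(entry.encode("utf-8")).hexdigest()

now = datetime.now(timezone.utc).isoformat()
rec = AIAuditRecord("user-42", "model-x-1.0", "Summarise application A-17",
                    "Two-paragraph summary of the application", now, now, "accepted")
digest = audit_digest(rec)
```

Storing the digest alongside the entry (or chaining digests across entries) lets a supervisor verify that the logged prompt and output are the ones actually used.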

This level of logging matters for two reasons. First, it supports regulatory compliance. If an applicant exercises their right to request information about automated decision-making under Article 22 of UK GDPR, the funder needs to be able to explain what happened. Second, it builds institutional learning. Funders can review AI outputs over time to identify patterns of bias, inconsistency, or drift.

Plinth logs all AI interactions including prompts, outputs, and the user who triggered them. AI-generated case note audits, for instance, record the criteria used, the score produced, and the full feedback — creating a verifiable record that supervisors can review.

How Should AI Systems Handle Personal Data?

The most important principle for AI in grant management is data minimisation: only process the personal data that is strictly necessary for the task. An AI feature that summarises an application does not need the applicant's home address. A scoring model does not need to know the names of individual beneficiaries.
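Data minimisation can be implemented as an allow-list filter applied before any data reaches a model API. The field names below are hypothetical, chosen only to illustrate the pattern:

```python
# Allow-list of the only fields a scoring model needs to see; everything else
# (addresses, bank details, beneficiary names) is stripped before the API call.
SCORING_FIELDS = frozenset({"project_summary", "budget_total", "outcomes"})

def minimise_for_ai(application: dict, allowed: frozenset = SCORING_FIELDS) -> dict:
    """Return only the allow-listed fields of an application."""
    return {k: v for k, v in application.items() if k in allowed}

application = {
    "project_summary": "Youth mentoring in Leeds",
    "budget_total": 25000,
    "outcomes": "50 young people supported over 12 months",
    "home_address": "12 Example Street",   # never needed for scoring
    "bank_account": "00-00-00 12345678",   # never needed for scoring
}
safe_payload = minimise_for_ai(application)
```

An allow-list is deliberately stricter than a block-list: any new form field is excluded by default until someone decides the model genuinely needs it.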

Three rules should govern how any AI-powered grant system handles personal data:

1. No training on applicant data. Third-party AI models — whether from OpenAI, Google, or Anthropic — must not use applicant data to improve their models. This should be confirmed contractually and technically. Enterprise API agreements from major providers typically exclude customer data from training, but this must be verified in writing.

2. Data stays in region. For UK funders, AI processing should occur within UK or European Economic Area data centres. This is not merely a regulatory preference — it is a practical safeguard against jurisdictional complications. Plinth, for example, runs its AI processing through Google Vertex AI hosted in the europe-west1 region, ensuring that data does not leave European infrastructure.

3. Explicit opt-in for AI features. Organisations should be able to enable or disable AI features at the account level. This respects the varying risk appetites of different funders and the specific terms of their data processing agreements. Plinth requires organisations to sign a dedicated AI Data Processing Agreement before any AI features are activated, and provides admin-level controls to opt out at any time.

The Data (Use and Access) Act 2025 reinforces the importance of these controls. While the Act introduces some welcome flexibility — including recognised legitimate interests for safeguarding and a new charitable "soft opt-in" for marketing — it does not weaken the core requirement for lawful, transparent processing of personal data (ICO, 2025).

What Does GDPR Compliance Look Like in Practice?

GDPR compliance in grantmaking is not a one-off checkbox exercise. It is an ongoing operational commitment that touches every part of the grant lifecycle — from the privacy notice on the application form to the retention schedule for closed grants.

For AI-powered systems specifically, compliance requires attention to several areas:

Lawful basis for processing. Most grant management processing relies on legitimate interests or contractual necessity. AI features may require a separate assessment. The EDPB's Opinion 28/2024 clarified that controllers deploying third-party large language models must conduct comprehensive legitimate interest assessments, balancing the benefits of AI processing against the rights of data subjects (EDPB, 2024).

Data Processing Agreements (DPAs). Any platform that processes personal data on behalf of a funder must have a DPA in place. This is a legal requirement under Article 28 of UK GDPR. The DPA should specify what data is processed, for what purpose, how long it is retained, and what happens when the contract ends. It should also cover sub-processors — the third-party services that the platform itself relies on.

Subject access requests. Applicants have the right to request copies of their personal data, including any AI-generated outputs about their application. Systems must be able to locate and export this data within the statutory one-month timeframe.

Data retention and deletion. Grant data should not be kept indefinitely. A clear retention schedule — typically aligned with audit requirements from funders such as the National Lottery Community Fund or Arts Council England — should govern when data is archived and when it is permanently deleted. Applicants should be able to request deletion of their data, subject to any overriding legal retention obligations.
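A retention schedule like this reduces to a date comparison per record type. The periods below are illustrative only; the real schedule comes from the funder's audit terms:

```python
from datetime import date, timedelta

# Illustrative periods only — actual schedules depend on funder audit requirements.
RETENTION = {
    "closed_grant": timedelta(days=6 * 365),          # e.g. a six-year audit window
    "unsuccessful_application": timedelta(days=365),  # shorter hold for rejections
}

def due_for_deletion(record_type: str, closed_on: date, today: date) -> bool:
    """True once a record has outlived its retention period."""
    return today - closed_on >= RETENTION[record_type]
```

A scheduled job running this check, with its deletions themselves logged, turns the retention policy from a document into an enforced behaviour.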

Privacy notices. Application forms should clearly explain how AI is used in the assessment process, what data the AI accesses, and how applicants can query or appeal AI-assisted decisions.

How Should You Evaluate a Vendor's Security Posture?

Choosing a grant management platform is, at its core, a data security decision. The platform will hold your applicants' most sensitive information, and the vendor's security practices will determine how well that information is protected.

The National Cyber Security Centre's (NCSC) 14 Cloud Security Principles provide a useful framework for evaluating vendors. These principles cover data-in-transit protection, asset protection, separation between customers, governance, operational security, personnel security, secure development, supply chain security, secure user management, identity and authentication, external interface protection, secure service administration, audit information, and secure use of the service (NCSC, 2025).

In practical terms, the minimum you should expect from any vendor includes:

  • ISO 27001 certification — the international standard for information security management systems
  • SOC 2 Type II report — an independent audit of security controls over a sustained period
  • Regular penetration testing — with results shared or summarised for customers
  • A documented breach notification policy — committing to notify affected organisations within a specific timeframe (72 hours is the UK GDPR standard for notifying the ICO)
  • Data portability guarantees — the ability to export all your data in a standard format if you leave the platform
  • A clear sub-processor list — identifying every third party that has access to your data

| Evaluation criterion | Strong indicator | Warning sign |
| --- | --- | --- |
| Certifications | ISO 27001, SOC 2, Cyber Essentials Plus | No certifications or "in progress" for years |
| Penetration testing | Annual third-party tests, summary shared | No testing or internal-only testing |
| Breach policy | Documented, with specific timelines | No written policy or vague commitments |
| Data residency | UK or EEA, specified in contract | "Multi-region" with no ability to choose |
| Sub-processor transparency | Published list, updated regularly | Refuses to disclose third parties |
| Exit plan | Full data export in standard formats | Proprietary formats or export fees |

What Are the Risks of Getting Security Wrong?

The consequences of a data breach in grant management extend well beyond regulatory fines. While the ICO can impose penalties of up to GBP 17.5 million or 4% of global turnover under UK GDPR, the more common and more damaging consequences for charities and funders are reputational.

The ICO fined a YMCA branch GBP 7,500 for a data breach that revealed sensitive health data of people living with HIV (Civil Society, 2024). The financial penalty was modest, but the reputational damage — and the harm to the individuals affected — was severe and lasting.

For funders specifically, the risks include:

  • Loss of applicant trust. If applicants learn that their data was exposed, they may refuse to apply for future rounds — or discourage others from applying
  • Regulatory scrutiny. A breach triggers mandatory notification to the ICO and, in many cases, to the affected individuals. This consumes significant staff time and legal resource
  • Contractual liability. Funders that distribute public money — from local authorities, government departments, or the National Lottery — may face contractual consequences if they fail to protect data adequately
  • Operational disruption. The UK Cyber Security Breaches Survey 2025 found that phishing was the most common attack type among charities experiencing breaches, identified in 86% of cases. Phishing attacks that compromise email accounts can disrupt grant administration for days or weeks

Investing in a secure platform is not a cost — it is insurance against consequences that can be existential for small organisations. Understanding the cost of non-compliance in more detail can help boards and trustees make informed decisions about technology investment.

How Can Funders Build a Security-First Culture?

Technology alone does not prevent breaches. The UK Cyber Security Breaches Survey 2025 found that additional staff training was the most common measure adopted after a breach, with 38% of charities taking this step (GOV.UK, 2025). The implication is clear: many organisations only invest in training after something goes wrong.

A security-first culture means:

  • Regular training for all staff who handle grant data, covering phishing awareness, password hygiene, and proper data handling procedures
  • Clear policies on which devices can access the grant system, whether personal devices are permitted, and how data should be handled when working remotely
  • Incident response planning that specifies who does what when a breach is suspected — before it happens, not during the crisis
  • Board-level accountability for data security, with regular reporting to trustees on security posture and incidents
  • Testing and simulation including tabletop exercises that walk through breach scenarios

The NCSC offers free resources specifically designed for charities, including its Small Organisation Security Guide and Cyber Essentials certification scheme. Cyber Essentials covers five key controls — firewalls, secure configuration, user access control, malware protection, and security update management — and provides a cost-effective baseline for organisations of any size.

Tools like Plinth reduce the human-error risk by centralising data in a single platform with enforced authentication, rather than allowing sensitive grant data to be scattered across email inboxes, shared drives, and local spreadsheets. When data lives in one place with proper access controls, there are fewer opportunities for accidental exposure.

What Should You Look for When AI Is Involved?

AI introduces specific security considerations beyond those of traditional grant software. Here is a practical checklist for evaluating AI features in any grant platform:

  • Where does AI processing happen? Confirm that data is processed within UK or EEA data centres. Ask whether any data is sent to servers outside this jurisdiction, even temporarily.
  • Is applicant data used for model training? This must be contractually excluded. Enterprise AI APIs from providers like Google and Anthropic offer data processing terms that explicitly prevent training on customer data, but verify this for each provider the platform uses.
  • What data does the AI see? A well-designed system sends only the minimum necessary data to the AI model. If an AI is scoring an application against published criteria, it does not need the applicant's bank details.
  • Can AI features be disabled? Organisations should have the option to turn off AI features entirely without losing access to the rest of the platform.
  • Are AI outputs logged? Every AI-generated summary, score, or recommendation should be recorded alongside the human decision that followed it.
  • Is there a human in the loop? AI should assist and inform grant decisions, not make them autonomously. Human-in-the-loop grantmaking ensures that AI outputs are reviewed by qualified staff before any action is taken on an application.
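The human-in-the-loop rule in the last point can be enforced as a simple invariant: no AI-suggested score is recorded without an explicit reviewer decision. A hedged sketch (function and parameter names are hypothetical):

```python
from typing import Optional

def record_final_score(ai_suggested: float, reviewer_score: Optional[float]) -> float:
    """AI output is advisory only: nothing is persisted until a qualified human
    confirms or overrides the suggestion."""
    if reviewer_score is None:
        raise ValueError("A reviewer must confirm or override the AI suggestion")
    return reviewer_score
```

Making the reviewer's decision a required argument, rather than an optional override, means the system cannot silently act on an unreviewed AI output.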

The combination of strong technical controls and clear governance policies creates a system where AI can add genuine value — reducing administrative burden and improving consistency — without compromising the security and privacy that applicants and beneficiaries have a right to expect.

FAQs

Is cloud-based grant software more secure than on-premise?

In most cases, yes. Major cloud providers like Google Cloud and AWS invest billions in security infrastructure, employ dedicated security teams around the clock, and maintain certifications such as ISO 27001 and SOC 2. Few individual charities or foundations can match this level of protection with on-premise infrastructure. The NCSC recommends cloud services for most organisations, provided that appropriate due diligence is carried out on the provider.

Can applicants request that their data be deleted from an AI-powered system?

Yes. Under UK GDPR, individuals have the right to request erasure of their personal data (the "right to be forgotten"), subject to certain exemptions such as legal obligations to retain records. A compliant system must be able to locate all instances of an individual's data — including any AI-generated outputs that reference it — and delete them within the statutory timeframe.

How do external grant reviewers access data securely?

Best practice is to use time-limited accounts with multi-factor authentication, scoped to only the applications assigned to that reviewer. Access should be revoked automatically at the end of the assessment period. The reviewer should never need to download application data to their personal device — all review activity should happen within the platform.

Does UK GDPR apply differently when AI is involved in grant decisions?

The core principles of UK GDPR apply regardless of whether decisions are made manually or with AI assistance. However, Article 22 provides additional protections specifically for automated decision-making that produces legal or similarly significant effects. If AI is used to score or rank applications, applicants have the right to request human intervention, express their point of view, and contest the decision.

What certifications should a grant management platform hold?

At minimum, look for ISO 27001 (information security management), SOC 2 Type II (independent security audit), and Cyber Essentials (the UK government-backed baseline scheme). If the platform processes health data or works with local authority commissioners, additional certifications such as NHS Data Security and Protection Toolkit compliance may be relevant.

What happens if a grant platform suffers a data breach?

The platform vendor is required to notify you without undue delay, and you as the data controller are required to notify the ICO within 72 hours if the breach poses a risk to individuals' rights and freedoms. You must also notify the affected individuals directly if the breach is likely to result in a high risk to them. A good vendor will have a documented incident response process and will support you through the notification process.

Should AI-generated grant summaries be shared with applicants?

There is no legal requirement to proactively share AI-generated summaries, but transparency builds trust. If an applicant makes a subject access request, any AI-generated content that constitutes their personal data must be disclosed. Many funders are now choosing to be upfront about AI use in their assessment process as a matter of good practice.

How often should a funder review its data security arrangements?

At least annually, and whenever there is a significant change — such as adopting a new platform, introducing AI features, or changing the types of data collected. The review should cover technical controls, access permissions, staff training, and the currency of data processing agreements with all vendors.

Last updated: February 2026