How Funders Use AI to Detect Fraud Risks in Grantmaking

How AI helps funders detect fraud risks through anomaly detection, reputation checks and duplicate flagging — while keeping processes fair and proportionate.

By Plinth Team

Fraud in the charity sector is not rare, but it is widely underestimated. The Charity Commission opened 603 cases relating to fraud and a further 99 cases relating to cyber crime between November 2023 and October 2024 (Charity Commission, 2024). BDO's 2025 Charity Fraud Report found that 34% of charities reported fraud or attempted fraud in the previous twelve months — a five-year low, but still affecting roughly one in three organisations (BDO, 2025). For funders distributing public or philanthropic money, the question is no longer whether fraud exists in the grant ecosystem, but how to detect it early and proportionately without penalising honest applicants.

Traditional fraud checks rely on manual cross-referencing: checking registration numbers against the Charity Commission register, verifying bank details against previous records, and comparing narrative answers by eye. These approaches work, but they do not scale. When a funder receives hundreds or thousands of applications per round, manual review misses patterns that only become visible across the full dataset — duplicate bank accounts used by different applicants, identical paragraphs copied between unrelated proposals, or spending profiles that diverge sharply from agreed milestones.

AI changes this by analysing applications at volume, flagging anomalies for human review rather than making decisions itself. The key distinction is that AI generates signals, not verdicts. A flag means "this warrants a closer look," not "this applicant is fraudulent." That distinction — between detection and judgement — is what makes AI-assisted fraud prevention both effective and fair.

Why Is Fraud Detection Becoming More Urgent for Funders?

Three trends are converging to make fraud detection a pressing priority for every funder, regardless of size.

First, the regulatory environment is tightening. The Economic Crime and Corporate Transparency Act 2023 introduced a "failure to prevent fraud" offence that came into force on 1 September 2025. While the offence currently applies only to large organisations — those meeting two of three thresholds: over 250 employees, over £36 million turnover, or over £18 million in total assets — the Charity Commission has signalled that all charities should treat fraud prevention as a governance priority (Charity Commission, 2024). Larger charitable incorporated organisations, Royal Charter bodies and charitable companies with subsidiary trading arms are directly in scope.

Second, the volume and speed of grantmaking has increased. The COVID-19 pandemic demonstrated what happens when funds are distributed quickly without adequate controls. The UK government's Cabinet Office reported that £480 million was reclaimed between April 2024 and April 2025, much of it from fraudulent claims made against pandemic support schemes (TechRadar, 2025). Funders who accelerated their processes during the pandemic now need controls that match the pace.

Third, fraud methods are becoming more sophisticated. Over half (52%) of charities surveyed by BDO expect the threat of fraud to increase by 2026, driven partly by AI-generated applications that can produce convincing but entirely fabricated proposals at scale.

What Types of Fraud Can AI Detect in Grant Applications?

AI is not a single tool but a set of techniques, each suited to a different class of risk. The most relevant for grantmaking fall into four categories.

Duplicate detection identifies applications that share bank details, addresses, contact information, or substantial narrative content with other applications — either in the same round or across historical data. This catches both deliberate double-dipping and innocent errors where organisations submit multiple applications without realising a partner has already applied.

Anomaly detection flags statistical outliers: a budget that is three standard deviations above the mean for similar projects, a spend profile that changes dramatically between reporting periods, or an organisation whose registered income on the Charity Commission register does not align with what it has declared on its application.
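To make the statistical idea concrete, here is a minimal sketch of budget outlier flagging. It assumes you already have requested amounts for comparable projects; the three-standard-deviation threshold follows the example above but would be configurable in practice.

```python
from statistics import mean, stdev

def flag_budget_outliers(requests, threshold=3.0):
    """Flag requested amounts more than `threshold` standard deviations
    from the mean of comparable projects; returns (applicant, z-score) pairs."""
    amounts = [amount for _, amount in requests]
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [(applicant, round((amount - mu) / sigma, 2))
            for applicant, amount in requests
            if abs(amount - mu) / sigma > threshold]

# Eleven comparable programmes around £10,000 and one request of £60,000
requests = [("A", 9500), ("B", 9800), ("C", 10000), ("D", 10200),
            ("E", 10500), ("F", 9900), ("G", 10100), ("H", 9700),
            ("I", 10300), ("J", 10050), ("K", 9950), ("L", 60000)]
print(flag_budget_outliers(requests))  # [('L', 3.17)]
```

One caveat worth noting: a single extreme value inflates the standard deviation, so on small comparison groups a robust measure such as the median absolute deviation will often catch outliers that a plain z-score misses.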

Narrative analysis uses natural language processing to identify copied or templated text. If two applications from unrelated organisations contain identical paragraphs, that is a signal worth investigating. It can also flag vague or unverifiable claims — language that sounds impressive but lacks specificity.
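A simple version of that comparison can be built from overlapping word phrases ("shingles"). The sketch below is illustrative rather than a production approach: it flags pairs of narratives whose five-word-phrase overlap is high enough to suggest copying.

```python
import re
from itertools import combinations

def shingles(text, n=5):
    """Lower-case word n-grams; copied paragraphs share many of these."""
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(text_a, text_b):
    """Overlap between two shingle sets, from 0.0 (disjoint) to 1.0 (identical)."""
    sa, sb = shingles(text_a), shingles(text_b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def flag_similar_narratives(applications, threshold=0.5):
    """Compare every pair of narrative answers in an {id: text} mapping and
    flag pairs whose similarity exceeds the threshold for human review."""
    return [(id_a, id_b, round(score, 2))
            for (id_a, text_a), (id_b, text_b) in combinations(applications.items(), 2)
            if (score := jaccard(text_a, text_b)) >= threshold]
```

Pairwise comparison is quadratic in the number of applications; at larger volumes, techniques such as MinHash apply the same idea at scale.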

Reputation and adverse media checks search public sources for controversies, legal proceedings, or regulatory actions associated with an applicant organisation or its key individuals. This goes beyond checking a registration number against a register; it asks whether there is publicly available information that a reviewer should be aware of before making a decision.

| Fraud type | What AI looks for | Example signal | Human action required |
| --- | --- | --- | --- |
| Duplicate applications | Shared bank details, addresses, narratives | Two applications from different organisations list the same bank account | Verify whether a legitimate fiscal sponsorship arrangement exists |
| Budget anomalies | Statistical outliers against similar projects | Requested amount is 400% above the median for comparable programmes | Check whether the scope genuinely justifies the budget |
| Copied narratives | Identical or near-identical text blocks | Three paragraphs match a previously rejected application word-for-word | Confirm whether the text reflects genuine shared work or was plagiarised |
| Registration mismatches | Discrepancies between declared and register data | Declared income of £500,000 but Charity Commission filing shows £80,000 | Request clarification and updated accounts |
| Adverse media | Public reports of misconduct or legal action | News articles describing financial mismanagement at the applicant organisation | Review the evidence and assess whether the risk is current or historical |
| Spending anomalies | Divergence between milestones and actual expenditure | 90% of the grant spent in the first quarter of a twelve-month project | Request a breakdown and assess whether milestones were genuinely met |

How Does AI-Powered Reputation Checking Work?

Reputation checks are one of the most time-consuming parts of due diligence. Traditionally, a grants officer might spend 20 to 30 minutes per applicant searching the web, reading news articles, and checking regulatory databases. Multiply that by 200 applications and the task consumes weeks of staff time.

AI-powered reputation checking automates the search and analysis stages while keeping the judgement with a human reviewer. The process typically works in stages. First, the system generates targeted search queries based on the applicant's name, registration details, and key individuals — focusing on terms associated with controversies, legal issues, financial impropriety, and regulatory actions. Second, it retrieves and reads the content of search results, filtering out irrelevant pages. Third, it assesses the relevance of each result to the specific applicant — distinguishing between, say, two charities with similar names — and assigns a risk level: no risk, low, medium, high, or unknown.
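The shape of those stages can be expressed in a few lines. In the sketch below, `search`, `fetch` and `assess` are hypothetical injected callables (a web search API, a page retriever, and a relevance-and-risk classifier); only the pipeline structure and the risk levels come from the description above.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    NO_RISK = "no risk"
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    UNKNOWN = "unknown"

@dataclass
class Finding:
    url: str
    summary: str
    risk: RiskLevel

RISK_TERMS = ["fraud", "investigation", "misconduct", "tribunal", "mismanagement"]

def reputation_check(org_name, key_individuals, search, fetch, assess):
    """Staged pipeline as described above; `search`, `fetch` and `assess`
    are assumed implementations supplied by the caller."""
    # Stage 1: targeted queries for the organisation and its key people
    subjects = [org_name] + list(key_individuals)
    queries = [f'"{subject}" {term}' for subject in subjects for term in RISK_TERMS]

    # Stage 2: retrieve candidate pages, de-duplicating URLs across queries
    urls = {url for query in queries for url in search(query)}

    # Stage 3: assess each page's relevance to this applicant and its risk level
    findings = []
    for url in urls:
        relevant, summary, risk = assess(org_name, fetch(url))
        if relevant:
            findings.append(Finding(url, summary, RiskLevel(risk)))

    # Overall rating: worst individual finding wins; UNKNOWN if nothing was found
    severity = [RiskLevel.HIGH, RiskLevel.MEDIUM, RiskLevel.LOW, RiskLevel.NO_RISK]
    overall = next((level for level in severity
                    if any(f.risk is level for f in findings)), RiskLevel.UNKNOWN)
    return overall, findings
```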

The output is a structured report summarising what was found, with source links for every flagged item. An officer can review the report in minutes rather than conducting the search from scratch. Critically, the system does not make a pass/fail decision. It presents evidence and lets the reviewer exercise professional judgement.

This approach addresses a common weakness of manual checking: inconsistency. When ten different officers conduct reputation checks, they use different search terms, check different sources, and apply different thresholds. An AI-assisted process applies the same methodology to every applicant, creating a consistent and auditable baseline.

What Signals Should Funders Monitor — and What Should They Ignore?

Not every anomaly indicates fraud. A mismatch between an application and register data could reflect a genuine administrative delay — the Charity Commission's register is not updated in real time, and smaller charities often file late. A high budget could reflect genuine ambition rather than inflation. Identical narrative text could appear because two organisations are part of a consortium and submitted a shared theory of change.

The purpose of AI-generated signals is to focus human attention, not to replace it. A useful framework distinguishes between three categories.

High-priority signals warrant immediate investigation before any award is made. These include: duplicate bank details across unrelated applications, applicants or key individuals appearing on sanctions lists (such as the OFSI consolidated list), and evidence of active regulatory investigations or legal proceedings.

Medium-priority signals should be investigated during the assessment process but do not necessarily delay a decision. These include: budget outliers, registration data mismatches, and copied narrative content.

Low-priority signals should be recorded for pattern analysis but do not require individual investigation. These include: minor variations in registered addresses, slight discrepancies in reported figures that fall within normal rounding, and common phrases that appear across many applications because they describe standard sector practice.
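As code, this framework reduces to a small routing function. The signal names below are illustrative stand-ins for whatever identifiers a real system emits.

```python
HIGH_PRIORITY = {"duplicate_bank_details", "sanctions_match", "active_investigation"}
MEDIUM_PRIORITY = {"budget_outlier", "register_mismatch", "copied_narrative"}
# Everything else (minor address variations, rounding differences, stock
# sector phrases) defaults to low priority.

def triage(signal_type):
    """Route an AI-generated signal to the appropriate level of human attention."""
    if signal_type in HIGH_PRIORITY:
        return "investigate before any award"
    if signal_type in MEDIUM_PRIORITY:
        return "investigate during assessment"
    return "record for pattern analysis"
```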

The risk of over-reliance on AI signals is real. If funders treat every flag as a reason to reject or delay, they create a system that disproportionately burdens smaller organisations with less administrative capacity — precisely the groups that many funders exist to support. The goal is proportionate response, not zero risk.

How Do Duplicate Detection Systems Work in Practice?

Duplicate detection is one of the most immediately practical applications of AI in fraud prevention. The principle is straightforward: when a new application arrives, the system compares it against all existing applications and historical records to identify potential overlaps.

The comparison operates across multiple fields. Name matching uses fuzzy string comparison to catch variations — "St. Mary's Community Trust" and "Saint Marys Community Trust" should match. Bank detail comparison flags shared accounts. Address matching identifies applications from the same physical location. Email and phone matching catches shared contact details. Narrative comparison measures text similarity at the paragraph level.

The sophistication lies in scoring. A system that flags every partial match would generate an unmanageable number of alerts. Instead, matches are scored based on the number and type of matching fields. Two applications sharing a name and bank account score much higher than two applications sharing only a postcode. Thresholds can be configured to match the funder's risk appetite — a small community foundation distributing £50,000 might accept a higher threshold than a government department distributing £50 million.
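Here is a minimal sketch of that scoring, using Python's standard-library `difflib` for the fuzzy name comparison. The field weights and alert threshold are illustrative; a real system would tune them against the funder's own data.

```python
from difflib import SequenceMatcher

# Illustrative weights: a shared bank account matters far more than a shared postcode
FIELD_WEIGHTS = {"bank_account": 5, "email": 3, "phone": 2, "address": 2, "postcode": 1}

def normalise_name(name):
    """Crude normalisation so "St. Mary's" and "Saint Marys" compare equal."""
    cleaned = "".join(ch for ch in name.lower() if ch.isalnum() or ch == " ")
    return cleaned.replace("saint", "st")

def duplicate_score(app_a, app_b, name_weight=4, name_cutoff=0.85):
    """Weighted match score between two application records (dicts)."""
    score = sum(weight for field, weight in FIELD_WEIGHTS.items()
                if app_a.get(field) and app_a.get(field) == app_b.get(field))
    similarity = SequenceMatcher(None, normalise_name(app_a["name"]),
                                 normalise_name(app_b["name"])).ratio()
    if similarity >= name_cutoff:
        score += name_weight
    return score

ALERT_THRESHOLD = 6  # configurable to match the funder's risk appetite
```

With these numbers, two applications sharing a name and bank account score 9 and are flagged, while two applications sharing only a postcode score 1 and pass.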

When a potential duplicate is identified, the system presents both records side by side so an officer can determine whether they represent a genuine duplicate, a legitimate linked application (such as a consortium), or a false positive. The officer's decision is recorded, creating an audit trail that demonstrates proportionate investigation.

For funders managing large portfolios, duplicate detection also works across time. An organisation that was rejected for misrepresentation in a previous round should be flagged if it reapplies — but only flagged, not automatically rejected. Circumstances change, and a previous issue may have been resolved.

What Does Proportionate Fraud Prevention Look Like?

Proportionate fraud prevention means matching the level of scrutiny to the level of risk. A £2,000 award to a well-known local charity that has been funded successfully for five years does not require the same depth of checking as a £200,000 award to an organisation applying for the first time.

The Charity Commission's updated fraud guidance, published during Charity Fraud Awareness Week in November 2024, emphasises this principle. Trustees have a duty to protect charitable resources, but that duty must be balanced against the cost and burden of the controls themselves. A disproportionate checking regime wastes resources that could be distributed to beneficiaries and creates barriers that deter legitimate applicants.

AI supports proportionality by enabling tiered checking. Low-value, low-risk applications can receive automated checks — registration verification, sanctions screening, and basic duplicate detection — with results reviewed in bulk. Higher-value or higher-risk applications trigger additional checks: reputation screening, detailed budget analysis, and narrative comparison. The highest-risk cases receive full manual investigation supported by AI-generated briefings.

| Risk tier | Grant value | Applicant history | Checks applied | Review method |
| --- | --- | --- | --- | --- |
| Low | Under £10,000 | Known and previously funded | Registration verification, sanctions screening, duplicate check | Automated with bulk review |
| Medium | £10,000–£100,000 | Known but not recently funded, or first-time applicant | All low-tier checks plus budget analysis, narrative comparison, and Charity Commission data cross-reference | Individual officer review of AI-generated summary |
| High | Over £100,000 or any flagged application | First-time applicant, flagged signals, or complex structure | All medium-tier checks plus reputation screening, adverse media search, and financial document analysis | Senior officer review with AI-generated briefing |
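The table translates directly into a small assignment function. This is a sketch under the thresholds above; the rule that any flag forces the high tier comes from the table's final row.

```python
def risk_tier(grant_value, previously_funded, flagged):
    """Assign a checking tier using the thresholds in the table above."""
    if flagged or grant_value > 100_000:
        return "high"    # reputation screening, adverse media, document analysis
    if grant_value < 10_000 and previously_funded:
        return "low"     # registration, sanctions, duplicate check, bulk review
    return "medium"      # adds budget analysis and narrative comparison

assert risk_tier(5_000, previously_funded=True, flagged=False) == "low"
assert risk_tier(50_000, previously_funded=False, flagged=False) == "medium"
assert risk_tier(5_000, previously_funded=True, flagged=True) == "high"
```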

This tiered approach means that the majority of applications pass through quickly, resources are concentrated where risk is highest, and every applicant receives a consistent baseline of checking. It also means that the funder can demonstrate to regulators and stakeholders that it has reasonable fraud prevention procedures in place — an increasingly important consideration under the Economic Crime and Corporate Transparency Act.

How Should Funders Handle False Positives?

False positives — legitimate applications flagged as potentially fraudulent — are the biggest operational risk of any fraud detection system. In financial services, traditional rule-based fraud detection systems generate false positive rates of 90% or higher, meaning nine out of ten alerts are innocent transactions (Softjourn, 2024). AI-based systems significantly reduce this, but false positives will still occur.

The consequences of poorly managed false positives are serious. An applicant flagged for investigation faces delays, additional requests for information, and potential stigma. If the funder's process lacks transparency, the applicant may never know why their application stalled — or may assume discrimination. For funders committed to equity and trust-based approaches, this is a genuine tension.

Best practice for managing false positives includes several principles. First, never auto-reject on signals alone. Every flag should be reviewed by a human before any action is taken. Second, give applicants the opportunity to clarify. If a signal relates to a data mismatch, ask the applicant to explain before drawing conclusions. Many mismatches have innocent explanations. Third, track false positive rates over time. If a particular signal generates alerts in 50 cases but only 2 turn out to be genuine concerns, the threshold needs adjusting. Fourth, make the process transparent. Applicants should know that automated checks are part of the process, what types of checks are run, and how they can raise concerns.

Recording the outcome of every investigation — whether the flag was confirmed, unconfirmed, or a false positive — creates a feedback loop that improves the system over time. Early iterations of any fraud detection system will generate more noise than signal. The value compounds as the system learns which patterns are genuinely predictive and which are not.
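The feedback loop itself is straightforward to implement. A sketch, assuming each reviewed flag is recorded as a (signal type, confirmed) pair; the precision and volume cut-offs are illustrative.

```python
from collections import Counter

def signal_precision(outcomes):
    """`outcomes` is a list of (signal_type, confirmed: bool) pairs recorded
    by reviewing officers. Returns the fraction of alerts per signal type
    that turned out to be genuine concerns."""
    alerts, confirmed = Counter(), Counter()
    for signal_type, was_confirmed in outcomes:
        alerts[signal_type] += 1
        if was_confirmed:
            confirmed[signal_type] += 1
    return {signal: confirmed[signal] / alerts[signal] for signal in alerts}

def needs_retuning(outcomes, min_precision=0.1, min_alerts=20):
    """Signals with enough volume but low precision (e.g. 2 confirmed out of
    50 alerts) are candidates for raising the alert threshold."""
    precision = signal_precision(outcomes)
    alerts = Counter(signal for signal, _ in outcomes)
    return [signal for signal, p in precision.items()
            if alerts[signal] >= min_alerts and p < min_precision]
```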

How Can Funders Share Risk Intelligence Responsibly?

Fraud rarely targets a single funder. Organised fraud operations submit applications to multiple funders simultaneously, and an organisation rejected by one funder for misrepresentation may apply to another the following week with no record of the previous issue. Responsible information sharing between funders can break this pattern — but it must be done lawfully and fairly.

The legal framework for sharing risk intelligence in the UK is established but constrained. UK GDPR permits data sharing where there is a legitimate interest, provided the sharing is necessary, proportionate, and the rights of data subjects are considered. In practice, this means funders can share risk signals — such as confirmed fraud cases and patterns of concern — but should avoid sharing unverified suspicions or personal data beyond what is necessary.

Several models exist for responsible collaboration. Prevent Charity Fraud, a sector initiative, maintains resources and guidance for charities and funders. Some funder networks operate shared red-flag libraries: anonymised descriptions of fraud patterns and methods that help members recognise similar approaches. Others share information about confirmed cases through formal data-sharing agreements with clear governance, retention periods, and subject access provisions.

AI supports this collaboration by enabling pattern matching across datasets without necessarily sharing raw applicant data. Two funders could, in principle, compare application fingerprints — hashed representations of key fields — to identify overlaps without either funder seeing the other's applicant details. This approach is technically feasible but requires trust, governance, and agreed standards that the sector is still developing.
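A sketch of the fingerprinting principle using keyed hashes follows; the shared key, the field normalisation, and the surrounding governance are all assumptions. The point is only that matching hashes reveal an overlap without either party exchanging raw values.

```python
import hashlib
import hmac

def fingerprint(value, shared_key):
    """Keyed hash of a normalised field value. If two funders use the same
    agreed key, matching fingerprints reveal an overlap without either party
    disclosing the underlying bank details or addresses."""
    normalised = "".join(value.lower().split())
    return hmac.new(shared_key, normalised.encode(), hashlib.sha256).hexdigest()

# Hypothetical shared secret; a real arrangement needs key management
KEY = b"agreed-between-funders"

funder_a = {fingerprint("12-34-56 00012345", KEY)}
funder_b = {fingerprint("12-34-56 00012345", KEY),
            fingerprint("65-43-21 99998888", KEY)}

overlaps = funder_a & funder_b  # one shared bank account, no raw data exchanged
print(len(overlaps))            # 1
```

A real deployment would also need to guard against brute-forcing of low-entropy fields by whoever holds the key, which is part of the governance the sector is still developing.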

The key principle is that shared intelligence should improve protection for charitable resources and the public, not create a blacklist that prevents organisations from ever accessing funding after a single flag. Due process matters as much in funder collaboration as it does in individual decision-making.

How Does Plinth Support Fraud Detection for Funders?

Tools like Plinth bring several of these capabilities together in a single grant management platform, making AI-assisted fraud detection accessible to funders of all sizes — not just those with dedicated fraud teams.

Plinth's AI-powered reputation checking searches public sources for reputational risks associated with applicant organisations and key individuals. The system generates targeted search queries, retrieves and analyses web content, assesses relevance to the specific applicant, and produces a structured risk report with source links. Each result is categorised by risk level — no risk, low, medium, high, or unknown — and the overall assessment is summarised for the reviewing officer. The entire process is recorded, creating a complete audit trail of what was searched, what was found, and how it was assessed.

For duplicate detection, Plinth's built-in matching compares new records against existing data using configurable fields including name, email, phone, date of birth, and postcode. Matches are scored and presented to the officer with both records side by side, so they can make an informed decision about whether a genuine duplicate exists. The matching sensitivity can be configured to reflect the funder's risk appetite and the characteristics of their applicant base.

These capabilities sit within Plinth's broader grant management workflow, which includes automated registration checks against the Charity Commission and Companies House, AI-assisted application assessment, and structured monitoring and reporting. Because fraud detection is integrated into the same platform as application management, signals are visible in context — an officer reviewing an application can see reputation check results, duplicate flags, and registration data alongside the application itself.

Plinth offers a free tier, making these tools available to smaller funders who may not have the budget for enterprise fraud detection systems but still need to demonstrate reasonable prevention procedures.

FAQs

Does AI fraud detection criminalise applicants?

No. AI generates signals for human review, not verdicts. A flag means an application warrants a closer look, not that the applicant has committed fraud. The reviewing officer decides what action to take, and the applicant should always have the opportunity to clarify discrepancies before any adverse decision is made.

Can small funders afford AI-powered fraud detection?

Yes. Platforms like Plinth include fraud detection capabilities within their standard grant management tools, with a free tier available. The cost of not detecting fraud — in financial losses, regulatory exposure, and reputational damage — typically far exceeds the cost of proportionate automated checks.

What about false positives — will legitimate applicants be unfairly flagged?

False positives are inevitable in any detection system, but they can be managed. Best practice includes never auto-rejecting on signals alone, giving applicants the chance to clarify, tracking false positive rates over time, and adjusting thresholds as the system learns. The goal is to reduce false positives progressively while maintaining detection of genuine risks.

Is it lawful to run automated checks on applicants under UK GDPR?

Yes, provided the checks are proportionate, have a legitimate interest basis, and applicants are informed that automated processing forms part of the assessment. Funders should document their lawful basis in their data protection impact assessment and include information about automated checks in their privacy notice.

What should a funder do when AI flags a potential fraud risk?

Follow the same process as any other concern: investigate proportionately, gather evidence, give the applicant an opportunity to respond, and document the outcome. The AI flag is the starting point, not the conclusion. If the investigation confirms a genuine concern, follow your organisation's fraud response policy and consider whether a serious incident report to the Charity Commission is required.

Does the "failure to prevent fraud" offence apply to funders?

The offence under the Economic Crime and Corporate Transparency Act 2023 applies to large organisations meeting specific size thresholds. However, the Charity Commission expects all charities to have reasonable fraud prevention procedures regardless of size. Having documented, proportionate checks — even basic automated ones — demonstrates good governance.

Can funders share fraud intelligence with each other?

Yes, where lawful and proportionate. UK GDPR permits sharing where there is a legitimate interest, but funders should use formal data-sharing agreements, share only what is necessary, and avoid circulating unverified suspicions. Sector initiatives like Prevent Charity Fraud provide frameworks for responsible collaboration.

How does AI fraud detection differ from traditional due diligence?

Traditional due diligence checks individual applicants against specific registers and requirements. AI-assisted fraud detection adds pattern analysis across the full dataset — identifying duplicates, anomalies, and connections that would be invisible when reviewing applications one at a time. The two approaches are complementary, not alternatives.

Last updated: February 2026