Common Mistakes in Grant Management (and How to Avoid Them)
The most frequent grant management mistakes funders make, from unclear criteria to disproportionate reporting, with practical fixes backed by sector research.
Grant management in the UK involves over 14,000 grantmakers distributing more than 23 billion pounds each year (UKGrantmaking, 2024). With that volume, even small inefficiencies compound into serious problems: wasted applicant time, inconsistent decisions, compliance gaps, and eroded trust between funders and the organisations they exist to support.
The good news is that most grant management mistakes are predictable and fixable. They tend to cluster around a handful of recurring themes: unclear criteria, manual processes held together by email, disproportionate reporting requirements, weak audit trails, and a lack of meaningful feedback. The 2025 Foundation Practice Rating found that while 64 of the 100 assessed UK foundations scored an A on transparency, 22 scored a D overall, and 21 had no website at all. That gap between best and worst practice represents a real cost, borne disproportionately by the charities that can least afford it.
This guide walks through the most common mistakes funders make at each stage of the grant lifecycle, explains why they matter, and sets out practical steps to fix them. Whether you manage a small family foundation or a large institutional programme, the patterns and remedies are largely the same.
Why do the same mistakes keep recurring?
Grant management mistakes persist because grantmaking is, at its core, a relationship-dependent process that many organisations still run using general-purpose tools never designed for the task. A programme officer managing 200 applications in a shared inbox, scoring them on a spreadsheet, and chasing monitoring reports by email is not making a deliberate choice to be inefficient. They are working within the constraints of what they have been given.
The Association of Charitable Foundations (ACF) has argued that requirements should be stripped back and made proportional to the funding on offer. Yet many funders have inherited processes from predecessors and never revisited them. Forms grow longer as each new trustee adds a question. Reporting templates accumulate requirements from multiple funding rounds without ever being rationalised. The result is a system that serves nobody well.
Three structural factors make these mistakes self-reinforcing. First, funders rarely receive honest feedback from applicants who fear jeopardising future funding. Second, the people designing processes are seldom the people completing them. Third, without data on how long processes actually take or how consistently decisions are made, there is no evidence base for improvement.
Understanding these root causes matters because it shifts the response from blame to design. The question is not "who is at fault?" but "what system changes would prevent this?"
Mistake 1: Unclear or unmapped assessment criteria
The single most damaging mistake in grant management is asking applicants to provide information that does not map to how decisions are actually made. When application forms contain 30 questions but scoring rubrics only reference six of them, applicants waste time on irrelevant detail and assessors lack the structure needed for consistent decisions.
The IVAR Funding Experience Survey of over 1,200 charities found that what matters most to charities is "not wasting their time and giving them as much financial flexibility and stability as possible." Unclear criteria directly violate the first of these priorities.
How this manifests:
- Application questions that do not correspond to any scoring criterion
- Criteria published in vague language such as "demonstrable impact" without defining what evidence is acceptable
- Different assessors interpreting the same criterion differently because no calibration has taken place
- Eligibility requirements buried deep within guidance documents rather than stated upfront
How to fix it:
- Map every application question to a specific scoring criterion. If a question does not feed into a decision, remove it
- Publish plain-English eligibility criteria prominently, ideally with a self-assessment checklist so ineligible applicants can self-select out before investing time
- Calibrate assessors by scoring sample applications together before live assessment begins
- Use structured scoring forms with defined rating scales rather than open-text assessments
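The mapping exercise above can be expressed as a simple consistency check. This is an illustrative sketch, not any particular system's API: it assumes a hypothetical form definition where each question records the criterion it feeds, and flags any question that feeds none.

```python
# Illustrative sketch: flag application questions that feed no scoring criterion.
# The question list and criterion names below are hypothetical examples.

questions = [
    {"id": "Q1", "text": "What outcomes will the project deliver?", "criterion": "impact"},
    {"id": "Q2", "text": "Describe your organisation's history.", "criterion": None},
    {"id": "Q3", "text": "What is the total project budget?", "criterion": "value_for_money"},
]

scoring_criteria = {"impact", "value_for_money", "deliverability"}

def unmapped_questions(questions, criteria):
    """Return questions that do not map to any published scoring criterion."""
    return [
        q for q in questions
        if q["criterion"] is None or q["criterion"] not in criteria
    ]

for q in unmapped_questions(questions, scoring_criteria):
    print(f"{q['id']}: candidate for removal – no scoring criterion ({q['text']})")
```

Run against a real form export, a check like this turns "remove questions that don't feed a decision" from a judgement call into a mechanical review list.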
Tools like Plinth enforce this discipline by design, linking application form questions directly to assessment criteria and ensuring that every question has a purpose in the decision-making process.
Mistake 2: Email-based workflows with no decision trail
Email remains the default workflow tool for a surprising number of funders. Applications arrive in a shared inbox, get forwarded to assessors, and decisions are communicated in reply chains. The problem is not that email cannot handle the volume but that it creates no structured record of how and why decisions were made.
The Charity Commission has found that a significant proportion of compliance assessment cases involve concerns about financial governance (Charity Commission Annual Report 2023-24). When decisions live in email, reconstructing the rationale for a grant award or rejection becomes an exercise in archaeology rather than a straightforward audit query.
What goes wrong:
- No central record of which assessor reviewed which application
- Decision rationale scattered across email threads, meeting notes, and verbal conversations
- Version control problems where different people work from different versions of the same document
- No timestamp trail showing when decisions were made and by whom
What good practice looks like:
- A single system of record where every application, assessment, and decision is logged automatically
- Structured assessment forms that capture the rationale alongside the score
- Automated notifications that replace manual email chasing
- Role-based access so assessors see only the applications assigned to them
The table below compares email-based and system-based approaches across key dimensions:
| Dimension | Email-based process | Dedicated grant management system |
|---|---|---|
| Audit trail | Fragmented across inboxes | Automatic, timestamped, centralised |
| Assessor assignment | Manual forwarding | Bulk assignment with conflict checks |
| Decision logging | Unstructured (email text) | Structured forms with scoring rubrics |
| Status tracking | Manual (spreadsheet or memory) | Real-time dashboards and filters |
| Reporting to trustees | Compiled manually each meeting | Generated from live data |
| GDPR compliance | Difficult to enforce retention policies | Configurable data retention rules |
| Time per funding round | Typically 15-25 hours of admin | Typically 4-8 hours of admin |
Platforms such as Plinth provide automatic audit trails, bulk assessor assignment, and structured decision logging as standard features. Assessors can be invited to score applications through the platform, and every action is recorded without additional effort.
Mistake 3: Disproportionate reporting for small grants
Asking a community group that received 2,000 pounds for a summer play scheme to submit the same monitoring report as an organisation that received 200,000 pounds for a three-year programme is one of the most common and most resented practices in UK grantmaking. It is also one of the easiest to fix.
The ACF's Stronger Foundations initiative has been explicit that reporting requirements should be proportional to the funding on offer. IVAR's research reinforces this: charities reported that their "different demands and requirements are onerous and too often force charities to focus on 'what the funder wants' at the expense of 'what our organisation needs.'"
Signs your reporting is disproportionate:
- The same monitoring form is used regardless of grant size
- Grantees spend more time reporting on a grant than delivering the work it funds
- Reports ask for data that the funder never analyses or references in future decisions
- Grantees with multiple funders must report the same outcomes in different formats to each one
A proportionate approach:
| Grant size | Proportionate reporting | Frequency |
|---|---|---|
| Under 5,000 pounds | Brief update (250-500 words) plus one outcome measure | End of grant only |
| 5,000-25,000 pounds | Short structured report with 3-5 outcome measures | Six-monthly or end of grant |
| 25,000-100,000 pounds | Standard monitoring form with financial summary | Quarterly or six-monthly |
| Over 100,000 pounds | Full narrative and financial report with evidence | Quarterly with mid-term review |
The key principle is that reporting should be proportional to both the grant value and the risk involved. A 5,000-pound grant to an established charity with a strong track record warrants less scrutiny than the same amount to a newly registered organisation.
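The tiering in the table above amounts to a lookup from grant value to reporting requirements. A minimal sketch, with thresholds and labels taken from the table; a real policy would also weigh risk and track record, as the paragraph above notes.

```python
# Illustrative sketch: choose a reporting tier from grant value.
# Thresholds mirror the table above; risk adjustment is deliberately omitted.

def reporting_tier(grant_gbp: float) -> dict:
    if grant_gbp < 5_000:
        return {"report": "brief update (250-500 words)", "frequency": "end of grant"}
    if grant_gbp <= 25_000:
        return {"report": "short structured report", "frequency": "six-monthly or end of grant"}
    if grant_gbp <= 100_000:
        return {"report": "standard monitoring form", "frequency": "quarterly or six-monthly"}
    return {"report": "full narrative and financial report", "frequency": "quarterly with mid-term review"}

print(reporting_tier(2_000)["frequency"])   # end of grant
```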
Plinth supports this by allowing funders to configure different monitoring form templates and schedules for different grant sizes, with automated reminders sent to grantees at the appropriate intervals.
Mistake 4: Failing to manage conflicts of interest systematically
Conflicts of interest in grantmaking are not inherently problematic. Trustees and assessors who work in the sector will inevitably have connections to applicant organisations. The mistake is not having conflicts but failing to manage them transparently.
The Charity Commission requires trustees to identify and manage conflicts of interest as part of their governance duties. A 2022 study in Trusts and Trustees (Oxford Academic) examined charity trustees' understanding of, and confidence in, those duties, including conflict management. The Charity Commission has opened formal inquiries into grant-making charities specifically over conflict of interest concerns; in one case, former trustees of a grant-making charity repaid 650,000 pounds after an inquiry found undisclosed conflicts (Third Sector).
Common failures:
- No formal register of interests for assessors and decision-makers
- Verbal declarations at meetings that are not recorded in minutes
- Assessors reviewing applications from organisations they have a relationship with
- No process for recusal when a conflict is identified
Systematic management:
- Maintain a standing register of interests that is updated at least annually
- Require assessors to declare conflicts at the point of application assignment, not just at meetings
- Automatically prevent assessors from being assigned to applications where a declared conflict exists
- Record every declaration and recusal decision with a timestamp
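The assignment-time check described above can be sketched as a guard in the allocation step. The register structure, names, and log format here are hypothetical illustrations, not any particular platform's implementation.

```python
# Illustrative sketch: block assessor assignment where a declared conflict exists,
# and record every outcome so the audit trail is complete.
# Register maps assessor -> organisations they have declared an interest in.

conflict_register = {
    "a.jones": {"Greenfield Trust"},
    "b.patel": set(),
}

def can_assign(assessor: str, applicant_org: str, register: dict) -> bool:
    """True only if the assessor has no declared conflict with the applicant."""
    return applicant_org not in register.get(assessor, set())

def assign(assessor: str, applicant_org: str, register: dict, audit_log: list) -> bool:
    if not can_assign(assessor, applicant_org, register):
        audit_log.append((assessor, applicant_org, "blocked: declared conflict"))
        return False
    audit_log.append((assessor, applicant_org, "assigned"))
    return True

log = []
assign("a.jones", "Greenfield Trust", conflict_register, log)   # blocked
assign("b.patel", "Greenfield Trust", conflict_register, log)   # assigned
```

The point of logging both outcomes, not just successful assignments, is that a blocked assignment is itself evidence the conflict policy was applied.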
Plinth includes conflict-of-interest declaration workflows within its assessor assignment process. When assessors are assigned to applications, the system prompts for conflict declarations, and all responses are logged as part of the audit trail. Bulk assessor assignment features also support random distribution to reduce the risk of systematic bias.
Mistake 5: Not giving feedback to unsuccessful applicants
Grant success rates in the UK vary enormously, from around 6 per cent at some large foundations to 47 per cent at others (Hinchilla, 2025). That means a large proportion of applicants are declined in every round. How funders handle those rejections shapes both their reputation and the quality of future applications.
The Foundation Practice Rating assesses 100 UK foundations annually on transparency, accountability, and diversity. In 2025, only 8 foundations achieved an A overall, while 22 scored a D (Foundation Practice Rating, 2025). A significant factor in those scores is whether funders provide meaningful feedback to unsuccessful applicants.
Over 140 UK funders have signed up to IVAR's eight commitments to open and trusting grantmaking, which include giving feedback, publishing success rates, and sharing reasons for rejection. Yet many funders still send generic decline letters with no explanation.
Why feedback matters:
- It helps applicants improve future applications, whether to the same funder or others
- It demonstrates respect for the time and effort applicants invested
- It builds trust and transparency in the sector
- It reduces repeat ineligible applications from organisations that did not understand why they were declined
Practical approaches:
- Use structured rejection reasons (such as "outside geographic focus," "budget exceeds available funding," or "insufficient evidence of need") rather than bespoke text for each applicant
- Where AI tools are available, use them to draft personalised feedback based on the specific scoring data, then have a human review before sending
- Publish aggregate data on success rates, common reasons for rejection, and what makes a strong application
Plinth includes AI-assisted feedback generation that drafts personalised rejection letters based on the assessor's scoring and notes. Funders can configure rejection reason templates categorised by type (such as eligibility, budget, or evidence gaps), and the AI uses these to produce constructive, specific feedback that a programme officer can review and send.
Mistake 6: Treating monitoring as a compliance exercise rather than a learning tool
Many funders collect monitoring data because they feel they should, not because they have a clear plan for what they will do with it. Reports arrive, get filed, and are never revisited. The information is not synthesised, not shared with trustees in a usable format, and not fed back into future programme design.
According to NCVO's Road Ahead 2025 report, the UK voluntary sector faces a "big squeeze" with rising costs and increasing demand. In that context, funders who are not learning from their monitoring data are missing an opportunity to direct resources more effectively.
Signs of compliance-only monitoring:
- Reports are collected but never analysed beyond confirming they were submitted
- Monitoring data is not presented to trustees or used in strategic discussions
- The same monitoring questions are asked year after year without reviewing whether they yield useful information
- Grantees report that funders never reference their monitoring reports in subsequent conversations
Monitoring as learning:
- Define upfront what decisions the monitoring data will inform: should the programme be continued, expanded, redesigned, or closed?
- Analyse monitoring data across the portfolio, not just grant by grant. What patterns emerge? Which types of interventions are delivering the strongest outcomes?
- Share anonymised insights back with grantees so they can learn from each other
- Review monitoring forms annually and remove questions that did not generate actionable information
Plinth's monitoring timeline feature allows funders to set up scheduled monitoring requests with different forms and frequencies for different grant sizes. Responses are collected in a structured format that can be analysed across the portfolio, and AI tools can generate summary reports for trustee meetings from the underlying data.
Mistake 7: Inconsistent assessment across reviewers
When multiple assessors score the same application and reach significantly different conclusions, it usually points to a process problem rather than a difference of professional judgement. Without calibration, scoring rubrics, and moderation, individual assessors inevitably apply their own interpretation of what "good" looks like.
Grant applications to many UK foundations surged 30 to 50 per cent in 2023-24, with some seeing applications double (NCVO, 2025). As volumes increase, the pressure to add more assessors grows, but each additional assessor introduces more variability unless the process is designed to manage it.
Common causes of inconsistency:
- Scoring rubrics that use subjective language without anchor descriptions (for example, "strong evidence" without defining what constitutes strong)
- No calibration exercise before assessment begins
- Assessors with different levels of sector knowledge assessing the same applications
- No moderation step to identify and resolve significant scoring discrepancies
Achieving consistency:
- Provide anchor descriptions for each point on the scoring scale. For example, a score of 5 might mean "provides quantitative evidence from a validated tool showing measurable change in the target population," while a score of 2 means "describes intended outcomes but provides no evidence of previous delivery"
- Run a calibration session where all assessors score the same two or three applications, then discuss and align on scoring standards
- Use automated flagging to identify applications where assessor scores diverge by more than a defined threshold, and route those to a moderator
- Track assessor scoring patterns over time to identify systematic leniency or harshness
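The automated flagging step above is essentially a spread check over each application's scores. A minimal sketch, with the divergence threshold as an assumption rather than a recommended value:

```python
# Illustrative sketch: flag applications whose assessor scores diverge
# by more than a chosen threshold, so they can be routed to a moderator.

DIVERGENCE_THRESHOLD = 2  # assumed value: flag if the max-min score gap exceeds 2 points

scores = {
    "APP-001": [4, 5, 4],
    "APP-002": [2, 5, 3],   # gap of 3 -> needs moderation
    "APP-003": [3, 3],
}

def needs_moderation(score_sets: dict, threshold: int) -> list:
    """Return application IDs where assessor scores span more than `threshold`."""
    return [
        app_id for app_id, s in score_sets.items()
        if max(s) - min(s) > threshold
    ]

print(needs_moderation(scores, DIVERGENCE_THRESHOLD))   # ['APP-002']
```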
Plinth supports structured assessment with customisable scoring forms, AI-generated assessment summaries that highlight key application strengths and weaknesses, and the ability to assign applications to multiple assessors with different roles (such as lead assessor and moderator). The platform's assessment processor aggregates scores across assessors and presents them alongside the application data for decision panels.
Mistake 8: Neglecting the grant agreement stage
Many funders invest significant effort in assessment and selection but treat the grant agreement as an afterthought, either using an overly generic template or, worse, making awards without a formal agreement at all. This creates problems when disputes arise, when monitoring expectations are unclear, or when the funder needs to demonstrate that appropriate terms were in place.
Common agreement failures:
- Using the same agreement template regardless of grant size, duration, or risk level
- Agreements that do not specify reporting requirements, timelines, or consequences of non-compliance
- No process for tracking whether agreements have been signed before funds are released
- Agreements drafted in legal language that grantees find impenetrable
Better practice:
- Maintain tiered agreement templates: a simple letter of award for small grants, a standard agreement for mid-range grants, and a full contract for large or complex awards
- Include clear, specific schedules for reporting dates, amounts, and payment milestones
- Use digital signing to create an auditable record of when agreements were executed
- Track agreement status so that no funds are released before a signed agreement is on file
Plinth includes a digital grant agreement workflow where funders and grantees can review and sign agreements within the platform. The system tracks agreement status and can be configured to prevent fund disbursement until signing is complete. Agreement templates can be customised per fund and generated with pre-populated details from the application.
Mistake 9: Operating without data on your own processes
The final and perhaps most fundamental mistake is managing grants without measuring how well your own processes are working. Without data on processing times, assessor workloads, applicant demographics, success rates, and grantee satisfaction, improvement is guesswork.
The Foundation Practice Rating demonstrates this gap: in 2025, only 8 of 100 assessed foundations scored an A overall, suggesting that the majority have room for improvement but may lack the data to know where to start (Foundation Practice Rating, 2025).
What to measure:
- Average time from application submission to decision notification
- Number of applications per assessor per round
- Success rates by applicant type, geography, and grant size
- Completion rates for monitoring reports
- Applicant satisfaction (surveyed anonymously)
- Time spent on administration versus strategic tasks per funding round
How to use the data:
- Set benchmarks and track trends over time, not just absolute numbers
- Share process data with trustees alongside portfolio performance data
- Use the data to justify investment in better tools or additional capacity
- Publish key metrics externally as part of transparency commitments
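Several of the metrics above fall straight out of timestamped records. A sketch assuming each application row carries a submission and a decision date (the dates here are invented for illustration):

```python
# Illustrative sketch: average days from submission to decision,
# computed from timestamped application records.

from datetime import date

applications = [
    {"submitted": date(2025, 1, 6), "decided": date(2025, 2, 17)},
    {"submitted": date(2025, 1, 6), "decided": date(2025, 3, 3)},
    {"submitted": date(2025, 1, 13), "decided": date(2025, 2, 24)},
]

def average_days_to_decision(apps) -> float:
    """Mean elapsed days between submission and decision across applications."""
    gaps = [(a["decided"] - a["submitted"]).days for a in apps]
    return sum(gaps) / len(gaps)

print(round(average_days_to_decision(applications), 1))   # 46.7
```

Tracked round on round, a figure like this turns "we think we are getting faster" into a benchmark trend that can be shared with trustees.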
Plinth provides dashboards that track application volumes, processing times, assessor workloads, and award distributions in real time. The rejection analytics dashboard uses AI to analyse patterns in declined applications, identifying common reasons for rejection and suggesting improvements to guidance or eligibility criteria. This kind of self-reflective data is difficult to generate from spreadsheets and email but is available automatically in a purpose-built system.
Frequently asked questions
What is the single quickest win for improving grant management?
Map every application question to a scoring criterion and remove any question that does not directly inform a decision. This immediately reduces applicant burden and improves assessment consistency. Most funders can complete this exercise in a single working day.
How do we get honest feedback from applicants about our process?
Use anonymous surveys administered by a third party or through your grant management platform. The IVAR Funding Experience Survey methodology, which gathered responses from over 1,200 charities, demonstrates that anonymity is essential for honest feedback. Charities are reluctant to criticise funders they may depend on for future funding.
Do we need to replace our existing systems all at once?
No. Start with the highest-impact change, which is usually moving from email-based assessment to a structured system, and expand from there. Many funders begin by digitising their application and assessment process and add monitoring, agreements, and reporting over subsequent funding rounds. Plinth offers a free tier that allows funders to test the platform before committing.
How do we handle conflicts of interest when our trustees know everyone in the sector?
Conflicts of interest are expected in a well-connected sector. The key is systematic management: maintain a standing register, declare conflicts at the point of assignment, and ensure that conflicted individuals are recused from both assessment and decision-making for affected applications. Document every declaration and recusal decision.
What does proportionate reporting actually look like?
Proportionate reporting matches the depth and frequency of monitoring to the grant size and risk level. A 2,000-pound grant should require no more than a brief end-of-grant update. A 100,000-pound multi-year grant warrants quarterly structured reports. The key test is whether the funder will actually use every piece of information requested. If not, do not ask for it.
How can AI help with grant management without replacing human judgement?
AI is most useful for tasks that are time-consuming but relatively routine: summarising applications for panel meetings, drafting feedback letters from assessor scores, flagging scoring inconsistencies, and generating monitoring report summaries. The decision itself should remain with humans. AI handles the administrative preparation so that programme officers and trustees can focus on strategic judgement.
Is it worth investing in grant management software for a small foundation?
Yes, if you are currently managing more than 20 applications per year using email and spreadsheets. The time savings on administration, the improvement in audit trail quality, and the ability to provide structured feedback to applicants all justify the investment. Plinth's free tier makes this accessible even for foundations with limited budgets.
How often should we review our grant management processes?
At minimum, annually. Conduct a thorough review after each major funding round while the experience is fresh. Survey applicants and assessors, review your process data, and identify one or two specific improvements to implement before the next round. Continuous improvement is more effective than periodic overhauls.
Recommended next pages
- Grant Management Best Practices for Nonprofits and Foundations — Comprehensive best-practice recommendations for efficient, compliant grant programmes
- Audit Trails in Grant Software — Why digital decision records matter for accountability and assurance
- Reducing the Burden on Grant Applicants — How to streamline applications and reporting while maintaining data quality
- How to Give Better Feedback to Applicants — Practical approaches to constructive feedback, even for declined applications
- Managing Conflict of Interest in Grants — Policies and processes for transparent conflict management in grantmaking
Last updated: February 2026