The Elephant in the R&D Room: What AI Does to Technological Uncertainty
Right now, companies across the UK are filing R&D tax relief claims for work carried out in 2024 and 2025. Those claims will be assessed against the technological landscape that existed when the work was done.
But work being done now, in 2026, sits in a fundamentally different reality: one where AI can often match or exceed the output of a competent professional. The legislation and guidance have not caught up.
The Definition That's About to Implode
The statutory test for R&D qualifying activities requires that a project seeks to achieve an advance in science or technology through the resolution of scientific or technological uncertainty.
Paragraph 13 of the DSIT Guidelines (March 2024) states:
"Scientific or technological uncertainty exists when knowledge of whether something is scientifically possible or technologically feasible, or how to achieve it in practice, is not readily available or deducible by a competent professional working in the field."
The key phrase is:
"not readily available or deducible by a competent professional"
In practice, this has meant:
- If a competent professional could work out the answer from existing knowledge, it is not R&D.
- It is simply competent engineering, even if it is difficult, time‑consuming, or commercially risky.
This test has underpinned every R&D claim since relief was introduced. AI is now straining it to breaking point.
What Happens When AI Becomes the Competent Professional?
In February 2026, you can describe a detailed technical problem to a frontier AI model and receive an answer that:
- Is structured and technically coherent
- Synthesises published literature, standards, and case studies
- Often matches or exceeds what a human competent professional could produce
A Concrete Example
2023 scenario:
A software company develops a novel approach to real‑time data synchronisation across distributed systems with sub‑millisecond latency. The specific combination of techniques is not documented. A competent professional, relying on published sources and their own experience, cannot readily deduce the solution.
Result: there is a credible argument that technological uncertainty exists. The work may qualify as R&D.
2026 scenario:
Describe the same problem to a state‑of‑the‑art AI model:
- You receive a detailed architecture
- You get code snippets, trade‑off analysis, and references
- You get it in under 30 seconds
Was that knowledge "readily available" in 2023? Arguably not.
Is it readily available in 2026? For many such problems, yes. It is available to anyone with an internet connection and a prompt.
This is the collision point between AI capability and the statutory test.
The Questions HMRC Hasn't Answered
1. Is AI consultation now a baseline expectation?
If a company claims technological uncertainty, a natural HMRC question becomes:
"Did you ask an AI first?"
If a well‑prompted AI query could have produced a viable solution, can you still say the knowledge was not readily available?
The DSIT Guidelines focus on availability of knowledge. An AI response that arrives in seconds is, by any common‑sense reading, readily available.
2. What is a "competent professional" in 2026?
The DSIT Guidelines and HMRC’s CIRD manual (CIRD81900 onwards) define a competent professional by reference to:
- Qualifications
- Experience
- Knowledge of the field
But in 2026, a competent professional typically:
- Uses AI tools for literature review, design options, and calculations
- Leverages AI to explore solution spaces that would previously have taken weeks
If the augmented competent professional — human plus AI — can deduce the answer, does uncertainty still exist?
If HMRC (or the Tribunal) adopt that view, the technological baseline shifts sharply upwards.
3. Does AI‑generated knowledge count as "publicly available"?
AI does not create new scientific truth in this context. It:
- Ingests publicly available data
- Learns statistical relationships
- Synthesises plausible solutions
If an AI can infer a solution, one argument is:
- The solution was always deducible from public information
- We simply lacked the tools (or time) to perform the deduction
Under that logic, many problems we previously treated as uncertain may, with hindsight, never have met the objective test.
The Case Law Problem
The First‑tier Tribunal has consistently treated technological uncertainty as an objective test.
- It does not matter that your engineers did not know the answer.
- It matters whether a competent professional could have known or deduced it.
Quinn (London) Limited v HMRC [2024] UKFTT 00117 (TC)
In Quinn, the Tribunal held that the work did not qualify because the solutions were within the existing competence of professionals in the field. The company’s own difficulties were irrelevant once it was shown that the knowledge existed.
Transposed into 2026:
- If an AI can generate a working solution, HMRC may argue that a competent professional, properly equipped, could have done the same.
- The fact that the claimant did not use AI, or did not know how to prompt it effectively, may be treated as a subjective limitation — and therefore irrelevant.
Gripple Limited v HMRC [2010] UKFTT 232 (TC)
Gripple emphasised that uncertainty must concern whether something is achievable in a general sense, not whether a particular company can achieve it.
AI significantly expands what is achievable "in a general sense" because:
- It lowers the barrier to synthesising complex solutions
- It compresses years of literature review into minutes
The more capable AI becomes, the narrower the space where genuine technological uncertainty can be said to exist.
What This Means for Your 2026 R&D Projects
The practical implication is stark:
R&D work carried out in 2026 will, sooner or later, be assessed against a technological baseline that includes AI capabilities.
R&D tax relief is not dead, but the bar for uncertainty is rising.
What Still Likely Qualifies
- Genuinely frontier research
  - New materials, novel physical phenomena, or mechanisms with little or no published data
  - Areas where AI has limited or no relevant training data
- Physical implementation uncertainty
  - AI can propose a design, but manufacturing tolerances, real‑world operating conditions, and safety, reliability, and compliance requirements still demand experimental work to determine whether the solution actually works in practice.
- Complex, context‑specific integration challenges
  - Highly bespoke legacy environments
  - Proprietary systems and constraints that AI cannot fully model
  - Interactions that can only be understood through testing in your specific environment
- R&D at the boundaries of AI itself
  - Developing new AI architectures, training methods, or safety mechanisms
  - Pushing beyond the capabilities of existing models
What Is Increasingly Vulnerable to Challenge
- "Novel algorithms" in well‑trodden domains
  - If a frontier AI can generate a comparable or better algorithm on demand, HMRC may argue that the solution was deducible.
- Undocumented combinations of known techniques
  - The fact that a specific combination is not written up in a paper may no longer be enough.
  - If the combination flows naturally from established principles that AI can synthesise, uncertainty is harder to argue.
- Claims based on internal difficulty rather than objective uncertainty
  - "Our engineers couldn’t find a solution" will carry less weight if:
    - A competent professional using AI tools could have found one, and
    - HMRC or an expert witness can demonstrate this.
HMRC's Silence on AI
DSIT updated its Guidelines in March 2024, and HMRC revised the CIRD manual (including CIRD81900 onwards). Those documents:
- Discuss competent professionals
- Discuss technological baselines and information availability
- Refer to industry standards, journals, and other public sources
They do not address AI‑generated knowledge.
In Thomas Elsbury v The Information Commissioner, HMRC declined to confirm whether they use AI in enquiries. More broadly, they have not:
- Stated whether they expect claimants to use AI tools
- Explained how AI affects the assessment of "readily available" knowledge
- Clarified whether AI‑synthesised solutions will be treated as evidence that uncertainty did not exist
This silence leaves claimants exposed to retrospective reinterpretation.
What Needs to Happen
The R&D framework needs explicit, AI‑aware clarification. At a minimum:
1. Guidance on AI Consultation
HMRC and DSIT should state clearly:
- Whether consulting AI tools is now part of reasonable due diligence for a competent professional
- Whether failure to use AI can undermine a claim
- Whether claimants are expected to use state‑of‑the‑art models, or merely commercially reasonable tools
2. A Temporal Anchor for "Readily Available"
Guidance should confirm that:
- The test of "readily available or deducible" is applied at the time the work was undertaken
- Later AI capabilities cannot retrospectively erase uncertainty that genuinely existed at project inception
3. Recognition That AI Synthesis ≠ Human Capability
Policy needs to distinguish between:
- What an AI can synthesise when given a perfectly framed prompt, and
- What a competent professional, acting reasonably, would have known to ask or explore
Often, the real uncertainty lies in:
- How to frame the problem
- Which constraints matter
- Which trade‑offs are acceptable
An AI’s ability to output a solution once perfectly prompted does not automatically mean that solution was practically deducible by humans at the time.
4. Safe Harbours for Documented Uncertainty
To provide certainty for claimants, HMRC could introduce safe harbours where:
- The company can show contemporaneous documentation that:
  - Identifies the technological uncertainties at project start
  - Records the state of knowledge and tools (including AI) consulted
  - Logs why available tools, including AI, did not resolve the uncertainty
Where such evidence exists, later AI advances should not, by themselves, invalidate the claim.
What You Should Do for 2026‑Onwards Projects
If you are planning or conducting R&D now, with claims to be filed in 2027 or 2028, you should assume that AI will be part of the objective baseline against which HMRC assesses uncertainty.
Practical Steps
- Explicitly consider AI at project inception
  - Record which AI tools you used
  - Capture the prompts and outputs
  - Note where AI failed to provide a workable or reliable solution
- Document the technological baseline
  - Identify relevant standards, papers, and known techniques
  - Explain why these did not resolve the uncertainty
  - Show that you went beyond a superficial search, including AI‑assisted searches where appropriate
- Separate conceptual and implementation uncertainty
  - If AI can suggest a concept, focus your claim on:
    - Whether it was technically feasible to implement
    - Whether performance, reliability, or safety targets could be met in practice
- Maintain contemporaneous records
  - Design logs, experiment notes, test results
  - Internal technical reviews and decision papers
  - Evidence of failed approaches, including AI‑generated ones
- Align your "competent professional" with reality
  - Ensure your internal competent professionals are, in fact, using modern tools
  - Be prepared to justify that their approach reflects current professional standards in 2026, not 2015
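None of this dictates a particular system. As one illustration only, the record-keeping steps above could be captured in a simple append-only log. This is a minimal sketch, not a compliance tool: the function name, field names, and JSON-lines format are all assumptions, and any real implementation should follow your own document-retention practices.

```python
import json
from datetime import datetime, timezone

def log_ai_consultation(path, tool, prompt, output_summary, resolved, notes):
    """Append a contemporaneous record of an AI consultation to a JSON-lines log.

    Each entry is timestamped so it can later evidence the state of
    knowledge and tools at the time the work was undertaken.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                      # model name and version consulted
        "prompt": prompt,                  # the exact prompt used
        "output_summary": output_summary,  # what the tool produced
        "uncertainty_resolved": resolved,  # did the output resolve the uncertainty?
        "notes": notes,                    # why it did or did not work in practice
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

The append-only, timestamped format matters more than the tooling: it is the contemporaneity of the record, not its sophistication, that supports a later claim.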
The Bottom Line
Claims for 2024–2025 work are, in many cases, defensible on the basis that AI capabilities were more limited and less embedded in professional practice.
For work undertaken in 2026 and beyond:
- Assume AI is part of the objective technological environment
- Expect HMRC and the Tribunals, sooner or later, to treat AI‑accessible solutions as evidence that knowledge was "readily available or deducible"
- Do not rely on "we didn’t know AI could do that" as a defence
If you want your future R&D claims to stand up, you will need to:
- Acknowledge the elephant in the room
- Show that you used AI intelligently
- Demonstrate that, even with AI, genuine scientific or technological uncertainty remained
Legislative and case references:
- Corporation Tax Act 2009 (CTA 2009) s.1042 – definition of R&D for tax purposes
- DSIT Guidelines for R&D Tax Credits (March 2024), paragraph 13 – definition of scientific or technological uncertainty
- HMRC Corporate Intangibles Research and Development Manual, CIRD81900 onwards – competent professional test, technological baseline
- Quinn (London) Limited v HMRC [2024] UKFTT 00117 (TC)
- Gripple Limited v HMRC [2010] UKFTT 232 (TC)
- Thomas Elsbury v The Information Commissioner – Find Case Law, The National Archives
Need Help with Your R&D Claim?
Get expert advice from Innovation Plus. Our team is ready to help you maximise your R&D tax relief.