When the UK government signed a memorandum of understanding with OpenAI, ministers hailed it as a landmark partnership that would harness artificial intelligence to “address society’s greatest challenges.” It made headlines. It generated press releases. It signalled that Britain was serious about AI in government.
Eight months later, a Freedom of Information request revealed the inconvenient truth: the government had not undertaken a single trial under the memorandum.
Not one.
What the FoI Actually Found
The FoI request was filed by Tarek Nseir, founder of Valliance, an AI consultancy. He asked the Department for Science, Innovation and Technology (DSIT) for information about any trials conducted under the OpenAI memorandum, which had explicitly stated that the company would work with civil servants to “identify opportunities for how advanced AI models can be deployed throughout government and the private sector.”
DSIT’s response was stark: it held none of this information and had “not undertaken any trials under the memorandum of understanding with OpenAI.”
When the Guardian pressed DSIT for comment, the department pointed to a separate agreement under which the Ministry of Justice last October enabled civil servants to use ChatGPT “with an option for UK-based data storage.” Nseir’s response to that was precise: “We use PowerPoint — that doesn’t mean we have a strategic relationship with Microsoft. If this was the intent of the MoU then our government is not taking the impact of AI on our economy seriously.”
The Gap Between Announcement and Action
This is a pattern, not a one-off. The UK government has signed similar high-profile AI memoranda with Anthropic, Nvidia, and others. The language in each of them is expansive — transforming “how people live, learn, work, and access public services,” creating “a powerful tool to drive productivity, accelerate discovery, and create opportunity.”
The gap between that language and actual deployment is significant, and it matters for reasons beyond political embarrassment. Every month that government AI deployment stalls is a month where the productivity gains AI could deliver to public services — faster case processing, better resource allocation, improved service delivery — go unrealised. In a period of fiscal constraint and service pressure, that is not an abstract cost.
Matt Davies, economic and social policy lead at the Ada Lovelace Institute, identified a deeper structural problem: “Voluntary partnerships with big AI companies don’t follow the usual procurement rules, raising real questions about accountability and scrutiny. The memorandum with OpenAI doesn’t clearly explain how progress will be measured or how it will deliver public benefit, and the risks of lock-in — becoming dependent on a company’s product and services — aren’t addressed anywhere.”
He added that in polling, 84% of respondents said they were concerned about the government putting the AI sector’s interests ahead of protecting the public.
Stargate UK: The Bigger Story
The FoI revelation sits alongside a separate Guardian investigation finding that Nscale — which promised to build the UK’s largest AI supercomputer by end of 2026 as part of the Stargate UK initiative — will almost certainly not complete the project on time, and has publicly misrepresented its progress on the site.
Nscale is also supposed to collaborate with OpenAI on deploying 8,000 Nvidia chips to sites across the UK, a project that had previously been suggested would happen “this quarter.” When the Guardian contacted OpenAI about progress on this deployment, the company said it had “nothing to share.”
Taken together: the government’s flagship AI partnership has produced no trials. Its flagship AI infrastructure project is behind schedule and misrepresenting its progress. And the government’s response to all of this is to announce a £500 million Sovereign AI Fund launching April 16.
The fund may well be a good idea. But its credibility depends entirely on execution — the same thing the OpenAI MoU promised and has not yet delivered.
What This Means in Practice
There is a legitimate version of UK government AI strategy that works: careful, staged deployment with proper accountability frameworks, protecting public data, avoiding vendor lock-in, building capability that serves citizens rather than shareholders. The Ada Lovelace Institute’s concerns are not anti-AI — they are pro-governance.
The problem is that “careful and staged” has curdled into “announce loudly, act slowly.” The MoU with OpenAI was not careful and staged — it was a press opportunity dressed as a strategic commitment. And the public, the tech sector, and Britain’s international competitors are all watching to see whether the current government’s AI ambitions are real or rhetorical.
DSIT insists the work is “active, ongoing and focused on delivering real results.” OpenAI says its UK activities extend beyond the FoI’s scope. Both statements may be true. But eight months after a headline-generating partnership announcement, the evidence of results is thin.
The April 16 launch of the Sovereign AI Fund will be the next announcement. What happens in the months that follow will be the real test.
Sources
- The Guardian: UK government yet to trial OpenAI tech months after signing partnership
- The Guardian: Datacentre boom — is the UK AI bubble about to burst?
- Gov.uk: Memorandum of Understanding between UK and Anthropic
- Sifted: How many government AI initiatives is too many?
- Think Digital Partners: UK public sector cautious on AI productivity gains despite investment
