Just yesterday in parliament, DSIT secretary of state Peter Kyle told us: “Adopting AI across health, education and policing could boost productivity by almost £24bn a year. If we fail to do so, the benefits of AI could become the preserve of the privileged few.”
This recognition of AI’s potential to dramatically transform Whitehall, increasing efficiency and cutting costs, is welcome – other governments and the private sector have been reaping the benefits of using AI for some time.
And given the prime minister’s focus on “doing what is necessary to repair the public finances”, deploying AI at scale is an urgent component of making the numbers add up.
Yet as Reform’s new report, Getting the machine learning, shows, the new government are unlikely to achieve this without a radically different approach.
One interviewee we spoke to commented that AI is seen as a nice-to-have in government, and that officials “pilot AI in [their] team because it makes [them] cool, not because it’ll do anything”. From its inception in 2011, the Government Digital Service ushered in the last great wave of digital adoption in government. Regaining that momentum will require the same laser focus and central leadership.
To provide this, a Government Data and AI Service (GDAIS) should be established within CDDO, with responsibility for driving AI adoption across the public sector. GDAIS would provide clear ownership and accountability for scaling AI – something which interviewees from inside and outside of Whitehall commented was sorely lacking. The recent move of public sector technology policy to DSIT presents a welcome opportunity to reset this.
Achieving a transformative impact with AI requires investment. The chancellor should use the upcoming Budget to announce a £1bn AI fund, overseen by GDAIS. Estimates place the potential savings from AI adoption across the public sector between £4.8bn and £40bn a year.
Crucially, the fund should only be used for six proven productivity-boosting uses – for example, process automation and business planning – and the bulk of the money should be allocated to scale-up.
Civil servants consistently told us that too many AI projects were stuck at pilot stage, but never had the resources to ramp up to full delivery. One said: “Companies win a tender to pilot software, it works well, the buyer in government says ‘thank you very much, we’ll let you know when we next put out a call’, and they never hear back.” Ending this “pilotitis” by committing to automatically fund projects which meet success criteria at pilot stage, rather than requiring long and complicated bidding processes, is the only way to secure those financial returns.
Success also rests on having the specialist technical skills to deliver AI projects – and the civil service simply doesn’t pay enough to hire the right people. One interviewee for the paper told us that “in some DDaT functions, the vacancy rates are 30-40%, and the majority of roles that are filled are filled by contractors rather than staff”, and in 2023 the Central Digital and Data Office reported that 37% of government recruitment campaigns for digital roles were unsuccessful.
"As former permanent secretary John Kingman put the issue of civil service pay, 'there is only so far you can stretch the elastic'"
In the private sector, AI engineer salaries can be hundreds of thousands of pounds more than in the public sector. Nobody expects government to compete with some of the most profitable businesses in the world, but as former permanent secretary John Kingman put it on the issue of civil service pay, “there is only so far you can stretch the elastic.” It is imperative, therefore, that the government establishes a new pay framework for AI roles.
Better leadership, funding and specialist skills must be accompanied by clear guidance that civil servants working on AI projects in departments can follow. AI guidance exists, but it is simultaneously too complicated – running to hundreds of pages in total – and lacking specifics. A simple set of principles would be more in keeping with the approaches taken by other governments, for example the eight principles which underpin the White House’s Executive Order on AI.
And at the same time, government must rethink its approach to risk. Rarely are the risks of not using AI or automation considered, despite the potential improvements to public services. Human decision-making is often flawed (as can be seen in asylum and benefit appeals), and one interviewee said: “Take the Horizon IT system. There were plenty of humans in the loop. It still failed terribly.” Holding AI to a standard of perfect performance is unrealistic when existing public services so often fall short.
AI is not a silver bullet for all the challenges the state faces. But as public services continue to struggle with rising demand and growing backlogs, there is no excuse for failing to deploy and scale technologies proven to cut costs and improve outcomes.
Sean Eke is a researcher at the Reform think tank