'You have to use the technology yourself': How to build AI-ready governments and teams


By Robyn Scott

31 Jul 2024


“If you want to get something funded these days,” a senior civil servant recently told me, “just make sure you include AI somewhere.” I won’t mention the government in question. It doesn’t matter: I have heard versions of this sentiment from senior government leaders around the world – in rich and developing countries, in right-wing and left-wing administrations.

Governments know AI is not going anywhere. And they know that any government standing still risks at best being a loser (as others seize the opportunities) and at worst being a victim (of the risks of a powerful technology unskilfully applied). Just last week, the new UK government launched an AI action plan as part of its missions-based approach to challenges, including economic growth. Science secretary Peter Kyle emphasised "putting AI at the heart" of improving public services to support mission delivery.

But beyond the need to engage with this new technology, most government leaders are unsure where to start. And surprisingly few of them take the easiest and perhaps most critical first step – to actually use the technology themselves. This can be done quickly and safely and is essential for leadership in a world being rewired by artificial intelligence. AI’s powers can’t be fully understood in the abstract: you have to use the technology yourself. Yet I’ve lost count of the number of government leaders I know who’ve not once used a generative AI chatbot. I even spoke to one leader who’d launched a whole AI lab without ever using the technology – relying instead on looking over their teenager’s shoulder to see the tech at work.  

Last year Apolitical, the organisation I founded and run, launched a Government AI Campus to close the AI skills gap for leaders and their teams, and to help them build what we call AI-ready governments. From our work so far (training more than 13,000 public officials around the world), a body of best practice is emerging. Five recommendations for leaders stand out because they apply very broadly – whether your government is AI cautious or an AI cheerleader, and whether you’re running a whole civil service, a ministry, or a smaller team.

"AI’s powers can’t be fully understood in the abstract: you have to use the technology yourself"

Leaders first

Most government leaders know they need to harness AI – and to do so, they must use it themselves. Better users are better leaders, better buyers, and better regulators. Our polling shows that 59% of all public servants have experimented with AI in their work, but that figure falls to about 10-20% at leadership level. Leaders should complement hands-on use of the technology with more formal training, upskilling on emerging best practice and the frameworks for thinking about AI.

Provide clear, adaptive guidance

In our global polling, while around 60% of public servants have experimented with generative AI technologies, only 35% have received any guidance. As a result, cautious public servants are not using the technology where they could, and the more impatient are using it where they should not. Governments are spending comparatively more time on high-level regulations, and less on guidance that cascades from those lofty legislative levels into frameworks that can be applied at the level of teams and individuals. When the guidance does come, it is often of the one-size-fits-all sort. And this usually means it is designed to protect against the applications that pose the greatest risks, which constrains innovation around low-risk applications.

Design for experimentation and evolution

Guidance needs to be complemented by a plan for experimentation. The field is evolving so quickly that planning and application need to become part of one iterative and continuous process. These plans can also involve sandboxes – including regulatory sandboxes – for experimentation with high-risk but potentially high-reward applications of the technology. Leaders should support their teams to take smart, judicious risks.

Build AI-capable teams

In our latest polling, only 15% of public servants globally report having received any training in AI. This gap must be closed urgently to seize AI’s opportunities and manage its risks. Doing so may involve hiring experts, but existing teams must also be upskilled – fast and at scale. Skilful government teams are particularly critical for successful partnerships with private sector vendors in a world in which there is much snake oil. Importantly, these skills must include a foundation of "bread and butter" digital and data skills where these don’t already exist: AI only works when digital and data work.

Consider and consult the citizen

The success of AI in government will ultimately depend on whether it helps deliver better public services for people and more efficient governments for taxpayers. AI must also be applied at scale in ways that avoid gross costs to citizens in the form of failed AI applications. A sobering (pre-ChatGPT) example comes from the Netherlands, where faulty AI decision-making systems resulted in a huge failure in welfare payments and a years-long scandal that nearly brought down a government. Australia’s consultative approach to how AI should be used is one example of how citizens can be brought into the process early. More broadly, citizens' assemblies offer a powerful model for grappling with the evolving risks and opportunities of the technology.

Robyn Scott is CEO and co-founder of Apolitical. The Apolitical report on Building the AI-Ready Government: An Essential Action Plan for Leaders is out on 31 July
