CDDO publishes new generative AI guidance for civil servants

Expanded document suggests no-go areas for tech and doubles the number of “principles” to guide officials

By Jim Dunton

19 Jan 2024

The Central Digital and Data Office has unveiled updated cross-civil-service guidance on the use of generative AI – including a list of no-go areas where deployment should be avoided because of current limitations with the technology.

The Generative AI framework for HM Government expands on initial guidance on generative AI published by the Cabinet Office body in July last year, and includes “practical considerations” for officials planning or developing generative AI solutions.

It also includes 10 principles that civil servants should follow to “guide the safe, responsible and effective use of generative AI in government organisations”. The figure is double the number of principles set out in March 2023’s cross-sector A Pro-Innovation Approach to AI Regulation white paper, published by the Department for Science, Innovation and Technology and the Office for Artificial Intelligence.

The principles include commitments to understand and actively learn about generative AI and its limitations; to ensure there is “meaningful human control” at appropriate stages; and to work with commercial colleagues from the start.

The 74-page guidance repeatedly stresses the limitations of so-called large language models, of which ChatGPT is the most famous example. It lists several areas of government work that “should be avoided” because of current limitations of the technology.

In particular, it identifies “fully automated decision-making”, including any significant decisions involving health and safety; decisions that could harm people’s fundamental rights or the environment; and situations in which extremely rapid responses are required.

Other areas to avoid include situations in which there is limited data for generative AI to work with. “The performance of generative AI depends on large quantities of training data,” the guidance states. “Systems that have been trained on limited quantities of data, for example in specialist areas using legal or medical terminology, may produce skewed or inaccurate results.”

The guidance urges civil servants to be “led by business needs and user needs” when considering the deployment of generative AI, and suggests focusing on use cases that can only be addressed by generative AI or those in which it offers “significant advantages” over existing methods.

It says the most promising use cases at present are likely to be supporting digital enquiries, analysing correspondence or voice calls, producing first drafts of documents and correspondence, and aiding software development.

Government chief technology officer David Knott said in his foreword to the guidance that generative AI had “the potential to unlock significant productivity benefits” for departments.

“This framework aims to help readers understand generative AI, to guide anyone building generative AI solutions, and, most importantly, to lay out what must be taken into account to use generative AI safely and responsibly,” he said.

“This framework differs from other technology guidance we have produced: it is necessarily incomplete and dynamic. It is incomplete because the field of generative AI is developing rapidly and best practice in many areas has not yet emerged. It is dynamic because we will update it frequently as we learn more from the experience of using generative AI across government, industry and society.”

Knott added that expected updates would include “deeper dive sections to share patterns, techniques and emerging best practice”. He cited prompt engineering as one example.
