By Tevye Markson

07 Mar 2022

The Declaration on Government Reform was trumpeted as a plan to rewire and renew government. But how well has government delivered on its promise to improve the way it evaluates policy?


Trumpeted as a plan to rewire and renew government, the Declaration on Government Reform was agreed in June 2021 after the first-ever meeting of all departmental permanent secretaries and cabinet ministers. The joint agreement between government’s political and civil service leaders heralded – according to then-Cabinet Office minister Michael Gove – “a unity of resolve that we need to see these changes through”.

The document included 30 actions to be completed by the end of 2021, grouped under three headings: people, partnership, performance.

In this new mini-series, CSW will be exploring progress against actions set out in the DGR, starting with work to improve how government evaluates policy. Next month, we’ll be assessing more actions under the performance and partnership headings, and in April we’ll consider what is being done to get the right people with the right skills into government, and ensure they are properly supported.

The government’s 2021 reform plan set out the ambition for civil servants to be “creative and imaginative in problem-solving and policy formulation” as well as “rigorous in welcoming evaluation and scrutiny.”

But government is not good at evaluating what it does – a recent National Audit Office report found that just 8% of spending on major government projects is evaluated robustly, while 64% is not evaluated at all.

And that’s just major projects – there are many other policy areas where poor or non-existent review processes mean that government isn’t learning from successes and failures.

Why does government struggle to evaluate policies, and what is it doing to improve this? The Declaration on Government Reform included a specific action to create a new Evaluation Task Force alongside a revamped delivery unit to monitor progress.

CSW spoke to the Institute for Government to find out more and heard from officials who are at the centre of these attempts to improve evaluation.

Why evaluation is important

Evaluation allows the government to understand if projects are working and to demonstrate accountability for its use of public money.

It can help policymakers decide whether initiatives should be continued, expanded, improved, targeted in different ways, or stopped altogether.

Central government guidance makes it clear that departments should evaluate projects comprehensively.

What makes good evaluation?

Finding out what is and what isn’t working is the most important thing, Catherine Hutchinson, head of the new Evaluation Task Force, told a recent CSW webinar – held in partnership with Oracle to discuss making outcome delivery plans a success.

Jemma Fisher, deputy director in the Cabinet Office’s strategy, planning and performance team, added: “Sometimes people feel if the evaluation shows that something wasn’t effective, that isn’t as good as [evaluation] showing it is [effective]. In actual fact, a finding that shows that you should stop or modify is as good, because you can reroute public money and spend it in a much more effective way.”

Sian Jones, NAO director of value for money for public service, said seeing the government abandon more projects would be a good sign, and that she hopes the new task force will lead to “much more honest discussion about reprioritisation”.

“That’s a much more healthy place to be than trying to deliver despite unsuccessful piloting or projects and programmes,” she said.

Evaluation struggles and a renewed focus on outcomes

An NAO report on evaluation, released in December, found that the government has made progress towards using evidence better and improving value for money, but its use of evaluation “continues to be variable and inconsistent” and it has been “slow to address known barriers to improvement”.

Another longstanding criticism is that government has focused too much on inputs (such as the number of GPs) and outputs (the number of GP appointments available), and not enough on real-world outcomes (such as patient satisfaction).

There was previously too much focus on the money going into the system and not enough on “what we get as a result of the money”, Fran Sims, deputy director of the public value unit in the Treasury, told the CSW webinar.

Since 2017, the government has taken a series of steps to prioritise outcomes in the way it makes spending decisions, analyses performance and evaluates actions.

First came the creation of the public value framework, spearheaded by Tony Blair’s onetime Delivery Unit chief and “deliverology” expert Sir Michael Barber, which set out how departments should maximise real-world outcomes.

The next big step was the introduction of priority outcomes in 2020, which asked departments bidding in spending reviews to explain the real-world impact the spending would have.

Outcome delivery plans were then introduced last year, replacing the old single departmental plans. ODPs take the agreed outcomes, which are updated in each spending review, and ask departments to set out how they will deliver them.

Throughout the year, the Cabinet Office is then able to monitor the extent to which real-world outcomes are being achieved.

ODPs: How good are they?

Describing the aim of ODPs, Fisher said: “Ultimately, the government spends billions of pounds, if not trillions of pounds, every year. And there is a common saying that our goals become wishes without a robust plan. And so, in essence, ODPs are exactly that: they are just a plan for how departments intend to spend taxpayers’ money, and give us a way of driving transparency and accountability across government.”

She later made the point that the government has “put much more of an onus on more frequent and transparent sharing of information between departments”, calling this “quite a big shift and culture change from Single Departmental Plans”.

The IfG’s latest Whitehall Monitor report says ODPs are an improvement on SDPs.

SDPs laid out detailed lists of policies and actions that departments planned to take, but often failed to connect these to the outcomes they wished to achieve, or give metrics by which success could be measured, the IfG says.

The new ODPs place outcomes and metrics front and centre, often listing multiple metrics for each outcome.

The IfG welcomes the focus on real-world impact, but says outcomes are “frequently too vague or high-level, and often disconnected from the metrics”.

The think tank says the new framework is “promising but does not yet amount to a full step forward in government performance management or transparency”.

It adds that the government needs to set more specific targets for the metrics it is using to measure outcomes, share more information about how ODPs will be used to track performance, and commit to sharing more performance data with the public.

“I think the shift towards an outcome-focused approach is the right one,” Rhys Clyne, a senior researcher at the IfG, says.

“That will hopefully help to address the problems that government has long been criticised for – namely losing focus on real-world impact. But the job isn’t finished.”

Clyne says there are also inconsistencies between the different departments.

“They don’t all use metrics in the same consistent way. Some of them use them more than others, for example.”

One example of this, highlighted by the IfG, is the DWP’s aim to “improve opportunities for all through work, including groups that are currently underrepresented in the workforce”, which is measured solely through the disability employment-rate gap.

As well as outcomes for each department, the government has also set cross-cutting outcomes, where several departments are responsible for achieving an outcome, such as levelling up. 

Clyne says the use of cross-cutting outcomes is an improvement on the previous system, but there are inconsistencies in how departments describe and recognise their role in achieving them, and there is a long way to go for the government to “shore up the coordination” of cross-cutting outcomes.

Training departments and monitoring evaluation

The Cabinet Office set out its plans to improve evaluation in last year’s Declaration on Government Reform, which included aims to “set up an Evaluation Task Force to ensure consistent high quality impact evaluation and transparency and a refreshed delivery unit to drive progress on the government’s headline priorities” by the end of 2021.

This action had, in fact, already been completed: the task force was set up in spring 2021, alongside a £15m evaluation accelerator fund to improve the government’s understanding of the impact of its activity in key policy areas such as net zero and levelling up.

The ETF offers departments training in good evaluation methodology and provides templates to help them evaluate their own work, so that departments get better at evaluation themselves rather than relying on the centre of government.

It also assesses departments’ evaluation performance and advises the Treasury on how to best target expenditure to have the biggest impact.

The task force was integral in informing the 2021 Spending Review, including detailed involvement in bids and departmental settlements, CSW understands.

It has also been heavily involved in supporting departments to develop new policies, ensuring that potential impacts are comprehensively thought through and that robust evaluation strategies are in place to understand the effect policies are having.

The task force is also producing a strategy that will reflect on past challenges and future opportunities for the What Works Network, which was established in 2013 and has informed many policy decisions.

The refreshed No.10 Delivery Unit is currently focused on five policy priorities: levelling up; net zero; education, jobs and skills; health; and crime and justice.

Clyne says the unit’s evaluation role could include tracking the delivery of key priority outcomes in a select few areas and then helping departments to make progress on them.

Impact has always been the goal of government policies, and in recent decades numerous attempts have been made to measure it. If these rushed new initiatives seem somewhat like reinventing the wheel, then maybe they are. But perhaps thinking in government has finally turned a corner. Watch this space.

This story was published in CSW's February 2022 issue, which you can read in full here.
