MPs press DWP to reveal impact of AI on benefit claims

PAC disputes department's suggestion that publishing data on the impact of algorithms would pose a fraud risk

By Sam Trendall

10 Apr 2024

MPs have pressed the Department for Work and Pensions for data on the impact of its use of artificial intelligence on benefit claimants – and disputed the department’s claim that releasing such information could help fraudsters.

In recent years, the department has grown its use of algorithms and machine learning in the processing of benefit claims – with a particular focus on using technology to help detect potential fraud. Civil society groups have criticised the opacity of these deployments to date and, in its report on DWP’s annual accounts for the 2023 fiscal year, the Public Accounts Committee said that “there are legitimate concerns about the level of transparency around DWP’s use of these tools and the potential impact on claimants who are vulnerable or from protected groups”.

MPs on the committee recommended that, in future annual reports, the department should “consider explicitly the impact of data analytics and machine learning on legitimate claims being delayed or reduced, the number of people affected, and whether this is affecting specific groups of people”.

The MPs’ report, which was released in December, added: “DWP has not made it clear to the public how many of the millions of Universal Credit advances claims have been subject to review by an algorithm. Nor has it yet made any assessment of the impact of data analytics on protected groups and vulnerable claimants; though we acknowledge it has recently committed to provide such an assessment in next year’s annual report.

"Although DWP has internal governance arrangements over its use of machine learning and performs some ongoing analysis of bias, the results so far have been largely inconclusive.”

As part of a Treasury Minute published in February, DWP said in its response that it disagreed with the committee’s recommendation to release detail on instances of legitimate benefit claims being impacted by the use of algorithms. The department did reiterate an earlier commitment to provide some additional “annual assessments… [of] impacts on customer service”.

But the response added: “While the department is committed to providing information as set out, it must not compromise its ability to tackle fraud and error by revealing details about its models that could be exploited. On that basis, the department disagrees with the committee’s recommendation detailing specific metrics for publication.”

PAC chair and Labour MP Meg Hillier has, in turn, responded to the department and, in a letter to permanent secretary Peter Schofield, disputed the suggestion that providing the requested data would present an additional fraud risk.

“You disagreed with the committee’s recommendation that you report some key metrics about the impact of data analytics and machine learning on claimants,” she wrote.

“You said that doing [so] would risk compromising your ability to tackle fraud by revealing details about your techniques that could be exploited by fraudsters. I am not convinced that this is inevitable. For example, you could publish high-level figures, such as the numbers of people with delayed or reduced claims, without revealing any detail about how your machine learning models deal with specific types of claimant.”

The DWP chief has been asked to respond to the committee by 19 April.
