Some frontline professionals reluctant to use AI tools, research finds

Ada Lovelace Institute paper also points to the need for proper evaluation of AI interventions

By Susan Allott

14 Mar 2025


The Ada Lovelace Institute has warned of a lack of trust in AI in the public sector, in a briefing paper published today. 

The briefing, which brings together the findings of more than 30 reports and research publications from six years of work on the use of AI in the public sector, says that AI will only be accepted by the public sector and the wider community if it interacts with “existing social systems, values and trust”.

The paper sets out a number of concerns which could “ultimately hinder adoption and use” of AI tools, undermining goals to transform public services. 

Hesitance on the frontline

In a key finding, the briefing paper says that some frontline professionals have been reluctant to use AI tools “where they do not see them as defensible and legitimate”.

The paper says effective use of AI requires "public licence", and "moving out of step with public comfort can undermine the ability for the public sector to effectively use AI".

"Public backlash has led to people withdrawing from data sharing or hampered the use of existing tools" and led to public servants abandoning AI projects, it warns. 

Public servants told the institute that, after teenagers protested against the algorithm used to assign A-level results during the pandemic, a number of other AI projects were shelved in response to the backlash.

The paper also found a “lack of evidence for the effectiveness and impact of AI tools, even from a purely technical standpoint,” which the report describes as “surprising”.

To address this, the briefing points to the need for proper evaluation of AI interventions and says this is "crucial to determining their performance and value compared to existing manual or traditional methods".

Data deficiency

The report highlights the fact that “AI is only as good as the data underpinning it”.

Data is often partial and “can encode existing social inequalities”, it says. Without sufficiently complete and relevant data, AI cannot solve the problems it is deployed to address, the institute added.

Those using AI need to “mitigate limitations in data collection”, the report suggests. These limitations include “the underrepresentation of digitally excluded and marginalised groups” and “structural inconsistencies in how data is recorded, collected and understood”.

Procurement problems

The institute identified a lack of knowledge across government and public services of where AI is being used. “The public sector’s understanding of its own AI usage is severely lacking, which hinders both democratic accountability and internal knowledge sharing,” it says.

To address this, the report calls for “transparency in procurement processes”, pointing out that existing public sector procurement of AI is “not fit for purpose”. Processes around transparency and fairness are not effective, it says, but are vital for “avoiding vendor lock-in”.

The report advises that “public sector procurers need consistent guidance and terminology to help them buy AI well”.

Governance gaps

Because of gaps in governance, the public sector cannot ensure that AI tools are safe, effective and fair, the research paper finds.

Public service leaders “need more assurance and governance mechanisms so they can procure, deploy and use AI tools with confidence”, it says.

This would bring AI into line with “other regulated sectors such as pharmaceuticals” and would create a more competitive market for AI tools, the report suggests. This in turn would allow the public sector to “access more reliable and cost-effective tools for its services”, the institute added.

Reimagining the state

The authors of the report see AI “not as an opportunity to automate the public sector, but to reimagine it”. Public service transformation should be “grounded in public and professional legitimacy” which could enable fundamental service redesign, “placing the citizen at the centre of public service delivery”, they argue.

This reimagining of the state through the rollout of AI is an opportunity for public sector leaders, the briefing says. But it will be essential to think “beyond the technology” and avoid “focusing solely on immediate efficiency gains or automating the status quo”, the authors added.
