The government has been warned not to let "big players" in the tech industry "dictate" how its artificial intelligence policy should look.
Alex Davies-Jones, shadow minister for digital, culture, media and sport, has called for urgent regulation in response to a damning new report by the Ada Lovelace Institute.
The research institute’s report, titled ‘Regulating AI in the UK’, was published on Tuesday and identified a number of ways in which government proposals could fail to ensure AI is used safely across business and society.
Prime minister Rishi Sunak has vowed to make the UK a “science superpower”, and the government set out its pro-innovation vision for AI in a white paper in June. MPs have previously warned the government against “sleep-walking” into a dangerous position with AI if it does not urgently regulate. While the white paper set out “guidelines” for the use of AI, it did not propose any new legislation; instead, the government intends to rely on existing regulatory frameworks to mitigate potential AI harms.
The findings of the new report suggest that existing frameworks will not be enough to protect people from harm, and that large technology companies have too much power in the development of government policy on the subject.
The government is keen to portray itself as a world leader on AI: the foreign secretary chaired the first UN meeting on AI last Tuesday, a cabinet minister gave a speech on adapting the civil service to modern technologies on Wednesday, and in September, the UK will host the first global summit on AI to discuss how it can be developed and adopted safely.
Davies-Jones told PoliticsHome she was concerned that “big players” in the tech sector would be “dictating” discussions at the summit and having too much influence over government policy.
“Those involved in this summit need to be people from civil society; it has to be researchers, academics, those who speak for the country and for people,” she said.
“If we haven't got those voices represented, then of course it's going to be skewed, of course it's just going to be the big players dictating what the regulations should be within their favour; not what's best for the public, not what's best for society at large.”
Speaking at a webinar to launch the report, Matt Davies, UK public policy lead at the Ada Lovelace Institute, said industry players were exerting heavy influence on government policy.
“The government is largely reliant on external expertise from industry in order to understand these systems: both trends in the industry as a whole, and specific systems and specific models,” he said.
“Obviously, dialogue with industry is always going to be an important component of effective governance, but we think there's some risks in over-optimising regulation to the needs and perspectives of incumbent industry players.”
Experts set out a number of recommendations in the Ada Lovelace Institute report for the government’s regulation of AI, including three main tests relating to coverage, capability and urgency: how well protections extend across the different areas where AI is deployed, how well resourced regulators are to carry out their duties in relation to AI, and how urgently the government responds to accelerating AI adoption.
Recommendations included that the government should explore the value of establishing an ‘AI ombudsman’ to support people affected by AI, introduce a statutory duty for regulators to comply with the government's principles on AI, increase funding to regulators for responding to AI-related harms, and create formal channels to allow civil society organisations to meaningfully feed into future regulatory processes – to ensure it is not only tech corporations that are able to do so.
To develop the report, legal experts used example scenarios to demonstrate how existing legal frameworks might fail to identify and tackle bias or discrimination carried out by AI.
Alex Lawrence-Archer, a data rights lawyer who contributed to the Ada Lovelace report, explained that some sectors, such as employment and recruitment, have no dedicated regulator, and that many others are covered by a patchwork of different regulators that lack the resources to carry out their duties.
Although some AI harms in the workplace, for example, might be covered by existing laws such as GDPR and the Equality Act, he argued that there are insufficient structures in place to alert victims that illegality has taken place and to enforce protections.
“Something merely being technically unlawful does not provide effective protection to individuals from AI harms, particularly as they multiply and change,” Lawrence-Archer said.
“You need regulatory requirements about what controllers and decision makers are allowed to do and those requirements need to be enforced by strong regulators who have powers and are willing to use them.
“You need rights of redress and, importantly, you need avenues in which to enforce those rights that are realistic for ordinary people.”
Conservative MP and former justice secretary Robert Buckland welcomed the findings of the report, and told PoliticsHome there are still a number of potential legal issues surrounding the advancement of AI.
“It's vitally important that the government doesn't end up having just a dialogue with the larger players in this field,” the former barrister said.
“[The report] is edging on a really important point on transparency: If you are seeking the judicial review of a government decision at the moment, there will be a traceable line and a minister deciding things; you can work out how the decision was made. But if the decision was made by machine, how do you review something that is inscrutable?
“That's why I think setting up principles about transparency of the decision-making process is going to be very important if we are to maintain that accountability; it has to underlie the processes of democratic government.”
Davies-Jones also expressed her frustration that the government is not proposing new legislation to cover AI, and said she will push for it to take “urgent” action.
She accused the government of “talking a good game” but doing little to actually protect people.
“The Internet Watch Foundation [has] highlighted the first ever child sexual abuse imagery created by AI,” Davies-Jones continued.
“We've got all these issues around deep fakes, fraud, scams, the threat to democracy, everything… There needs to be some sort of regulation put in place urgently.
“We can't be on the back foot with this because if we are too late to get to grips with it, then it will run away with us and we will be too slow yet again. The speed and urgency with which we're talking about tackling some of these issues isn't actually met with parliamentary action, and that's what's frustrating.”
A government spokesperson said: “As set out in our AI White Paper, our approach to regulation is proportionate and adaptable, allowing us to manage the risks posed by AI whilst harnessing the enormous benefits the technology brings.
“We have been clear that we will adapt the regulatory framework and take targeted further intervention where we see risks that need to be mitigated.
“We have also announced £100m in initial funding to establish the Foundation Model Taskforce to strengthen UK capability and ensure safety and reliability in the specific area of foundation models, and will host the first major global summit later this year, which will allow us to agree targeted, rapid, internationally co-ordinated action to allow us to safely realise AI’s enormous opportunities.”
This story first appeared on CSW's sister publication PoliticsHome