DCMS body flags need for artificial-intelligence work to be audited

AI assurance will help build trust in technology, says Centre for Data Ethics and Innovation

By CSW staff

24 Dec 2021

The UK should develop a competitive market in assuring artificial intelligence, both to help people and organisations trust the technology and to capture the economic benefits of a new industry, according to a document published by the Centre for Data Ethics and Innovation.

The roadmap, which sets out what will be required to develop an AI assurance industry, said that third-party auditors could assess, test and verify a software developer's work, then provide users with independent information on its trustworthiness. It compared this to financial audits, food safety checks and air safety regulations, all of which allow people to feel confident about processes they cannot check themselves.

A business innovation survey carried out by the CDEI, part of the Department for Digital, Culture, Media and Sport, found that more than a fifth of organisations planning to introduce AI see regulation and their legal responsibilities as a barrier to its introduction. Without trust, the document said, people will be reluctant to accept AI-based products and services or to share the data needed to make them work.

“A similar approach to auditing or 'kitemarking' in other sectors will be needed to enable businesses, consumers and regulators to know whether the AI systems are effective, trustworthy and legal,” wrote Chris Philp, minister for technology and the digital economy, in his foreword. “Building on the UK’s strengths in the professional services and technology sectors, AI assurance will also become a significant economic activity in its own right, with the potential for the UK to be a global leader in a new multi-billion pound industry.”

The CDEI said that regulators play an important role in encouraging responsible innovation, citing the Financial Conduct Authority’s establishment of open banking rules to support innovative fintech services and the approach of the Medicines and Healthcare products Regulatory Agency (MHRA) to regulating medical devices, which has allowed emerging technologies to be developed and used.

Regulators are already working on AI, the document added, with the MHRA updating its regulations to cover software and AI in medical devices, and the Information Commissioner’s Office developing an AI auditing framework and, with the Alan Turing Institute, publishing guidance on explaining decisions made with AI.

The CDEI said it will support its parent department in setting up an AI standards hub, convene an AI assurance accreditation forum and partner with organisations including the Centre for Connected and Autonomous Vehicles and the Recruitment and Employment Confederation. It also published an initial version of an AI assurance guide to help professionals in the field.
