During its 21 years in existence – and notwithstanding all its incredibly important work during that time – the Medicines and Healthcare products Regulatory Agency has rarely been among the most high-profile parts of government.
That all changed during the latter half of 2020. By 2 December, the MHRA was ready to issue its first approval for a Covid-19 vaccine and, within a week, the first jab had been administered and the UK had become the first country in the world to start a mass programme of immunisation.
The efforts of the medical regulator were a crucial part of the delivery of a vaccine rollout that was widely praised for its effectiveness and efficiency. But the initiative also brought more negative attention. In July 2020, ministers and security authorities accused Kremlin-backed cybercriminals of launching “despicable” attacks on various organisations involved in the development and delivery of vaccines. It seems they were far from the only ones.
Claire Harrison, chief digital and technology officer at the MHRA, tells PublicTechnology that “around the time of the vaccine rollout, MHRA was regularly in the top five or even top three targets” in the country for cyberattackers. Even in an ordinary month, the organisation – like many government agencies – frequently attracts the attention of hostile actors, facing an average of 25,000 cyber alerts.
It is no surprise, then, that cyber is one of three “strategic themes” running through the watchdog’s digital and technology work. It sits alongside tackling legacy tech and pursuing innovation and, according to Harrison, the three areas work symbiotically. “You can have all the AI and cutting-edge technology you want – and we’re doing lots of great work in that space – but cyber then needs to not only keep up with it, but needs to be in front of emerging tech to be able to protect the organisation,” she says.
To this end – and with hostile actors deploying automation to industrialise their attacks – the MHRA is experimenting with using AI to help it stay one step ahead. The regulator is currently engaged in exploring about 20 use cases for AI technology, around one in three of which relate to boosting cyber defences. “It’s around identifying threats really, really quickly, and finding our vulnerabilities really quickly as well, because we’re trying to counteract the way that AI is used [against] us,” Harrison says.
Risks and rewards
Across all of the MHRA’s investigations of AI, the digital chief says there is a “spectrum of risk” for potential use cases, broadly split into three categories: productivity, decision support and decision-making. The first of these largely comprises generic uses that could be applied to the back-office functions and other processes of any organisation, in any sector.
At the opposite end of the spectrum are entirely autonomous decision-making use cases, which the MHRA is not currently exploring. Harrison describes this kind of use as “the scary stuff” that the vast majority of organisations – and not just those in the necessarily risk-averse world of regulators – would currently steer clear of.
“But in the middle we have decision support,” she adds. “That is about teaching machines to make decisions and using different types of AI and other technologies to enable that – but all the time, or at regular points, there’s human interaction. And then the human makes the final decision – but based on the evidence that’s been gathered and assimilated with the help of AI.”
One such use case currently being explored is in the MHRA’s enforcement operations to crack down on counterfeit medicines and products, where automated technology could help detect such operations across the vast online marketplace.
Harrison says: “We work with the National Crime Agency and local authorities, but we’re forever trying to find and then shut down dodgy websites – that’s quite an arduous task, especially manually, when it’s just humans who are looking and reporting and dealing with it.”
Another possible deployment under examination is in the watchdog’s work to assess the approximately 1,000 applications that it receives in a typical year to conduct clinical trials for new medical products.
The MHRA is exploring how AI could be used to scour historic applications and detect common factors in those that were refused – a ruling known as a grounds for non-acceptance (GNA) decision. This intelligence could then be passed on to assessors to support their decisions – particularly in cases where there is a clear and simple reason to issue a GNA ruling.
“A human would decide, ultimately, but a lot of the work that’s been done to that point [can be performed by] machines,” Harrison says. “The benefits there are not only the time it saves, but the consistency as well.” She adds: “It’s not going to make anyone redundant, but it could help staff by enabling them to focus on the really difficult cases that are extremely complex and do require a lot more human intervention.”
Legacy and learning
The last of the regulator’s three key digital themes – tackling legacy – goes beyond ripping out and replacing old systems, according to Harrison.
“Whenever we’ve got legacy tech, legacy ways of working, legacy commercial agreements, all of those are blockers,” she says. “It’s a continuous programme, because what we need to do as well is make sure that we’re not creating the legacy tech of the future. So, we are really careful with our tech strategies and our tech selection and who selects our tech, but also the way that it’s maintained and the budget for it, and the interoperability aspects.”
To support all of its technology objectives, the MHRA – and Harrison herself – have focused in recent years on engaging with other government agencies, as well as counterparts overseas. Among those with which the regulator has joined forces is Swissmedic – Switzerland’s equivalent to the MHRA. The two parties are engaged in a “continuous collaboration”, including a joint hackathon with regulators from around the globe, taking place a few days after PublicTechnology speaks to Harrison, in which “our teams will be sitting down together and coding together”.
The two organisations have advanced in different areas – Swissmedic, for example, is further ahead in its use of AI to tackle counterfeit products, while the MHRA has made gains in using tech to help assess planned clinical trials – meaning the partnership is “expediting learning for each of us”, according to Harrison. The digital and tech chief sits at the head of a workforce of about 120 people – and she hopes to make the case to grow this figure.
Any new recruits will join a team led by someone whose career experience as a “really hands-on techie” continues to manifest both in her professional life and beyond. She says: “I still love technology and I still do it in my spare time for my friends – I fix their laptops and I create websites for them, for example: I start with a blank notepad and type all my coding – I’m old school: I don’t use WordPress or anything! I continue to be fascinated by the art of the possible – which can be terrifying, but exhilarating as well. It’s still massively interesting, and it’s still the place to be.”
This article originally appeared in the autumn issue of CSW's quarterly magazine