"Imagine telling every public sector recruiter: 'We're going to save you 14 hours per week.' That's not just a cost-saving; it's an efficiency breakthrough," says Matt Burney, senior talent strategy advisor at Indeed. His words capture the double-edged sword organisations face as they increasingly turn to artificial intelligence (AI) and automation to streamline their operations.
On one hand, these technologies may enhance inclusivity by reducing human bias and expanding access to opportunities. On the other, the algorithms powering these systems can inadvertently perpetuate and amplify societal prejudices if not carefully designed and deployed. As the technology advances, organisations must navigate this complex interplay between technology and diversity, equity, inclusion, and belonging (DEIB) efforts with a keen eye towards ethical implementation.
How AI can support inclusion
The potential benefits of AI in enhancing DEIB efforts are manifold. By automating repetitive, high-volume tasks like resume screening and candidate outreach, AI-powered tools can free up recruiters to focus on more personalised, human-centric aspects of the hiring process. This, in turn, can lead to more equitable access to opportunities and a more diverse candidate pool.
"If we start to automate, we can start to think we're giving a better experience to those people who want to come and work in the public sector," says Burney. "AI is driving us to fundamentally rethink and transform the way we engage with both our employees and external stakeholders, ensuring more personalised, efficient, and meaningful interactions."
Burney advocates the use of AI-driven analytics, which can give organisations deeper insights into their workforce demographics, identify areas for improvement, and track the efficacy of DEIB initiatives over time. "This data-driven approach can help organisations make more informed, evidence-based decisions to foster inclusive environments," he explains.
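As a concrete illustration of the kind of analytics Burney describes, the sketch below tallies hiring records by quarter and reports each group's share of hires, so shifts in representation can be tracked across reporting periods. It is a minimal Python sketch: the record format, quarters, and group labels are invented for illustration and stand in for whatever fields a real HR system would supply.

    from collections import Counter, defaultdict

    # Illustrative hiring records: (quarter, demographic group) pairs.
    # Quarters and group labels are hypothetical placeholders.
    hires = [
        ("2024-Q1", "group_a"), ("2024-Q1", "group_b"), ("2024-Q1", "group_a"),
        ("2024-Q2", "group_a"), ("2024-Q2", "group_b"), ("2024-Q2", "group_b"),
        ("2024-Q2", "group_b"),
    ]

    # Count hires per group within each quarter.
    by_quarter = defaultdict(Counter)
    for quarter, group in hires:
        by_quarter[quarter][group] += 1

    # Report each group's share of hires per quarter, so changes in
    # representation can be compared across DEIB reporting periods.
    for quarter in sorted(by_quarter):
        total = sum(by_quarter[quarter].values())
        for group, count in sorted(by_quarter[quarter].items()):
            print(f"{quarter} {group}: {count / total:.0%} of hires")

The same tallies, computed each quarter, give the baseline against which any DEIB initiative's effect can be judged.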
While AI offers new ways to help reduce human bias, its implementation is not without risks. The same technology that enables fairness can also reinforce inequality if used carelessly. Burney cautions that the danger lies in the potential for AI algorithms to perpetuate and amplify existing societal biases, often in subtle ways.
"If you train AI on flawed data, you'll get flawed outcomes," he warns. "The bias may even be more challenging to detect because it's buried in complex algorithms."

Examples of biased AI algorithms have already surfaced in the private sector. In 2018, Amazon abandoned an experimental AI hiring tool after discovering that the system penalised resumes from women because the historical hiring data it was trained on was male-dominated. The incident sparked widespread industry reflection on AI ethics, prompting many companies to rethink how they design and audit AI systems to avoid similar failures.
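In practice, an audit of this kind often starts with something as simple as comparing selection rates across demographic groups. The minimal Python sketch below applies the "four-fifths rule" widely used in employment-selection analysis, flagging any group whose selection rate falls below 80% of the highest group's rate. The group labels and figures are hypothetical; a real audit would draw them from the screening system's own logs.

    # Hypothetical screening outcomes per demographic group:
    # total applicants and the number the model advanced.
    outcomes = {
        "group_a": {"applicants": 400, "advanced": 120},
        "group_b": {"applicants": 300, "advanced": 60},
    }

    # Selection rate = advanced / applicants for each group.
    rates = {g: o["advanced"] / o["applicants"] for g, o in outcomes.items()}
    best = max(rates.values())

    # Four-fifths rule: flag any group whose selection rate falls below
    # 80% of the highest group's rate as a potential adverse-impact signal.
    for group, rate in rates.items():
        ratio = rate / best
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")

A check like this is deliberately crude: it cannot explain why a disparity exists, but run regularly it gives organisations an early, auditable signal that a model's outputs deserve closer human scrutiny.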
The path forward
Burney underscores the importance of robust governance for AI systems in recruitment, advocating for regular testing and monitoring to ensure fairness and transparency. "Embedding accountability measures into AI processes is crucial to prevent unintended consequences and to build trust in these technologies," he explains.
The UK's Equality and Human Rights Commission has highlighted the need for clear guidance on AI use in employment practices. Recommendations include encouraging organisations to disclose how AI-driven decisions are made and to conduct regular reviews that identify and address potential biases in recruitment algorithms.
Burney also suggests that public sector organisations explore collaborative opportunities with universities and research institutes to leverage expert knowledge and innovation in developing inclusive AI systems. "By fostering partnerships and adopting best practices, we can work towards creating AI tools that support fairness and opportunity for all," he adds.
The future of AI-driven inclusion
Despite its risks, AI's potential for driving meaningful change in DEIB efforts remains significant when coupled with the proper safeguards. For Burney, public sector leaders must view AI as a complement to, rather than a replacement for, human judgment. "AI can help level the playing field, but it can't replace empathy, cultural understanding, or lived experience," he says. "Those are human qualities that no algorithm can replicate."
By regulating AI use, investing in ethical tech development, and committing to transparency, government organisations can help ensure that AI serves as a force for inclusion rather than exclusion. As Burney puts it: "The real challenge isn't whether to adopt AI but how to shape its future: one rooted in fairness, equality, and opportunity for all."