AI in recruitment: Should robots be put in charge of hiring?

Experts discuss whether using artificial intelligence to fill job vacancies is a step towards better diversity or simply exacerbates existing bias.

By Sarah Dawood

After applying for a senior strategy role at a leading car brand, Tom Arthur was informed that to progress to the next stage he would need to do an automated video interview within 72 hours, leaving him little time to prepare. This consisted of 30 seconds to read a question followed by two minutes to answer. He failed to make it to the next stage and received no feedback, going through the entire application process without interacting with a human.

“It felt incredibly impersonal and trivialising not being able to speak to someone, especially given a key element of the role was stakeholder management,” he tells Spotlight. “It was like being treated like a number with no real care for people’s emotions or time.”

Arthur’s experience is an extreme one but the use of artificial intelligence (AI) in recruitment is becoming increasingly common. While automated interviews are not yet mainstream, more than 95 per cent of Fortune 500 companies use an applicant tracking system (ATS) – software that sorts and filters job applications. This means that many CVs are never actually seen by a person.

Automation is also used to administer psychometric tests, to run online chatbots that respond to applicants’ questions and to target job adverts at relevant candidates. Some companies have even started using data analysis beyond the recruitment process, to personalise inductions and to match people to internal job opportunities. But to what extent should AI be used to decide people’s career trajectories?

More efficient and targeted recruitment

Leaving hiring decisions to a computer might seem ominous. But incorporating AI into recruitment can create a more seamless experience for both employer and candidate, improving efficiency, saving time and helping to surface the strongest applications.

“Big Four” accounting firm PwC is the largest private sector recruiter of graduates in the UK. Every year, it receives roughly 50,000 applications for 1,200 graduate roles, meaning there are over 40 people competing for every place. It uses automation for application scanning, psychometric testing and video interviews, helping to whittle down candidates, while also removing human error and unconscious bias from the equation, says Rob McCargow, director of artificial intelligence at PwC.

“Otherwise, you’ve got a team of humans pressing the reject button thousands of times a year,” he says. “Fundamentally, that’s inefficient and can be subject to bias and issues such as cognitive dissonance.” Cognitive dissonance is the mental discomfort that can be caused by holding two conflicting opinions – for instance, that diversity is important but also that a certain demographic might be better for the job.

PwC is not alone; many global companies have reported positive results from automation. Consumer goods giant Unilever, which receives a quarter of a million applications for 800 places on its Future Leaders Programme, has saved £1m, cut recruiting time significantly and increased diversity by 16 per cent.

Catherine Breslin, founder of AI consulting company Kingfisher Labs, agrees that ATS, as well as job ads targeted at those with suitable skillsets, not only help employers but also ensure candidates’ applications get seen – albeit by virtual eyes.

“Nowadays, everybody is competing in a much bigger pool,” she says. “As a potential employee, it can be difficult to filter down jobs, and as an employer it can be hard to find interested candidates. It helps people find roles they would not have otherwise come across.”

When combined with other tech, AI can also help to make the recruitment process more palatable and make coveted companies more accessible to those from disadvantaged backgrounds. At PwC’s careers fair, graduates play virtual games that prove they have the problem-solving skills to work there, giving them greater confidence in their abilities. The data from the game is analysed and blended with their existing application to create a “richer picture” of the candidate, says McCargow.

“In theory, [AI tech] can open people’s eyes to opportunities they might have previously pre-qualified out and help them navigate the jobs market more efficiently,” he says.

AI is not exempt from bias

In combination with name-blind recruiting – where identifying details are removed from an application – AI can go some way towards eliminating bias. But automated systems are not exempt from prejudice – they are designed by humans, after all, and trained on historical data, which is not free from societal discrimination.

Even when identifying details are removed, other information on someone’s CV – such as their hobbies – can act as an indicator of their identity. An AI system trained on old data can come to treat typically “male” traits as markers of a “good” candidate, for instance.

Amazon famously had to ditch its AI recruiting tool in 2018 after discovering that it downgraded women, penalising CVs containing details such as “women’s chess club”. The tool had been trained on CVs submitted over a ten-year period, the majority of which came from men.
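For readers who want to see the mechanism, a minimal sketch follows, using entirely invented data, of how a model trained on biased hiring history learns to penalise a proxy feature even when gender itself is withheld. It is an illustration of the principle, not a reconstruction of Amazon’s tool.

```python
# A minimal, hypothetical sketch of how historical bias leaks into a
# screening model. All data here is invented: past hires skew against
# members of a women's club, so that feature becomes a proxy for gender
# even though gender itself is never shown to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# A legitimate signal: years of relevant experience.
experience = rng.normal(5, 2, n)
# A proxy feature: membership of a women's club (about 15% of candidates).
womens_club = rng.random(n) < 0.15

# Simulated historical "hired" labels reflect past bias: at the same level
# of experience, club members were hired less often.
logit = (experience - 5) - 1.5 * womens_club
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train a screening model on the biased history.
X = np.column_stack([experience, womens_club])
model = LogisticRegression().fit(X, hired)

# The learned weight on the proxy feature comes out strongly negative:
# the model now penalises "women's club" on a CV, replicating the bias.
print(dict(zip(["experience", "womens_club"], model.coef_[0])))
```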

This becomes particularly insidious with automated assessment. AI-powered video interviews analyse not only the words people say but also their body language, facial expressions and tone of voice, raising the question of whether an algorithm might come to favour those with a certain accent or appearance. Algorithms have also been skewed by irrelevant information, favouring video interviews filmed against a clean white wall and downgrading those with “cluttered” backdrops.

Incidents like this show that AI is not yet as smart as we assume, and still lacks the contextual judgement we take for granted in humans. This is made more problematic by “automation bias” – the tendency for people to rely too heavily on computerised systems.

“If a computer makes a decision for you, people just tend to trust it,” says Jo Stansfield, founder of Inclusioneering, a company that helps tech businesses improve their diversity and inclusion. “They don’t necessarily think critically about it. For instance, when I do my online shopping, the system might suggest I buy other items. That’s fair enough – but if it’s going to impact people’s careers, that’s a lot more important.”

The tendency for an algorithm to drift away from its intended behaviour as the data it sees changes – known as “model drift” – means that these systems need constant human oversight; otherwise, a company risks a chain reaction that moves gradually further from the desired result.

“As soon as a model is live and deploying, the corpus of data is growing,” says McCargow. “You have to monitor it perpetually, to ensure that it’s not leading to erroneous outcomes and creating potential harm or discrimination. We’ve always got humans in the loop at PwC because we don’t yet believe there is the governance in place to fully liberate these technologies.”
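In practice, that kind of perpetual monitoring can start with something as simple as routinely comparing outcomes across candidate groups. The sketch below is one hypothetical version of such a check; the 0.8 threshold borrows the “four-fifths” rule of thumb from US employment practice, and the group labels and data layout are invented for illustration.

```python
# A rough illustration of the "humans in the loop" monitoring described
# above: periodically compare the system's pass rate across candidate
# groups and flag any group falling behind, so a person can investigate.
# The 0.8 threshold echoes the US "four-fifths" rule of thumb; the group
# labels and data layout are invented for this example.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, passed) pairs from one review window."""
    passed, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        passed[group] += ok
    return {g: passed[g] / total[g] for g in total}

def drift_alerts(decisions, threshold=0.8):
    """Return groups whose pass rate is below `threshold` of the best group's."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

# One review window of screening outcomes.
window = [("group A", True), ("group A", True), ("group A", False),
          ("group B", True), ("group B", False), ("group B", False)]
print(drift_alerts(window))  # ['group B'] -> escalate for human review
```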

Automation and privilege

Ironically, while automated systems are intended to improve diversity, they can have the opposite effect, entrenching privilege: those with access to more resources find it easier to navigate the system.

Saurav Dutt, an author and marketing consultant working in the legal and financial services industries, spent years applying for jobs he was qualified for and not getting an interview. After experimenting with his CV format and speaking to recruitment consultants, he discovered that he was dealing with an ATS and would need to incorporate more search-engine-optimised keywords to be successful – verbs such as “achieved” and “actioned”, and phrases like “project management” and “lead generation”.
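As a rough illustration of what Dutt describes, a keyword filter of this kind could be as blunt as the sketch below. Real ATS products are proprietary and more sophisticated; the keyword list, scoring and cut-off here are guesses for illustration only.

```python
# An illustrative guess, based on Dutt's account, at how a blunt
# keyword-driven ATS filter might score a CV. Real systems are proprietary
# and more sophisticated; the keyword list and cut-off here are invented.
import re

KEYWORDS = {"achieved", "actioned", "project management", "lead generation"}

def keyword_score(cv_text, keywords=KEYWORDS):
    """Count how many of the target keywords appear as whole phrases."""
    text = cv_text.lower()
    return sum(bool(re.search(r"\b" + re.escape(kw) + r"\b", text))
               for kw in keywords)

cv = "Achieved 30% growth through project management and lead generation."
score = keyword_score(cv)
print(score, "-> passes" if score >= 2 else "-> filtered out")  # 3 -> passes
```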

He went on to work at recruitment agency Reed, where he learnt more about how an ATS filters applications. This cumulative experience helped him hone his CV and he saw a big difference in his success rate. “The jobs I’m considered for have now expanded in scope,” Dutt tells Spotlight. “The salaries and the quality of roles have gone up, and I put it down to understanding what the AI systems are looking for.”

But while the system is designed to filter out those without relevant skills, many people without access to this insight are being left behind. In the legal sector, there are even paid-for services where applicants can get optimised answers written for them. “Some people don’t have the money or resources for CV appraisals, yet they are extremely quality candidates,” says Dutt. “So in a sense, [automation] can be unfair because some people with excellent experience are losing out.”

How to tackle potential issues

So how can AI be used ethically? Transparency is essential, says McCargow – companies should state clearly on their websites what parts of the recruitment journey are automated, empowering candidates to make their own decisions around whether they want to apply.

“Any responsible employer should very clearly signpost on their website exactly what technology is being used, and how candidates can challenge the decision-making,” he says. “It’s a basic duty of care.”

Team diversity is crucial, he adds – companies using these systems should ensure teams span a wide range of demographics and disciplines, from data science to HR and mental health.

Businesses also have a responsibility to question their recruitment software providers about the efficacy of their product, says Breslin of Kingfisher Labs – how the system was built, what data sets were used to train and test it, and how someone could challenge a decision it had made. Regularly testing the system against new data can also help prevent an algorithm becoming outdated, and ensure it responds to cultural shifts. 

The future of AI in recruitment

For now, the use of AI in hiring carries risks as well as rewards. While there are huge opportunities, if the tech is not carefully monitored, it can exacerbate inequality rather than tackle it. Strong regulation is essential to ensure it is used safely. The European Commission published its proposed Artificial Intelligence Act this year, which sets out how to mitigate harm from AI while still encouraging innovation. While its scope covers the EU, its effects are expected to extend to much of the world, including the UK, given the global nature of the tech trade.

As automation becomes more sophisticated, with regulation and careful curation it may reach a point where it can be left largely to its own devices and produce desirable results. Its advancement could even make applying for a job a much more tailored – and enjoyable – experience.

“Recruitment could seamlessly link with HR tech, so that once you’ve been hired, your job is personalised to suit your working style,” says McCargow. “So rather than 1,000 foot soldiers doing the same job, it could empower people to consume work on their own terms, leading to better career satisfaction, productivity and performance.”

This article originally appeared in our issue The Future of Work: AI and Automation.
