How AI is filtering millions of qualified candidates out of the workforce

By Kal Berjikian

Some applicants use tricks such as 'white fonting', copying and pasting a job advert into their resume and hiding it from human eyes, to try to get past AI bots. But why are people doing it, and does it work?


Scroll through social media and it won’t take long to find influencers spouting tips and tricks on how their viewers can land their dream jobs. They just have to get their applications past the AI screening first.

Their advice is the byproduct of a real-life concern - that qualified candidates could be filtered out of the hiring process before their applications are seen by human eyes.

The use of technology like ATS, or applicant tracking systems, is prevalent. According to the study ‘Hidden Workers: Untapped Talent’ by Harvard Business School, 99% of Fortune 500 companies use an ATS when looking for new hires. And 63% of employers surveyed across Germany, the United States and the United Kingdom do the same.

According to Manjari Raman - one of the researchers behind that study and Senior Program Director of the Managing the Future of Work Project at Harvard Business School - companies turn to these automated systems because they are sometimes flooded with applications.

"But when that automated system has the responsibility of taking thousands of candidates and filtering it down to the top five choices ... Well, then what happens is now the technology is hiding workers who could work in that position, high skills or middle skills, and that's a problem," she explains.

The qualified workers left behind

There is also a well-documented dark side to this technology - sometimes being qualified is not enough to land a job interview.

In 2018, Amazon realised that the hiring software it had been developing for four years was scoring qualified female candidates below their male counterparts.

The reason for this was simple. The AI was trained on the company’s previous hiring track record, and since men dominate the tech industry, it decided that male candidates were preferable to female ones.

That same year, auditors of another screening tool found that the software ranked people with the name Jared and a history of playing lacrosse in high school more favourably than other applicants.
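
How such a skew can creep in is easy to sketch. The short Python example below is a hypothetical toy, not Amazon's actual system or any vendor's product: it "learns" word weights from an invented, male-dominated hiring history and then scores an otherwise identical CV lower simply because it mentions a women's club.

```python
# Hypothetical toy scorer: "learns" word weights from an invented, biased
# hiring history and reproduces that bias when ranking new CVs.
from collections import Counter

# Made-up historical data: past outcomes plus a few words from each CV.
history = [
    ("hired",    "java lacrosse captain jared"),
    ("hired",    "python rugby club engineer"),
    ("hired",    "java chess club engineer"),
    ("rejected", "python women's chess club captain"),
    ("rejected", "java women's coding society engineer"),
]

hired_words, rejected_words = Counter(), Counter()
for outcome, text in history:
    (hired_words if outcome == "hired" else rejected_words).update(text.split())

def score(cv_text: str) -> int:
    """Crude 'learned' score: +1 per past-hire occurrence of a word, -1 per past-rejection occurrence."""
    return sum(hired_words[w] - rejected_words[w] for w in cv_text.split())

# Two candidates identical except for one gendered phrase.
print(score("python engineer chess club captain"))          # 2
print(score("python engineer women's chess club captain"))  # 0 - penalised for "women's"
```

The data here is invented, but the pattern is the point: whatever correlates with past hires, however irrelevant, gets rewarded.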


According to Kerry McInerney, a Research Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, AI can even perpetuate discrimination when its developers design it to do the opposite. 

"One of the claims that companies make about AI-powered hiring tools is that, unlike a human recruiter, an AI-powered tool doesn't see gender and doesn't see race or other characteristics about us," McInerney told Euronews. 

"But I'm really sceptical of this idea that technologies are inherently more objective than human recruiters because ultimately they're trained on the same biased data produced by human recruiters." 

She added that because of this, many companies are "putting their resources into purchasing tools that don't work rather than investing in tried and true diversity and inclusion strategies that we know do work." 

How smart is AI?

According to the same Harvard study, there are also millions of people across Europe and the US classified as 'hidden workers': qualified people who are filtered out of the application process because of things like large gaps in their resumes.

ATS can also reject applicants because of lengthy and wordy job postings.

"ATS systems, like almost all forms of artificial intelligence, don’t think. They don’t reason. They're not smart in the way humans think of intelligence," Joseph Fuller, a Professor of Management Practice at Harvard Business School, told Euronews.

"Quite a lot of the problems with artificial intelligence in hiring actually fall at the feet of the employer, not the technology.

"Job descriptions are ingested the way they are written, and the technology takes the language in that job description and more or less treats it as scripture."

The study by Harvard Business School revealed that many companies know that qualified candidates are being screened out.

For example, we at Euronews recently tested how hireable one of our journalists actually was by applying for a job comparable to the position they were already doing.

We ran their resume through Jobscan, a website that claims to help people get past ATS screeners, and asked it to rank them as a possible candidate for an actual job posting.

But they were ranked as a weak candidate because the job asked for international experience, and the ATS screener decided the journalist didn’t meet this requirement, despite them having previously worked in five different countries.

"In this instance, I think the AI was confused. It doesn't view living in a country as the same as travelling," Fuller said. "So if the candidate had said, 'I have travelled extensively while being based in five different countries', my guess is it would have come to a different conclusion."

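This kind of literal matching is easy to reproduce. The Python sketch below is purely illustrative, with invented job requirements and CVs (real ATS products parse applications in far more sophisticated ways): a screener that only counts verbatim phrase matches ranks a CV that parrots the advert above one describing the same experience in different words.

```python
# Purely illustrative sketch of literal keyword matching; the requirements
# and CVs are invented, and real ATS products do far more than this.
job_requirements = ["international experience", "newsroom experience", "video editing"]

cv_a = "Senior journalist with video editing skills, based in five different countries over ten years."
cv_b = "Junior reporter with international experience, newsroom experience and video editing."

def keyword_match_rate(cv: str, requirements: list[str]) -> float:
    """Share of required phrases that appear verbatim (case-insensitive) in the CV."""
    cv = cv.lower()
    return sum(phrase in cv for phrase in requirements) / len(requirements)

print(round(keyword_match_rate(cv_a, job_requirements), 2))  # 0.33 - never uses the advert's exact phrases
print(round(keyword_match_rate(cv_b, job_requirements), 2))  # 1.0  - parrots the advert, so it ranks higher
```
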
Does ‘white fonting’ actually work?

Some people are trying to bypass this hurdle by ‘white fonting’, or copying and pasting a job post into their resume in small font and hiding it from the human eye by changing the colour to white.


The idea is that while the recruiter won’t be able to see it, the AI screening software will, and the CV will be spat out as a possible contender for the role.
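
One rough way to picture why the trick can fool naive software is to think of a CV as runs of text, each with a colour. The Python sketch below is a hypothetical simplification, not how real ATS software or document formats actually work:

```python
# Hypothetical simplification: a CV modelled as (text, colour) runs, showing
# why font colour is invisible to plain-text extraction.
cv_runs = [
    ("Journalist with five years of newsroom experience.", "black"),
    # The pasted job advert, hidden by setting the font colour to white:
    ("international experience video editing stakeholder management", "white"),
]

def human_view(runs, page_colour="white"):
    """A reader only sees text that contrasts with the page."""
    return " ".join(text for text, colour in runs if colour != page_colour)

def text_extraction_view(runs):
    """Plain-text extraction ignores colour, so the hidden keywords still count."""
    return " ".join(text for text, _colour in runs)

print("international experience" in human_view(cv_runs))            # False - invisible to the recruiter
print("international experience" in text_extraction_view(cv_runs))  # True  - visible to a keyword scanner
```

By the same logic, a recruiter who forces the whole document into a visible colour sees exactly what the scanner sees, which is why the trick is easily exposed.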

But despite videos touting 'white fonting's' success rate, the trick is more "myth" than fact.

"This is now a bit of an urban legend," Fuller said. "More recruiters in large companies are scanning a whole application and then changing all the text [to a different colour] so 'white fonting' will be exposed. 

"Also, if your actual job history doesn't fit very well with the requirements of the job, and you're bluffing your way into the interview process, you're likely, in fact, not to succeed." 

Instead of ‘white fonting’, Fuller suggests that hopeful applicants look at the LinkedIn profiles of people already doing their desired job at the relevant company and replicate how they describe their skills and position.

The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT.

According to Gracy Sarkissian, the Executive Director at the Wasserman Center for Career Development at NYU, "candidates may also take advantage of new tools like ChatGPT to support their job search. 

"ChatGPT can enable candidates to identify potential job titles and opportunities, analyse job postings to help them determine what skills to highlight, predict interview questions, translate application materials to different languages, and provide nuanced salary information." 

And for the roughly 27 million 'hidden workers' in the US and the further five million in the UK and Germany, Fuller recommends trying to get past the AI bots by closing the gaps in their resumes.

For example, he suggested they could return to the workforce by finding gig work or part-time employment, or by taking a course while they look for a new job.

Trying to regulate AI

Beyond that, however, Raman says there is very little a person in the position of a 'hidden worker' can do if they continue to be screened out of the hiring process.

"Employees have very little ability or power or agency; they are the ones who are suffering because of this problem," she said.

"But in the case of employers, it's a self-inflicted wound and employers are the only ones who can make any changes." 

Some regions and countries are trying to address this power imbalance by moving to regulate this ever-evolving technology.

European Union officials are working on groundbreaking rules to regulate AI that could become a de facto global standard because of the size of the 27-nation bloc and its market.

In the United States, New York City is working on a law that would require companies to inform candidates that their applications are being screened by AI, while Illinois has already enacted a law requiring companies to notify applicants of AI screening and obtain their consent.


China is also drafting regulations requiring security assessments for any products using AI, while the UK's competition watchdog has opened a review of the market.
