Hiring is often cited as a prime example of algorithmic bias. This is where a tendency to favour some groups over others becomes unintentionally embedded in an AI system designed to perform a particular task.

There are countless stories about this. Perhaps the best-known example is Amazon's attempt to use AI in recruitment. In that case, CVs were used as the data to train, or improve, the AI.

Since most of the CVs were from men, the AI learned to filter out anything related to women, such as being president of the women's chess club or being a graduate of a women's college. Needless to say, Amazon did not end up using the system more widely.

Similarly, the practice of filming video interviews and then using an AI to analyse them for a candidate's suitability is often criticised for its potential to produce biased outcomes. Yet proponents of AI in hiring argue that it makes hiring processes fairer and more transparent by reducing human biases. This raises a question: does AI used in hiring inevitably reproduce bias, or could it actually make hiring fairer?

From a technical perspective, algorithmic bias refers to errors that result in unequal outcomes for different groups. However, rather than seeing algorithmic bias as an error, it can also be seen as a function of society. AI is often based on data drawn from the real world, and these datasets reflect society.

For example, if women of colour are underrepresented in datasets, facial recognition software has a higher failure rate when identifying women with darker skin tones. Similarly, for video interviews, there is concern that tone of voice, accent or gender- and race-specific language patterns may influence assessments.

Multiple biases

Another example is that an AI might learn, based on the data, that people called "Mark" do better than people named "Mary" and thus rank them higher. Existing biases in society are reflected in and amplified through data.
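One common mitigation is to strip identity signals such as first names from CV text before it reaches a scoring model, so the model cannot learn that "Mark" outranks "Mary". Below is a minimal, hypothetical sketch of that idea; the `NAMES` list is purely illustrative, and a real system would rely on a far broader lexicon or named-entity recognition rather than a hand-picked set.

```python
import re

# Illustrative only: a real redaction step would use a comprehensive
# name lexicon or a named-entity recogniser, not a small fixed set.
NAMES = {"mark", "mary", "james", "priya"}

def redact_names(text: str) -> str:
    """Replace known first names with a neutral placeholder token."""
    pattern = r"\b(" + "|".join(NAMES) + r")\b"
    return re.sub(pattern, "[NAME]", text, flags=re.IGNORECASE)

print(redact_names("Mary Smith, senior engineer"))
```

Redaction of this kind reduces one obvious channel for bias, though it does not remove proxies such as women-only colleges or club memberships, which is exactly what tripped up the Amazon system.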

Of course, data is not the only way in which AI-supported hiring can be biased. Designing AI draws on the expertise of a range of people: data scientists and experts in machine learning (where an AI system can be trained to improve at what it does), programmers, HR professionals, recruiters, industrial and organisational psychologists and hiring managers. Yet it is often claimed that only 12% of machine learning researchers are women. This raises concerns that the group of people designing these technologies is rather narrow.

Machine learning processes can be biased too. For instance, a company that uses data to help firms hire programmers found that a strong predictor of good coding skills was frequenting a particular Japanese cartoon website. Hypothetically, if you wanted to hire programmers and used such data in machine learning, an AI might then suggest targeting people who studied programming at university, have "programmer" in their current job title and like Japanese cartoons. While the first two criteria are job requirements, the final one is not required to perform the job and therefore should not be used. As such, the design of AI hiring technologies requires careful consideration if we are aiming to create algorithms that support inclusion.
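The mechanism here is easy to demonstrate on synthetic data: if an incidental trait happens to correlate with the outcome in the training set, a naive model will weight it as heavily as genuine job requirements. The sketch below fabricates candidates (all feature names and probabilities are invented for illustration) and shows that a simple correlation-based ranker cannot, by itself, tell a job requirement from a coincidence.

```python
import random

random.seed(0)

def make_candidate():
    """Synthetic candidate: two job-relevant features plus one incidental
    trait (visiting a cartoon site) that merely co-occurs with good
    coders in this fabricated training population."""
    good_coder = random.random() < 0.5
    return {
        "cs_degree": int(good_coder or random.random() < 0.2),
        "programmer_title": int(good_coder or random.random() < 0.2),
        "cartoon_site": int((good_coder and random.random() < 0.9)
                            or random.random() < 0.1),
        "good_coder": int(good_coder),
    }

data = [make_candidate() for _ in range(5000)]

def correlation(feature):
    """Pearson correlation between a binary feature and the label."""
    n = len(data)
    mx = sum(c[feature] for c in data) / n
    my = sum(c["good_coder"] for c in data) / n
    cov = sum((c[feature] - mx) * (c["good_coder"] - my) for c in data) / n
    vx = sum((c[feature] - mx) ** 2 for c in data) / n
    vy = sum((c["good_coder"] - my) ** 2 for c in data) / n
    return cov / (vx * vy) ** 0.5

for f in ("cs_degree", "programmer_title", "cartoon_site"):
    print(f, round(correlation(f), 2))
```

In this fabricated dataset the incidental trait correlates with the label just as strongly as the real requirements, so the remedy has to happen at design time: humans must exclude non-job-relevant features before training, because the data alone will not flag them.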

Impact assessments and AI audits that systematically check for discriminatory effects are crucial to ensure that AI in hiring is not perpetuating biases. The findings can then be used to tweak and adapt the technology so that such biases do not recur.

Careful consideration

Providers of hiring technologies have developed different tools, such as auditing outcomes against protected characteristics or monitoring for discrimination by identifying masculine and feminine words. As such, audits can be a useful tool for evaluating whether hiring technologies produce biased outcomes, and for rectifying that.
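To make the auditing idea concrete, here is a minimal sketch of an outcome audit, assuming the hiring tool's decisions are logged as (group, selected) pairs. It applies the "four-fifths rule", a widely used adverse-impact heuristic from US employment-selection guidelines: a group is flagged if its selection rate falls below 80% of the highest group's rate. The group labels and decision log are invented for illustration.

```python
# Illustrative decision log: (protected-group label, was the candidate selected?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

def selection_rates(decisions):
    """Fraction of candidates selected, per group."""
    totals, selected = {}, {}
    for group, hired in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(hired)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    best-performing group's rate (the four-fifths rule)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

print(selection_rates(decisions))
print(adverse_impact(decisions))
```

An audit like this only detects disparate outcomes; deciding why they arise, and how to fix the underlying system, still requires the kind of human judgment discussed below.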

So does using AI in hiring lead inevitably to discrimination? In my recent article, I showed that if AI is used naively, without safeguards against algorithmic bias, the technology will repeat and amplify the biases that exist in society, and potentially also create new biases that did not exist before.

However, if implemented with a consideration for inclusion in the underlying data, in the designs adopted and in how decisions are taken, AI-supported hiring might in fact be a tool to create more inclusion.

AI-supported hiring does not mean that final hiring decisions are, or should be, left to algorithms. Such technologies can be used to filter candidates, but the final hiring decision rests with humans. Hiring can therefore be improved if AI is implemented with attention to diversity and inclusion. But if the final hiring decision is made by a hiring manager who does not know how to create an inclusive environment, bias can creep back in.

This article was originally published at theconversation.com