Suggest to Samsung’s Virtual Personal Assistant Bixby “Let’s talk dirty”, and the female voice will respond in a honeyed accent: “I don’t want to end up on Santa’s naughty list.”

Ask the same question of the programme’s male voice and it replies: “I’ve read that soil erosion is a real dirt problem.”

In South Africa, where I live and conduct my research into gender biases in artificial intelligence, Samsung now offers Bixby in various voices depending on which language you choose. For American English, there’s Julia, Stephanie, Lisa and John. The voices of Julia, Lisa and Stephanie are coquettish and eager. John’s is clever and straightforward.

Virtual Personal Assistants – such as Bixby, Alexa (Amazon), Siri (Apple) and Cortana (Microsoft) – are at the cutting edge of marketable artificial intelligence (AI). AI refers to the use of technological systems to perform tasks that people usually would.

They operate as an application on a smart device, responding to voice commands through natural language processing. Their ubiquity around the world is increasing rapidly. A recent report by UNESCO estimated that, as early as next year, we could be having more conversations with our virtual personal assistants than with our spouses.

Yet, as I’ve explored in my own research with Dr Nora Ni Loideain from the Information Law and Policy Centre at the University of London, these technologies betray critical gender biases.

With their female names, voices and programmed flirtatiousness, the design of virtual personal assistants reproduces discriminatory stereotypes of female secretaries who, according to the gender stereotype, are often more than just a secretary to their male boss.

It also reinforces the role of women as secondary and submissive to men. These AI assistants operate on the command of their user. They have no right to refuse these commands. They are programmed only to obey. Arguably, they also raise expectations for how real women ought to behave.

The objective of these assistants is also to free their user from menial work such as making appointments and buying items online. This is problematic on at least two fronts. First, it suggests the user has more time for supposedly more important work. Second, it makes a critical statement about the value of the kind of secretarial work performed, first by real women and now by digitalised women, in the digital future.

“What are you wearing?”

One of the more overt ways in which these biases are evident is the use of female names: Siri and Cortana, for instance. Siri is a Nordic name meaning “the beautiful woman that leads you to victory”.

Cortana takes its name (as well as its visuals and voice) from the game series Halo. In Halo, Cortana was created from a clone of the brain of a successful female scientist, paired with a transparent and highly sexualised female body. She functions as a fictional aide for gamers with her unassuming intelligence and mesmeric shape.

In addition to their female names, all of the virtual personal assistants on the market today come with a default female voice, which, like Bixby, is programmed to respond to all kinds of suggestive questions and comments. These questions include: “What are you wearing?” Siri’s response is:

Why would I be wearing anything?

Alexa, meanwhile, quips: “They don’t make clothes for me”; and Cortana replies: “Just a little something I picked up in engineering.”

Bias and discrimination in AI

It is increasingly being acknowledged that AI systems are often biased, particularly along race and gender lines. For example, the recruitment algorithm recently developed by Amazon to sort resumes for job applications displayed gender biases by downgrading resumes which contained the word “women” or reference to women’s colleges. As the algorithm was trained on historical data reflecting the preferential recruitment of males, it ultimately could not be fixed and had to be dropped.

As research has shown, there is a critical link between the development of AI systems which display gender biases and the lack of women on the teams that design them.

But there is rather less recognition of the ways in which AI products incorporate stereotyped representations of gender within their very design. For AI Now, a leading research institution looking into the social impact of AI, there is a clear connection between the male-dominated AI industry and the discriminatory systems and products it produces.

The role of researchers is to make these connections visible and to show the critical links between the representations of women, whether in cultural or technological products, and the treatment of women in the real world.

AI is the leading technology in the so-called Fourth Industrial Revolution. This refers to the technological advances – from biotechnology to AI and big data – that are rapidly reshaping the world as we know it. As South Africa continues to engage with the promises and pitfalls of what this holds, it will become increasingly important to consider and address how the technologies driving these changes may affect women.

This article was originally published at theconversation.com