What’s the one thing Siri, Alexa, Cortana, and Google Assistant have in common?
In short, they’re all female.
At first, this may seem a harmless coincidence, but if we scratch the surface, the domination of the female voice in AI is far more problematic than it may appear.
Today, almost 3 billion people use voice-automated software to assist with daily tasks such as setting alarms, checking the weather, or even texting a friend. And the number of people opting to use voice assistants is only expected to grow.
Yes, we all know this form of AI is beneficial for our ever-evolving digital world, but are these voice assistants causing more harm than we realise?
More and more consumers are starting to question the gender bias in voice automation, recognising that opting for a female voice can reinforce the social ideal that women are obedient, do as they're told and, most importantly, exist to serve the needs of others.
And because of these observations, many AI creators are now facing criticism for opting for female voices.
With all this in mind, we were intrigued as to why so many companies opt for female voices. Throughout this article, we uncover why there is such a strong gender bias in AI and map out the key steps we can take toward changing this bias.
The lack of male voice data is the most common reason programmers opt for a female voice when creating voice-automated AI, and this is a problem that has been building for some time.
To create voice automation, you need to have a rich set of voice recordings within your text-to-speech system, and most text-to-speech systems have been trained using female recordings. Therefore, it’s much easier and cheaper to create female voice assistants.
So, why do we have so many recordings of female voices available to use?
Up until 1878, telephone operating was an exclusively male profession, but operators were often heavily criticised for being rude, abrupt, and unhelpful. In response, Alexander Graham Bell suggested hiring a woman to see if her voice was better received.
Emma Nutt made history and became the first woman to be a telephone operator. Customers responded so well to her voice that she sparked an industry-wide overhaul, and by the end of the 1880s telephone operators were exclusively female.
It's because of this industry-changing decision that we now have well over a century of tried-and-tested female audio recordings to draw on when creating new forms of voice-automated AI.
And this leads us on nicely to our next point…
This is probably the most contentious reason why AI programmers opt for female voices – we prefer the sound of female voices.
Many studies have suggested that our preference for the female voice begins in the womb, where a mother's voice can soothe the developing fetus.
Another reason why some researchers argue this preference exists is that women tend to articulate vowel sounds more clearly, making female voices easier to hear and understand than male voices.
An example of this can be seen as far back as World War II, when female voice recordings were used in aeroplane cockpits because they were easier to distinguish from the male pilots' voices.
For a long time, the idea that female voices offer more clarity than male voices went unchallenged. Today, however, many researchers dispute this claim, finding no reliable evidence that female voices are easier to hear over small speakers or against background noise.
There's even more evidence that many people criticise women for their vocal tics. For example, if you type "women's voices are" into Google search, the top suggestion will finish the sentence with the word "annoying"…
Clearly, AI programmers currently face a difficult challenge when asked to create male voice automation, and Google is a prime example.
Google Assistant was first launched in 2016, and many people wondered why this AI product was not assigned a gendered name. The reason was that Google wanted to launch its new voice assistant with both a male and a female voice.
Unfortunately, Google's text-to-speech system is what ultimately prevented it from launching a virtual assistant with both a male and a female voice.
Google's initial text-to-speech system worked by stitching together segments of audio taken from recordings. A speech recognition algorithm placed markers at different points in the recorded sentences to teach the system where certain sounds began and ended.
As the text-to-speech system was trained using female data, Google Assistant performed better with female voices.
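The idea behind this kind of concatenative synthesis can be sketched in a few lines. This is a deliberately toy illustration, not Google's actual system: each "recording" here is just a list of sample values, and the markers map a phoneme to the sample range the recogniser assigned to it. Notice how everything hinges on those markers being accurate; if they are misplaced for a new speaker's voice, the stitched-together output degrades.

```python
# Toy sketch of concatenative text-to-speech (illustrative names and data).
# Each recording: (samples, markers), where markers map a phoneme to the
# (start, end) sample indices a recogniser assigned to it.
from typing import Dict, List, Tuple

Recording = Tuple[List[int], Dict[str, Tuple[int, int]]]

CORPUS: List[Recording] = [
    ([1, 2, 3, 4, 5, 6], {"h": (0, 2), "e": (2, 4), "l": (4, 6)}),
    ([7, 8, 9, 10],      {"o": (0, 2), "l": (2, 4)}),
]

def find_segment(phoneme: str) -> List[int]:
    """Return the samples for the first marked occurrence of a phoneme."""
    for samples, markers in CORPUS:
        if phoneme in markers:
            start, end = markers[phoneme]
            return samples[start:end]
    raise KeyError(f"no recording contains phoneme {phoneme!r}")

def synthesize(phonemes: List[str]) -> List[int]:
    """Stitch together the audio segments for each phoneme, in order.

    If the (start, end) markers are inaccurate -- as Google found they
    were for male recordings -- the joins land in the wrong places and
    the resulting voice sounds noticeably worse.
    """
    output: List[int] = []
    for p in phonemes:
        output.extend(find_segment(p))
    return output

print(synthesize(["h", "e", "l", "o"]))  # → [1, 2, 3, 4, 5, 6, 7, 8]
```

Because the quality of the output depends entirely on how precisely those segment boundaries were marked in the training data, a system tuned on female recordings does not transfer cleanly to male ones.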
The global engineering manager for text-to-speech at Google, Brant Ward, explained why it was so hard to create a male voice for Google Assistant.
He said that the markers used in their text-to-speech system were not placed accurately for male voice recordings, which made it far more challenging to create a male voice assistant of the same quality as its female counterpart.
As it would have taken over a year to create a male voice for Google Assistant, the team at Google decided to run its Google Assistant with only a female voice.
As you can now see, gender bias in voice automation has become an entrenched paradigm, driven by a lack of male voice data and society's acceptance of, and preference for, the female voice.
When a whole industry is stuck in its ways, even the notion of creating male voice automation can feel like an uphill battle.
It’s time we changed our approach to male voice automation, and there are some simple things we can do as an industry to eliminate the gender bias in voice automation and throughout the AI industry.
1. Inclusivity Is Key
Perhaps it’s obvious to say, but this is an issue that needs to be highlighted. Full stop.
Currently, “women make up an estimated 26% of workers in data and AI roles globally, which drops to only 22% in the UK”. And this percentage drops even further when you look at the number of people in AI who are transgender or non-binary…
It’s disappointing stats like these that show we need to do much more to encourage people of all genders to pursue a career in AI. We want our AI development teams to be more diverse, and this won’t happen if we don’t act.
Once we have a more diverse workforce, we will be able to pinpoint and resolve complex gender issues before and during the production stages of new AI products. Now, in order to attract more diversity, we need to start looking at ways to encourage all genders to follow a career path in AI in higher education.
This can be achieved by creating a strong educational foundation, with multiple learning channels available to all students, no matter their gender identity.
We also need to encourage people of all genders to take an active role in the development of AI course materials. When students see they are being represented in courses they are studying, they are more likely to continue in further education.
2. Develop New Machine Learning Technology
Machine learning technology has come on leaps and bounds in the past few years, and there are now new text-to-speech systems available that create naturalistic male and female voices for AI.
After Google struggled to create a male voice for its virtual assistant, the tech giant joined forces with AI specialists DeepMind to develop a more advanced text-to-speech algorithm that significantly reduced the volume of recordings needed to simulate human voices.
Now known as WaveNet, this algorithm allowed Google to create more naturalistic voices for all genders, which were added to Google Assistant in 2017.
Today, America’s version of Google Assistant comes programmed with 11 different voices, and new users are assigned one of two basic voices – one male and one female – at random.
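Randomising the default is a simple but effective way to stop one gender being baked in as "the assistant". A minimal sketch of the idea, with made-up voice labels (Google actually names its Assistant voices after colours, but the names below are placeholders):

```python
# Minimal sketch of assigning one of two default voices uniformly at
# random, so neither voice is privileged as the default. Voice labels
# are hypothetical, not Google's actual identifiers.
import random

DEFAULT_VOICES = ["voice_red", "voice_orange"]

def assign_default_voice(rng: random.Random) -> str:
    """Pick a default voice uniformly at random for a new user."""
    return rng.choice(DEFAULT_VOICES)

# Over many simulated sign-ups, the split is close to 50/50.
rng = random.Random(0)  # seeded for reproducibility
counts = {v: 0 for v in DEFAULT_VOICES}
for _ in range(10_000):
    counts[assign_default_voice(rng)] += 1
print(counts)
```

The design point is that no voice is the implicit default; a user who wants a different voice still changes it in settings, but the product no longer signals that "assistant" means "female" out of the box.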
3. Industry-Wide AI Standards Need to Be Made
The rise of AI shows no sign of slowing down any time soon. In fact, the global market value of AI is expected to reach $267 billion by 2027!
When you think about how AI is becoming an integral part of our society in one way or another, it is shocking to think that there are still no standards in place concerning the humanization of AI.
To this day, most tech companies develop voice-automated systems with a female voice, which continues to reinforce the stereotype that women are "assistants". To combat this, we need AI standards in place to ensure our products are far more inclusive.
To create these industry-wide standards we must include people of different genders, sexual orientations, races, and ethnicities in the decision-making stages.
With a more diverse group of individuals, we can work together to define what “female,” “male,” “gender-neutral” and “non-binary” human voices sound like and when it’s appropriate to use such voices.
These industry standards should also include a basic set of protocols. Companies would then need to adhere to these rules when creating text-to-speech algorithms to ensure AI products are unbiased and sensitive to potentially harmful gender stereotypes.
It looks like voice assistants will be a part of our lives for the foreseeable future, and because of this, we need to address the gender bias surrounding this type of AI technology now.
Just by opening a discussion about gender representation in voice automation, we can actively begin to create a future of AI that’s more inclusive for all.