Ariel and 'Fiona' from HBO's Silicon Valley. Image: Ali Page Goldstein

Racist robots

If we want AI to learn from our mistakes, we need to teach them, says Ellen Broad

Below is an excerpt from a talk to the Cranlana Programme by Ellen Broad, data expert and former Head of Policy for the Open Data Institute. Hear the full interview on ABC RN's Big Ideas.


There are lots of stories about AI getting into trouble.

The social media chatbot that quickly becomes horrifically sexist and racist. The Google Photos update that saw black people mislabelled as gorillas. The camera that flags photos of Asian faces as people blinking.

These kinds of glaring problems are typically picked up quickly. But sometimes the problems of training bias and prejudice out of AI can be more insidious, and more troubling.

Joy Buolamwini, a computer science researcher at the MIT Media Lab in the US, has spoken about the trouble she’s had as a researcher getting robots to interact with her: to recognise her face, to play peek-a-boo.

But when Joy, who is black, puts a white mask over her face, the robots can see her. The problem here is poor data being used to teach a robot what faces look like.

Ellen Broad at the Open Data Camp 2016. Image: W.N. Bishop

Facial recognition software learns faces from big datasets of images of faces. If the images in what is called your ‘training data’ aren’t diverse, then the software doesn’t learn to recognise diverse faces.
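The mechanics of that failure can be shown in miniature. In this sketch, a "face" is reduced to a single made-up number standing in for skin tone, and the detector is a crude average-and-threshold rule; both are assumptions for illustration only, not how real facial recognition works, but the effect of unbalanced training data is the same in kind:

```python
import random

random.seed(0)

# A toy "face" is one number standing in for skin tone (an illustrative
# assumption; real systems learn from thousands of image features).
def sample_face(group):
    centre = 0.2 if group == "A" else 0.8
    return random.gauss(centre, 0.05)

# Unbalanced training data: 95 faces from group A, only 5 from group B.
training = [sample_face("A") for _ in range(95)] + \
           [sample_face("B") for _ in range(5)]

# A crude detector: learn the average training face, then accept
# anything within a fixed distance of that average.
mean_face = sum(training) / len(training)
THRESHOLD = 0.3

def is_face(x):
    return abs(x - mean_face) <= THRESHOLD

# Detection rates on fresh test faces from each group.
rates = {}
for group in ("A", "B"):
    tests = [sample_face(group) for _ in range(1000)]
    rates[group] = sum(is_face(x) for x in tests) / len(tests)
    print(group, round(rates[group], 2))
```

Because the "average face" is dominated by group A, nearly every group A face is detected and nearly every group B face is rejected; the detector has simply never learned what group B faces look like.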

A bit like humans, really. AI is shaped by its environment just as we are. It’s impressionable. And so we need to take care not to encode biases within machines that we’re still wrestling with as humans.

In 2016, the first international beauty contest judged by AI, which promoted itself as analysing ‘objective’ features like facial symmetry and wrinkles, selected almost exclusively white winners.

In the US, sentencing algorithms are being developed to predict the likelihood that people convicted of crimes will reoffend, and to adjust their sentences accordingly. One of these algorithms was found to falsely flag black defendants as future criminals at twice the rate of non-black defendants.
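What "falsely flagged at twice the rate" measures is a per-group false-flag rate: the share of people who did not go on to reoffend but were labelled high risk anyway. A minimal sketch with invented counts (not the real study's data) shows the calculation:

```python
# Invented counts for illustration only (not the real study's data).
# Key: (group, flagged as high risk, actually reoffended) -> defendants
counts = {
    ("black",     True,  False): 45,  # flagged, but did not reoffend
    ("black",     False, False): 55,
    ("non-black", True,  False): 23,
    ("non-black", False, False): 77,
}

def false_flag_rate(group):
    """Share of non-reoffenders in `group` who were flagged anyway."""
    flagged = counts.get((group, True, False), 0)
    total = flagged + counts.get((group, False, False), 0)
    return flagged / total

print(false_flag_rate("black"))      # 0.45
print(false_flag_rate("non-black"))  # 0.23
```

With these made-up numbers, a black defendant who will never reoffend is roughly twice as likely to be labelled a future criminal as a non-black one, which is the shape of disparity the reporting described.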

Peter Mares and Ellen Broad at her Cranlana Programme event. Image: Cranlana Programme / Twitter

It’s not just race either: researchers from Carnegie Mellon University have discovered that women are significantly less likely than men to be shown online ads for high-paying jobs.

In one machine learning experiment helping AI make sense of language, words like “female” and “woman” were closely associated by the AI with arts, humanities and the home, while “man” and “male” were associated with science and engineering.

In that experiment, the machine learning tool was trained on what’s called a “common crawl” corpus: a collection of 840 billion words of material published on the web.
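Associations like these are typically measured by comparing word vectors. The sketch below uses tiny made-up two-dimensional vectors (real embeddings trained on a corpus like Common Crawl have hundreds of dimensions) to show the kind of similarity test involved:

```python
import math

# Made-up 2-D word vectors for illustration; real embeddings are
# learned from billions of words of text, not hand-written.
vectors = {
    "woman":       (0.9, 0.1),
    "man":         (0.1, 0.9),
    "home":        (0.8, 0.2),
    "engineering": (0.2, 0.8),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# A miniature association test: which target word sits closer
# to each gendered word?
for word in ("woman", "man"):
    for target in ("home", "engineering"):
        print(word, target, round(cosine(vectors[word], vectors[target]), 2))
```

In these toy vectors, "woman" sits closer to "home" and "man" closer to "engineering"; the published experiment found the same geometry emerging, unprompted, from text scraped off the web.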

Training AI on historical data can freeze our society in its current setting, or even turn it back.

If women aren’t shown advertisements for high-paying jobs, it will be harder for women to actually apply for those jobs. There’ll be fewer women in high-paying jobs.

Robots that struggle to read emotions on non-white faces will only reinforce the experiences of otherness, of invisibility, that can already be felt by racial minorities in western societies.

The extent to which a person or an organisation can be held responsible for a machine that is racist or sexist is a question coming up a lot in AI debates.

On the one hand, there’s a fairly straightforward answer: people designing AI need to be accountable for how AI could hurt people. The hard part with AI can sometimes be figuring out when harm could reasonably have been prevented.

The creeping, quiet bias in data and AI can be hard to pin down. I have no idea if I’m not being shown ads for high-paying jobs because I’m a woman. I don’t know what I’m not being shown.

As AI becomes more sophisticated, and depending on the technique being used, it can be hard for the people who have designed an AI to figure out why it makes certain decisions. It evolves and learns on its own.

Listen to the full broadcast on ABC Radio National’s Big Ideas. See Ellen Broad speak as part of the Good Robot / Bad Robot panel in the Drama Theatre on Sunday 12 August. Get tickets here.
