It’s not just race either: researchers from Carnegie Mellon University have found that women are significantly less likely than men to be shown online ads for high-paying jobs.
In one machine learning experiment designed to help AI make sense of language, words like “female” and “woman” were closely associated by the AI with the arts and humanities and with the home, while “man” and “male” were associated with science and engineering.
In that experiment, the machine learning tool was trained on what’s called a “Common Crawl” corpus: a body of 840 billion words drawn from material published on the web.
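To give a feel for how researchers surface these associations, here’s a minimal sketch in Python. The word vectors below are hand-made toy values, purely hypothetical; real studies use embeddings learned from corpora like Common Crawl. The idea is the same: each gendered word is scored by its average cosine similarity to a set of target words.

```python
import numpy as np

# Toy, hand-made vectors standing in for real embeddings.
# (Hypothetical values; real experiments use vectors learned from text.)
vectors = {
    "woman":       np.array([0.9, 0.1, 0.3]),
    "man":         np.array([0.1, 0.9, 0.3]),
    "art":         np.array([0.8, 0.2, 0.1]),
    "home":        np.array([0.9, 0.2, 0.2]),
    "science":     np.array([0.2, 0.8, 0.1]),
    "engineering": np.array([0.1, 0.9, 0.2]),
}

def cosine(a, b):
    """Cosine similarity: closer to 1.0 means the words point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, targets):
    """Average similarity between one word and a set of target words."""
    return np.mean([cosine(vectors[word], vectors[t]) for t in targets])

for w in ("woman", "man"):
    arts = association(w, ["art", "home"])
    stem = association(w, ["science", "engineering"])
    print(f"{w}: arts/home = {arts:.2f}, science/engineering = {stem:.2f}")
```

With these toy numbers, “woman” scores higher against the arts-and-home words and “man” against the science-and-engineering words; in the real experiment, that skew emerged from the training text itself, not from anything the researchers built in.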
Training AI on historical data can freeze our society in its current state, or even turn back the clock.
If women aren’t shown advertisements for high-paying jobs, it will be harder for them to actually apply for those jobs. There’ll be fewer women in high-paying jobs.
Robots that struggle to read emotions on non-white faces will only reinforce the experiences of otherness, of invisibility, that can already be felt by racial minorities in western societies.
The extent to which a person or an organisation can be held responsible for a machine that is racist or sexist is a question that comes up a lot in AI debates.
On the one hand, there’s a fairly straightforward answer: people designing AI need to be accountable for how it could hurt people. The hard part with AI can sometimes be figuring out when harm could reasonably have been prevented.
The creeping, quiet bias in data and AI can be hard to pin down. I have no idea if I’m not being shown ads for high paying jobs because I’m a woman. I don’t know what I’m not being shown.
As AI becomes more sophisticated, and depending on the technique being used, it can be hard for the people who have designed an AI to figure out why it makes certain decisions. It evolves and learns on its own.
Listen to the full broadcast on ABC Radio National’s Big Ideas. See Ellen Broad speak as part of the Good Robot / Bad Robot panel in the Drama Theatre on Sunday 12 August. Get tickets here.