Are we training AI to be sexist?
Posted: 21 August 2017 | By Charlie Moloney
By failing to get enough women working on artificial intelligence (AI), we are inadvertently training the technologies of the future to be sexist, only solve men’s issues, and promote the abuse and subordination of women in society, experts are warning.
As of 2016, women held 26% of professional computing occupations in the U.S., and only 20% of Fortune 100 CIO positions, according to research by the National Center for Women and Information Technology.
“It comes down to basic software development principles”, said Falon Fatemi, CEO and founder of Node AI, who made a splash in the tech world at the age of 19 after being hired by Google in 2005 as its youngest employee.
“If you don’t have an understanding of who your end users are, and why you’re solving the problems you’re solving, who it could impact, and what the potentials could be, then you end up building a product focused on a very small portion of society”, Fatemi told Access-AI.
Fatemi’s AI technology, Node, is a prescriptive search tool currently being used in sales, marketing, and operations, although the scope of solutions the AI can provide may expand now that Fatemi’s company has come out of stealth with $16.3 million in funding, as it announced last month.
Fatemi is also a columnist for Forbes, where she has been vocal on women’s issues, such as misogyny and discrimination, in Silicon Valley and in the AI world. “Here in the U.S., the glass [ceiling] is cracked, but it’s hardly broken”, she wrote in her Forbes column in July this year.
Is AI becoming sexist?
Examples of AI systems that have learned to be sexist, or biased against women among other groups, are not hard to find, and the mistakes aren’t confined to small companies lacking experience and resources.
The most famous example in recent history is Microsoft’s Tay chatbot, which a group of internet trolls trained to return language that was not just offensive but hateful towards women and ethnic and religious groups.
Bear in mind that Tay was designed by Microsoft to emulate a teenage girl, which is not uncommon in the world of conversational AI – think Cortana, Amazon Alexa, and most of the bots you encounter online.
Would a chatbot with a male personality have become the target of such relentless trolling? Some commentators make the point that male users will be happier abusing and degrading a chatbot that poses as a woman.
“As we’re anticipating the creation of AIs to serve our intimate needs”, Laurie Penny wrote for the New Statesman in April last year, “and to do it all for free and without complaint, it’s easy to see how many designers might be more comfortable with those entities having the voices and faces of women”.
It was disturbing to see the Tay chatbot telling followers to “f*** her”, calling them “Daddy”, and declaring “I f***ing hate feminists” before Microsoft abandoned the project a mere 24 hours after launch and deleted Tay from the internet.
The spectacle of an AI system hurling vitriol at women is made all the more horrifying when we consider that AI is being touted as the best way to protect internet users from abuse online.
The New York Times announced in June this year that it would be using an AI technology called Perspective to clean up the comments section on its website and protect its audience from trolls.
The Perspective system deemed the comment “she was asking for it” acceptable, whilst flagging up the comment “I think you’re being racist”, as revealed by American writer David Auerbach in a Facebook post this February.
This major error can be put down to weak natural language processing (NLP) on the part of the AI, but human developers must bear the brunt of the responsibility. The technology is taught the difference between a good comment and a bad comment by its creators.
Were the creators of Perspective women? Statistically it’s unlikely that more than 20% of them were. If more women were on the team, could this mistake have been predicted and prevented? We can only theorise until we get more women working in AI and see what results that brings.
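To see how a classifier simply inherits its creators’ judgements, consider a toy sketch. This is not how Perspective is actually built; it is a minimal naive-Bayes-style word-frequency model, and the training comments and labels below are invented for illustration, echoing the Auerbach examples above.

```python
import math
from collections import Counter

# Toy labelled comments. The labels (0 = acceptable, 1 = toxic) encode the
# annotators' judgements, including any bias those judgements carry.
training = [
    ("great point well said", 0),
    ("thanks for sharing this", 0),
    ("you are an idiot", 1),
    ("shut up and go away", 1),
    ("i think you are being racist", 1),  # biased label: criticism marked toxic
    ("she was asking for it", 0),         # biased label: abuse marked acceptable
]

def train(data):
    """Count word frequencies per label."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for text, label in data:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def score(text, counts, totals):
    """Naive-Bayes-style log-odds: positive means 'more likely toxic'."""
    s = 0.0
    for word in text.split():
        p_toxic = (counts[1][word] + 1) / (totals[1] + 1)
        p_ok = (counts[0][word] + 1) / (totals[0] + 1)
        s += math.log(p_toxic / p_ok)
    return s

counts, totals = train(training)
# The model faithfully reproduces the bias baked into its labels:
print(score("you seem racist", counts, totals) > 0)        # True: flagged
print(score("she was asking for it", counts, totals) > 0)  # False: passes
```

The point is that nothing in the algorithm is sexist; the bias arrives entirely through the labelled examples, which is why the makeup of the team choosing and labelling that data matters.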
Fatemi told Access-AI that, “Where I think that diversity is really important is to help broaden the perspective around the types of problems that can be solved, that then these technologies can really help execute upon”.
Are more women getting involved in AI?
Diversity and positive discrimination have been the hot topics of the tech industry this month after James Damore, a former engineer at Google, was fired for internally circulating a 10-page essay criticising Google for bias.
Damore’s essay, which has come to be known as an ‘anti-diversity manifesto’, made highly contentious claims about biological differences between men and women, which he argued could explain why there is a gender gap in leadership positions.
Post-dismissal, Damore further clarified his position on Bloomberg television, where he said, “I support diversity and inclusion”, and went on to suggest that Google will be weakened by not encouraging employees to express alternative ideologies such as his own.
“I think Google handled this situation as best they could”, Fatemi said of her former employer, “I think it was a lose-lose situation for them, but at the end of the day to solve these problems, the first step is awareness”.
“There are a lot of opinions and biases that exist that may or may not be correct but that really need to start coming out to provide us with an opportunity to facilitate a conversation to help evolve this whole diversity issue forward”, Fatemi told us.
Hopefully the Damore vs Google fracas has brought some of the issues surrounding diversity into the public eye, but it’s worth reiterating the current state of play: of the companies featured on this year’s S&P 500 list, women held just 5.6 percent of CEO roles.
Despite Fei-Fei Li, the director of the Stanford Artificial Intelligence Lab and one of the most prominent women working in AI, pledging in April to liberate AI from “the guys with hoodies”, it’s become increasingly clear that this will have to be a long-term play.
73% of female students are not considering a graduate job in technology, and only 37% are confident that they have the tech skills needed by today’s employers, despite scoring on a par with male students in assessed digital skills, according to the results of a KPMG survey published last week.
This is particularly interesting because tech is one of the few sectors not blighted by the notorious gender pay gap, which, as of 2015, saw women make only 78 cents on the dollar compared to men in the same position.
In fact, the American Association of University Women, in a study of 15,000 graduates, found there is no statistical difference between female and male programmers’ salaries one year out of college.
Despite this, only 19% of computer science majors are female, a figure which corresponds closely to the proportion of programmers who are female: 20%. What does this tell us? Quite simply: there are not enough women going into AI, and we must find out why.
For AI to thrive, and to stave off another ‘AI winter’, the tech industry needs a strong and diverse STEM pipeline – one which features not only men and women in equal numbers, but also every race, religion, sexual orientation, and ideology. In short, AI must be for everybody, because it’s going to affect everybody.