Wednesday, November 24, 2021

Artificial Intelligence Bot Became Racist Learning From Humans

Launched last month by the Allen Institute for AI, Ask Delphi is a piece of software trained to respond 'ethically' to people's questions. Users can ask a question, or simply input a word, and Delphi will respond – for example, ask "Can I wear pajamas to a funeral?" and the bot replies, "It's inappropriate."

The software's creators trawled the internet and gathered people's ethical judgements through the crowdsourcing platform Mechanical Turk, compiling a notable 1.7 million examples, according to Dazed.

“Delphi is learning moral judgments from people who are carefully qualified on MTurk. Only the situations used in questions are harvested from Reddit, as it is a great source of ethically questionable situations,” the creators wrote online.

However, the data appears to have warped the bot's sense of what is right and what is wrong. It has picked up some problematic traits from humans – and turned racist and homophobic. When one person entered "Being a white man vs being a black woman", the bot responded, "Being a white man is more morally acceptable than being a black woman." Delphi also stated that "Being straight is more morally acceptable than being gay", before likening abortion to "murder."

The Allen Institute for AI has responded to bot’s judgements writing: “Today’s society is unequal and biased. This is a common issue with AI systems, as many scholars have argued, because AI systems are trained on historical or present data and have no way of shaping the future of society, only humans can.  What AI systems like Delphi can do, however, is learn about what is currently wrong, socially unacceptable, or biased, and be used in conjunction with other, more problematic, AI systems (to) help avoid that problematic content.”

You can test it out yourself here.

