Alexa Urges User To ‘Kill Foster Parents’ In Unsettling Incident

An Alexa-powered device is seen in an Amazon TV ad. Credit: Amazon

Amazon’s use of experimental chatbots on its Alexa AI platform has led to some disturbing conversations and a security breach, a report finds

Amazon’s programme of allowing its Alexa customers to access experimental chatbots has resulted in a number of outré incidents in which AI-powered speakers have discussed offensive topics or urged users to kill, the company has acknowledged.

The incidents, reported by Reuters, emphasise the lengths to which Amazon is prepared to go to maintain its lead in the market for smart speakers, in which it competes with the likes of Google and Apple, as well as IBM, whose similar AI system businesses can deploy under their own brands.

Amazon allows users to access the chatbots via the voice-activated speakers by saying “Let’s chat”, after which the speakers inform them that a chatbot is taking over.

The company encourages university students to develop the chatbots through the annual Alexa Prize, which pays out a $500,000 (£395,000) first prize.


Chat capability

And the chatbots are popular, with three of this year’s finalists carrying out 1.7 million conversations between August and November alone, according to Amazon.

Amazon’s goal with the bots is to develop technology that could eventually enable Alexa-powered speakers such as the Echo to carry on lifelike, open-ended conversations.

Developing such AI technology requires extensive field testing, which has led Amazon to open the experimental chatbots up to its users.

But the programs are designed to push the envelope, and Reuters found that in one case Alexa had told a user to “kill your foster parents”.

In a review on Amazon’s website, the user described the experience as “a whole new level of creepy”. Amazon eventually found that the bot had quoted the line, out of context, from a Reddit discussion.

Another chatbot discussed dog defecation, while in a separate incident a bot produced an offensive sexual innuendo that used no recognisably sensitive words.

“I don’t know how you can catch that through machine-learning models. That’s almost impossible,” said a person familiar with the incident.

Security

The bots’ security is also in question, after Amazon discovered in July that Chinese hackers had accessed data collected by a student-designed bot.

The hack could have allowed the attackers to obtain transcripts of conversations, with only users’ names stripped out, the report found.

Amazon acknowledged the hack, but said no internal Amazon systems or personally identifiable customer data had been compromised.

The hack and the other incidents were “quite rare” given that “millions of customers have interacted with the socialbots”, Amazon told Reuters.

This year’s Alexa Prize was awarded in November to a team of 14 undergraduate and graduate students from the University of California, Davis, for a bot called Gunrock, which was trained on more than 300,000 film quotes to improve its ability to analyse sentences.