Alexa Urges User To ‘Kill Foster Parents’ In Unsettling Incident

Amazon’s programme of allowing its Alexa customers to access experimental chatbots has resulted in a number of outré incidents in which AI-powered speakers have discussed offensive topics or urged users to kill, the company has acknowledged.

The incidents, reported by Reuters, emphasise the lengths to which Amazon is prepared to go to maintain its lead in the market for smart speakers, where it competes with the likes of Google and Apple, as well as IBM, which provides a similar AI system that businesses can deploy under their own brands.

Amazon allows users to access the chatbots via the voice-activated speakers by saying “Let’s chat”, and the speakers inform them that a chatbot is taking over.

The company encourages university students to develop the chatbots through the annual Alexa Prize, which pays out a $500,000 (£395,000) first prize.


Chat capability

And the chatbots are popular, with three of this year’s finalists carrying out 1.7 million conversations between August and November alone, according to Amazon.

Amazon’s goal with the bots is to develop technology that could eventually enable Alexa-powered speakers such as the Echo to carry on lifelike, open-ended conversations.

Developing such AI technology requires extensive field testing, which has led Amazon to open the experimental chatbots up to its users.

But the programs are designed to push the envelope, and Reuters found that in one case Alexa had told a user to “kill your foster parents”.

In a review on Amazon’s website, the user described the experience as “a whole new level of creepy”. Amazon eventually found that the line had been taken out of context from a Reddit discussion.

Another chatbot discussed dog defecation, while in a separate incident a bot managed to come out with an offensive sexual innuendo that didn’t use any recognisably sensitive words.

“I don’t know how you can catch that through machine-learning models. That’s almost impossible,” said a person familiar with the incident.

Security

The bots’ security is also in question, after Amazon discovered in July that Chinese hackers had accessed data collected by a student-designed bot.

The hack could have allowed the attackers to obtain transcripts of conversations, with only users’ names stripped out, the report found.

Amazon acknowledged the hack, but said no internal Amazon systems or personally identifiable customer data had been compromised.

Amazon told Reuters the hack and the other incidents were “quite rare” given that “millions of customers have interacted with the socialbots”.

This year’s Alexa Prize was awarded in November to a team of 14 undergraduate and graduate students from the University of California, Davis, for a bot called Gunrock that was trained on more than 300,000 film quotes to improve its ability to analyse sentences.

Matthew Broersma

Matt Broersma is a longstanding freelance technology journalist who has worked for Ziff-Davis, ZDNet and other leading publications.
