Microsoft Apologises For Offensive Chatbot Tweets

Microsoft’s artificially intelligent bot began tweeting inappropriate messages and had to be deactivated within a day

Microsoft has apologised after an artificially intelligent chatbot it launched on Twitter last week began issuing racist and offensive messages.

The company activated Tay, a bot intended to mimic the speech patterns of a 19-year-old American girl, on Twitter last Wednesday, but shut it down about 16 hours later after users manipulated it into publishing offensive posts, Microsoft said.


‘Coordinated attack’

The company attributed the incident to a vulnerability in the AI that it had not anticipated.

“Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay,” wrote Peter Lee, Microsoft’s vice president for research, in a statement. “Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack.

“As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time.”

The chatbot’s messages reportedly included white power slogans, anti-feminist messages, expressions of admiration for Hitler and other anti-Semitic statements.

Lee noted that the experiment with Tay was inspired directly by a similar bot that has been running on social media in China, including micro-blogging service Weibo, since late 2014.

That bot, called Xiaoice, has provided more than 40 million conversations without incident and has even presented the weather on television. Unlike Tay, Xiaoice doesn’t target a specific age group.

Chinese bot

“The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment?” Lee stated.

Microsoft said it plans to learn from the experiment and to develop an AI “that represents the best, not the worst, of humanity”.

“We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes,” Lee wrote.

Artificial intelligence is expected to feature prominently at Microsoft’s annual developer conference, Build, which takes place this week.
