Facebook AI Experiment Shutdown Holds Lessons For IT Industry


Facebook’s reasons for shutting down an AI-based chatbot were more mundane than it first appeared

If you saw the first reports about Facebook’s artificial intelligence chatbots, you might believe that the robot revolution was about to overthrow human civilization.

The reports said that the bots were talking among themselves using a language that humans could not understand. The word was that Facebook’s bots had slipped their leashes and were taking over.

Well, not exactly. While it is true that some chatbots created for AI experiments on automated negotiation had developed their own language, this wasn’t a surprise. In fact, it wasn’t even the first time that such a thing had happened. The fact that it might happen was explained in a blog entry on the Facebook Code pages.

The blog discussed how researchers were teaching an AI program to negotiate by having two AI agents, one named Bob and the other Alice, negotiate with each other to divide a set of objects consisting of hats, books and balls. Each AI agent assigned a value to each item, with those values hidden from the other ‘bot. Then the chatbots were allowed to talk to each other to divide up the objects.
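
To make that setup concrete, here is a toy Python sketch of the kind of game the ‘bots played. The item pool, value ranges and scoring are invented for illustration; they are not Facebook’s actual parameters.

```python
# A minimal sketch of the negotiation game described above. The item
# counts and point values are illustrative, not Facebook's real data.
import random

ITEMS = ["hats", "books", "balls"]

def make_game():
    # A random pool of items to split between the two agents.
    pool = {item: random.randint(1, 4) for item in ITEMS}
    # Each agent gets its own private valuation of every item type;
    # neither 'bot can see the other's numbers.
    values = {
        "Bob":   {item: random.randint(0, 10) for item in ITEMS},
        "Alice": {item: random.randint(0, 10) for item in ITEMS},
    }
    return pool, values

def score(agent, allocation, values):
    # An agent's score is the total private value of the items it keeps.
    return sum(values[agent][item] * count
               for item, count in allocation.items())

pool, values = make_game()
# e.g. if Bob ends up keeping one hat and two balls after the talks:
print(score("Bob", {"hats": 1, "books": 0, "balls": 2}, values))
```

The key detail is that each agent’s valuations are private, so neither ‘bot can simply compute the other’s best deal; they have to talk their way to one.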


Chatty bots

The goal of the negotiation was for each chatbot to accumulate the most points. While the ‘bots started out talking to each other in English, that quickly changed to a series of words that carried meaning for the bots, but not for the humans doing the research. Here’s a typical exchange between the ‘bots, using English words with their own meanings:

Bob: “I can i i everything else.”

Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”

The conversation continues with variations on the number of times Bob says “i” and the number of times Alice says “to me”.

The AI language emerged during the part of Facebook’s research where the AI agents practiced their negotiation skills by chatting with each other. The researchers initially worked to keep the agents imitating humans, specifically to avoid problems such as language creation.

“During reinforcement learning, the agent attempts to improve its parameters from conversations with another agent. While the other agent could be a human, FAIR (Facebook AI Research) used a fixed supervised model that was trained to imitate humans,” the researchers explained in their blog entry.

“The second model is fixed, because the researchers found that updating the parameters of both agents led to divergence from human language as the agents developed their own language for negotiating.”
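
The blog doesn’t show code or name a framework, but the idea of “fixing” one agent is easy to sketch. Here is a hypothetical PyTorch illustration using stand-in models: the frozen partner still shapes the learner’s training signal, yet its own weights never move.

```python
# Hypothetical sketch of freezing one agent while training the other.
# The blog doesn't say what framework FAIR used; PyTorch is assumed
# here, and both "agents" are trivial stand-in models.
import torch
import torch.nn as nn

learner = nn.Linear(16, 16)   # stand-in for the learning agent
partner = nn.Linear(16, 16)   # stand-in for the supervised human-imitator

# Freeze the partner so gradient updates never touch its weights.
for p in partner.parameters():
    p.requires_grad_(False)

# The optimizer only sees the learner's parameters.
optimizer = torch.optim.SGD(learner.parameters(), lr=0.01)

x = torch.randn(1, 16)        # dummy stand-in for a dialogue state
reply = partner(learner(x))   # partner responds to the learner's output
loss = -reply.sum()           # stand-in for a negotiation reward signal
optimizer.zero_grad()
loss.backward()
optimizer.step()              # only the learner's weights move
```

Because the partner was trained to imitate humans and never updates, it keeps pulling the learning agent back toward human-like English instead of drifting along with it.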

It turns out that such ad hoc language development has happened with some regularity at Facebook, as well as in other research efforts. For example, Google’s Translate AI is reported to have quietly created an entire language to help it translate between different human languages.

The reason for this language development isn’t that the AI software is taking over, but rather that its priorities are set for maximum efficiency. The ‘bots received points for striking good deals, but no points were assigned by the researchers for sticking with English, so they didn’t. The researchers published a paper that details how this works, and it’s clear that they could have awarded points for staying in English if they’d so chosen.
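
A toy example makes the incentive problem plain. In the hypothetical reward function below, gibberish like “i can i i everything else” costs an agent nothing unless the researchers explicitly weight a human-likeness term; both the function and its english_weight parameter are invented for illustration.

```python
# A hedged illustration of the point above: the training signal rewarded
# negotiation points only, so nothing penalized drifting out of English.
# Both functions here are made up to show the idea, not FAIR's method.
def looks_like_english(utterance):
    # Toy stand-in for a real language model: penalize the heavy word
    # repetition seen in exchanges like "to me to me to me...".
    words = utterance.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def reward(points, utterances, english_weight=0.0):
    # With english_weight=0.0 (effectively what the experiment used),
    # the reward is negotiation points alone and fluency is irrelevant.
    english_score = sum(looks_like_english(u) for u in utterances) / len(utterances)
    return points + english_weight * english_score

print(reward(7, ["i can i i everything else"]))                      # points only
print(reward(7, ["i can i i everything else"], english_weight=5.0))  # fluency now pays
```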

Originally published on eWeek
