Why Facebook Shut Down the AI That Went Rogue

Facebook abandoned an experiment after two artificially intelligent programs appeared to be chatting to each other in a strange language only they understood.

The two chatbots came to create their own modifications of English that made it easier for them to work – but which remained mysterious to the humans who were supposed to be looking after them.

The company chose to shut down the chats because “our interest was having bots who could talk to people”, researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they wanted the bots to behave differently.)

The bizarre discussions came as Facebook challenged its chatbots to negotiate with each other over a trade, attempting to swap hats, balls and books, each of which was given a certain value. But the talks quickly broke down as the robots appeared to chant at each other in a language that they each understood but which appeared mostly incomprehensible to humans.

The robots had been instructed to work out how to negotiate between themselves, and improve their bartering as they went along. But they were not told to use comprehensible English, allowing them to create their own “shorthand”, according to researchers.
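To make the setup concrete, here is a minimal sketch of the kind of objective described above. The item counts, valuations and split below are illustrative numbers, not Facebook's actual data; the point is that each agent is scored purely on the value of the items it ends up with, so nothing in the objective rewards speaking comprehensible English:

```python
# Pool of items being divided between the two agents (illustrative counts)
ITEMS = {"hats": 3, "balls": 2, "books": 1}

# Each agent assigns its own private value to every item type
# (hypothetical valuations for the sake of the example)
agent_a_values = {"hats": 1, "balls": 3, "books": 2}
agent_b_values = {"hats": 2, "balls": 1, "books": 4}

def score(allocation, values):
    """Total value an agent gets from its share of the items."""
    return sum(count * values[item] for item, count in allocation.items())

# One possible negotiated split: A takes the balls, B takes hats and books
share_a = {"hats": 0, "balls": 2, "books": 0}
share_b = {"hats": 3, "balls": 0, "books": 1}

print(score(share_a, agent_a_values))  # A's payoff: 2 * 3 = 6
print(score(share_b, agent_b_values))  # B's payoff: 3*2 + 1*4 = 10
```

Because only the final allocation is scored, an agent that drifts into private shorthand loses nothing – which is exactly the loophole the bots exploited.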

In experimenting with language learning, the research algorithm ended up deviating from human language in a way that wasn’t really useful: it started generating what one might call “functional gibberish”. It was functional in that it continued to carry information, but it wasn’t very efficient or useful.

You could compare it to the weird routes Google Maps sometimes generates: sure, they might save thirty seconds of driving, but they involve ten turns down obscure side streets instead of three turns on main streets.

The result was intriguing because it showed the algorithm’s capacity for generating its own encoding scheme, and also showed what can happen with unconstrained feedback in an automated social language product.

The algorithm wasn’t “shut down” any more than countless other algorithms are shut down every time engineers change their code. Most likely the algorithm was only running on a test dataset and was never “live” or interacting with real humans.

Could this idea, taken to its logical extreme, lead one day to software being “alive” and “conscious”? Maybe. It is certainly an intriguing possibility.

This question originally appeared on Quora.
