Red Ice News


Facebook’s Artificial Intelligence Agents Creating their own Language is more Normal than People Think

Source: independent.co.uk

Fears that computers are taking over swept the world this week when stories emerged about Facebook's AI creating its own language that researchers couldn't understand. But those fears may be misplaced.

Artificial intelligence experts have sought to calm worries that robots are becoming sentient, or that we are living through the prelude to Terminator.

The messages might seem strange, they agree. But they are explicable and fairly normal in the world of artificial intelligence research.

Some of the discussion between the bots is seen below:

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

The messages didn't seem especially sinister. But the prospect of not being able to understand what an AI was saying, or why it was saying it, unsettled many people, and fed worries about such systems becoming sentient or making decisions we cannot hold them accountable for.

The story came after repeated warnings from many of the most respected minds in the world: people including Stephen Hawking have suggested that artificial intelligence could potentially bring about the end of humanity. Those predictions came to a head days before the story became popular, as Elon Musk and Mark Zuckerberg argued about the dangers of AI – with Mr Zuckerberg saying that the danger had been overstated, after Mr Musk had repeatedly suggested that artificial intelligence could take over the world if it is not properly regulated and restrained.

But artificial intelligence researchers including those involved in the project have looked to calm those worries. 

The idea of a chatbot inventing its own language might sound terrifying, those behind the Facebook research say. But it is actually a long-running part of the way that AI works and is studied – sometimes being encouraged, and at other times happening by itself.

Similar things have been seen in AI work done by Google for its Translate tool and at OpenAI, for instance.

In the case of the recent Facebook study, it was entirely accidental. The agents were simply not told to ensure that they worked using language comprehensible to their human masters – and so didn't.
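The mechanism can be sketched in toy form. This is an illustrative example, not Facebook's actual code: the function and variable names (`task_reward`, `human_likeness`, `vocabulary`, `weight`) are hypothetical. The point is that if the reward scores only the negotiation outcome, nothing anchors the agents to English.

```python
# Illustrative toy sketch of a negotiating agent's reward function.
# All names here are hypothetical, chosen for this example only.

def task_reward(items_won, item_values):
    """Reward from the negotiation outcome alone."""
    return sum(item_values[item] for item in items_won)

def human_likeness(utterance, vocabulary):
    """Crude proxy for 'sounds like English': share of in-vocabulary tokens."""
    tokens = utterance.split()
    if not tokens:
        return 0.0
    return sum(token in vocabulary for token in tokens) / len(tokens)

def total_reward(items_won, item_values, utterance, vocabulary, weight=0.0):
    # With weight=0.0 (the unconstrained setup), drifting away from English
    # costs the agent nothing, so any token string that wins deals is kept.
    # A positive weight rewards staying comprehensible to humans.
    return task_reward(items_won, item_values) + weight * human_likeness(utterance, vocabulary)
```

With `weight=0.0` the agents are graded purely on the deal they strike, so repetitive strings like "to me to me to me" survive as long as they work; adding a comprehensibility term is one common way to keep the language human-readable.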

"While the idea of AI agents inventing their own language may sound alarming/unexpected to people outside the field, it is a well-established sub-field of AI, with publications dating back decades," Dhruv Batra, who worked on the project, wrote on Facebook.

In the case of Facebook's AI, the messages might be incomprehensible but their meaning can be worked out, at least a little. It has been compared to the kinds of shorthand that are developed in all communities of specialists – where words might come to mean specific things to people, but be completely mystifying to anyone who is outside of the group.

Mr Batra also took issue with the phrasing of "shutting down" the chatbots, and said that such a decision was commonplace. Many AI experts have become irritated because some stories said that researchers had panicked and pulled the plug – but in fact the researchers simply changed the AI, not killing the job so much as altering some of the rules it worked by.

"Analyzing the reward function and changing the parameters of an experiment is NOT the same as 'unplugging' or 'shutting down AI'," he wrote. "If that were the case, every AI researcher has been 'shutting down AI' every time they kill a job on a machine."
