
Could AI really become a threat to humanity? The experts have their say

Posted: 27 April 2017 | By Darcie Thompson-Fields

Whether AI is something to fear is a matter of some debate in the AI community, and one that appears to have no right or wrong answer.

The good news, for now at least, is that AI has not yet reached a super-intelligent level at which the world could realistically see lifelike, Arnold Schwarzenegger-style killer humanoids blending in unnoticed with the human race.

Fact, not fiction

But the subject is not one to simply dismiss as fiction. Far from it, in fact. Autonomous weapons being developed by militaries to harm humans, and supercomputers being built to surpass the abilities of the human brain, are the focus of much of that debate.

Tesla Motors and SpaceX founder and CEO Elon Musk has been particularly vocal about his concerns, describing AI as potentially the biggest threat to humanity and once calling it more dangerous than nuclear bombs.

Such are his concerns that Musk recently donated $10 million to the Future of Life Institute (FLI) as part of a global research programme to ensure AI remains “beneficial to humanity” and does not run the risk of getting out of control.

“I think we should be very careful about artificial intelligence,” says Musk. “If I had to guess at what our biggest existential threat is, it’s probably that. So, we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

The best or worst thing in history

Professor Stephen Hawking is another high-profile figure to voice anxieties about ensuring AI is governed appropriately.

“The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not know which,” he said during an event in Cambridge in 2016.

American physicist Michio Kaku added: “No one knows when a robot will approach human intelligence, but I suspect it will be late in the 21st century. Will they be dangerous? Possibly. So, I suggest we put a chip in their brain to shut them off if they have murderous thoughts.”

But others have all but dismissed the threat altogether, believing it highly unlikely that anyone would build an AI that decides to harm humans.

“Unforeseen risks”

Roboticist, AI author and entrepreneur Rodney Brooks said: “Will someone accidentally build a robot that takes over from us? And that’s sort of like this lone guy in the backyard, you know – ‘I accidentally built a 747.’”

Market forecast experts at Forrester also share concerns about the potential future “unforeseen” risks that AI could bring.

The firm suggests it would be unwise to ignore these issues, adding its weight to the argument that humans must remain an “essential part of the equation” when it comes to advancements in AI.

A report from Forrester stated: “Getting more sophisticated isn’t the same as being able to understand, and computers aren’t going to take over the world because they’ve started outthinking humans or developed evil intentions. But the fear of machines unleashing a major destructive event isn’t as misplaced as it may seem, and Skynet-type scenarios [Terminator] could conceivably evolve if humans misjudge the extent to which they should trust software to make appropriate decisions.

“The greater the degree of unpredictability in an AI-powered system, the greater the likelihood that unforeseen negative outcomes will occur. That’s why humans — and their ability to reason — will remain an essential part of the equation for the foreseeable future, possibly forever.”

This article is one of many found in our Introduction to AI report, which can be viewed for free here 
