Will Artificial Intelligence really become a threat to humanity?
Posted: 23 June 2017 | By Darcie Thompson-Fields
The highly contentious and arguably irresponsible comments from Alibaba founder Jack Ma, suggesting that AI could trigger a third World War, will have done little to inspire confidence in those who harbour fears about intelligent machines.
For some, the two words placed together spark a sense of dread, trepidation or even fear. For others, they represent the beginning of an exciting new digital world with untold benefits and opportunities.
Unfortunately, however, it’s often the former that seems to seep more deeply into people’s consciousness.
It’s perhaps of little surprise, then, that in a recent survey by the British Science Association (BSA), 36% of respondents said they believe AI will eventually take over or destroy humanity.
A history of scaremongering
It’s not difficult to see why. For years now, the subject of AI has had a love-hate relationship with the media and entertainment industry, largely thanks to sensationalist headlines and storytelling that, more often than not, represent AI in a negative light.
It doesn’t take long when discussing AI before all the usual suspects get a mention: 2001: A Space Odyssey (1968), Blade Runner (1982), The Terminator (1984), I, Robot (2004), Ex Machina (2014) et al. A generation of largely damning representations, falling somewhere between science fiction and horror.
Little evidence to back-up ‘Robophobia’
It’s also worth noting that, while fears around AI exist, there is actually very little real-life evidence to back up such anxieties, particularly where robotics is concerned.
In fact, robots have technically been responsible for only two deaths in nearly four decades, and neither was directly associated with machine intelligence.
Robert Williams, a worker at a Ford Motor Company factory in Flat Rock, Michigan, holds his place in history as the first person killed by a robot, after being accidentally struck by an industrial robot arm on January 25, 1979. His family successfully sued for $15 million.
The second was Kenji Urada, a maintenance engineer at a Kawasaki Heavy Industries plant, who was killed in 1981 while working on a broken robot. According to the report, he failed to turn it off completely, and the robot pushed him into a grinding machine with its hydraulic arm.
But it’s not just Hollywood writers instilling fear. The development of autonomous military weapons designed to harm humans, and of supercomputers built to surpass the abilities of the human brain, is causing much debate within the AI community.
Tesla Motors and SpaceX founder and CEO Elon Musk has been particularly vocal about his concerns, describing AI as potentially the biggest threat to humanity and, on one occasion, as more dangerous than nuclear bombs.
Such are his concerns that Musk recently donated $10 million to the Future of Life Institute (FLI) as part of a global research programme to ensure AI remains “beneficial to humanity” and does not run the risk of getting out of control.
“I think we should be very careful about artificial intelligence,” says Musk. “If I had to guess at what our biggest existential threat is, it’s probably that. So, we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
“The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not know which.” Stephen Hawking
Professor Stephen Hawking is another high-profile figure to have stressed the importance of ensuring AI is governed appropriately.
“The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not know which,” said Hawking during the opening of a new AI lab at Cambridge University in 2016.
A report from Forrester stated: “Getting more sophisticated isn’t the same as being able to understand, and computers aren’t going to take over the world because they’ve started outthinking humans or developed evil intentions. But the fear of machines unleashing a major destructive event isn’t as misplaced as it may seem, and Skynet-type scenarios [Terminator] could conceivably evolve if humans misjudge the extent to which they should trust software to make appropriate decisions.
“The greater the degree of unpredictability in an AI-powered system, the greater the likelihood that unforeseen negative outcomes will occur. That’s why humans – and their ability to reason – will remain an essential part of the equation for the foreseeable future, possibly forever.”
American physicist Michio Kaku added: “No one knows when a robot will approach human intelligence, but I suspect it will be late in the 21st century. Will they be dangerous? Possibly. So, I suggest we put a chip in their brain to shut them off if they have murderous thoughts.”
“It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers.” Alan Turing
So, is AI really all about robots that will one day be hell-bent on destroying mankind? Of course it’s not, at least not for a while yet anyway.
Artificial Super Intelligence (ASI) is the term to look out for when thinking of AI as something comparable to what you’ve read in a book or seen on the big screen.
ASI is generally defined as a level of intelligence superior to human intelligence in every respect, one that could, if allowed, take complete control of its own decision making.
This form of AI is something which has been discussed by some of the world’s leading tech companies and world leaders as potentially having a detrimental impact on the human race if not governed correctly.
The idea of ASI is not new and has been discussed, loosely, since the term AI was first coined back in 1956.
Alan Turing, the ‘godfather of AI’, famously stated: “It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control.”
But when will this happen? Again, there is no concrete answer. Oxford University professor, philosopher and author Nick Bostrom wrote in his latest book, ‘Superintelligence’, that, based on several expert surveys, this level of ‘human-level intelligence’ could arrive between 2075 and 2090. Other reports suggest much later, or even not at all.
Perhaps for now, it’s best to leave that one for the kids to worry about.
Industry is taking action
These concerns are not being ignored by the tech industry, and measures are being taken.
In September of last year (2016), US tech giants Microsoft, Facebook, Amazon, Google and IBM joined forces to ensure that the opportunities and benefits of artificial intelligence are maximised in society and that fears around safety are addressed.
The move saw the creation of a new non-profit organisation called ‘Partnership on Artificial Intelligence to Benefit People and Society.’
The main purpose is to collaborate on ways to advance public understanding of AI and its benefits, to research and publish best practices on the challenges and opportunities within the field and to tackle any ethical concerns over trustworthiness, reliability and robustness of the technology.
“A vital voice”
“Over the past five years, we’ve seen tremendous advances in the deployment of AI and cognitive computing technologies, ranging from useful consumer apps to transforming some of the world’s most complex industries, including healthcare, financial services, commerce and the Internet of Things,” commented IBM AI Ethics Researcher Francesca Rossi.
“This partnership will provide consumer and industrial users of cognitive systems a vital voice in the advancement of the defining technology of this century – one that will foster collaboration between people and machines to solve some of the world’s most enduring problems – in a way that is both trustworthy and beneficial.”
Microsoft research managing director Eric Horvitz described the deal as a “historic collaboration” and said early discussions had already proved valuable.
“We’re excited about this historic collaboration on AI and its influences on people and society,” said Horvitz. “We see great value ahead with harnessing AI advances in numerous areas, including health, education, transportation, public welfare, and personal empowerment.
“This partnership will ensure we’re including the best and the brightest in this space in the conversation to improve customer trust and benefit society.”
Ralf Herbrich, Amazon Director
“We’re extremely pleased with how early discussions among colleagues blossomed into a promising long-term collaboration. Beyond folks in industry, we’re thrilled to have other stakeholders at the table, including colleagues in ethics, law, policy, and the public at large. We look forward to working arm-in-arm on best practices and on such important topics as ethics, privacy, transparency, bias, inclusiveness, and safety.”
Amazon Director of Machine Learning Science and Core Machine Learning Ralf Herbrich said he was “excited” about the opportunities the partnership will provide by bringing together the industry’s leading personnel for the first time in such an environment.
“We’re in a golden age of Machine Learning and AI. As a scientific community, we are still a long way from being able to do things the way humans do things, but we’re solving unbelievably complex problems every day and making incredibly rapid progress.
“We are excited to work together in this partnership with thought leaders from both industry and academia.”
DeepMind/Google co-founder and Head of Applied AI Mustafa Suleyman described the partnership as a “huge step forward” for the industry.
“Google and DeepMind strongly support an open, collaborative process for developing AI. This group is a huge step forward, breaking down barriers for AI teams to share best practices, research ways to maximize societal benefits, and tackle ethical concerns.”