This was not the first time. In 2014, Musk made headlines by saying that scientists developing AI software are “summoning the demon” and that AI constitutes humanity’s biggest existential threat.
With his Axios interview, Musk doubled down, declaring that because AI will become vastly more intelligent than people, humans could end up like gorillas—relegated to small pockets of the planet and confined to zoos.
This sounds a little better than his earlier warnings that our AI overlords might go so far as to exterminate the human race, à la the Terminator, but it is still utter claptrap.
Musk gets attention for these kinds of outlandish statements because he is a successful and swashbuckling technology entrepreneur. But his expertise is in engineering, not computer science. Computer scientists who actually build and study AI have a completely different view.
Take AI scientist George Zarkadakis, who writes, “Do computers really ‘think’? Is ‘intelligence’ the same thing as ‘consciousness’? Is the brain a ‘computer’? Unfortunately, we do not seem to care enough about answering these sorts of questions properly nowadays. In our modern world of mass media and short attention spans, words are increasingly used as flashing slogans.”
He goes on to explain that computers and artificial intelligence surpass us in specific subsets of intelligence, but brute computing power can’t equal the whole spectrum of the human brain’s cognitive abilities.
And to the question of whether AI will exterminate us, Pedro Domingos, an AI researcher at the University of Washington and author of The Master Algorithm, is even more blunt. He writes, “The Terminator scenario, where a super-AI becomes sentient and subdues mankind with a robot army, has no chance of coming to pass.”
The problem with Musk’s self-promoting techno-babble is that many people might actually take it seriously, including some policymakers. And if AI really were coming for us, then there would only be one appropriate response—ban it worldwide, or at least heavily restrict it.
The last thing policymakers should do in the face of a doomsday scenario is proactively support AI, including by increasing funding for the universities that develop it. What legislator would want to be known as the godparent of the technology that destroyed the human race? Yet proactive support for AI is exactly what we need.
Make no mistake: AI promises enormous benefits to society. Already, AI is the secret sauce in the self-driving cars that Google and Tesla are testing. It’s in our smartphones, powering services such as Siri, Google Now, Alexa, and Cortana, which interpret our speech to give us timely answers to everyday questions.
Search engines like Google use artificial intelligence to generate search results and translate languages in real time. AI is being used for medical diagnoses. And these applications are only the beginning, provided we don’t give in to unwarranted paranoia.
It’s time to recognize Elon Musk for what he is: a great promoter of Elon Musk. He is not an expert on the future of artificial intelligence. This should be clear, because anyone who seriously thinks that we could be like Neo living in the Matrix (“maybe we’re in a simulation,” he suggested to Axios) is not someone who should be taken seriously about our technological future.
Robert D. Atkinson (@RobAtkinsonITIF) is president of the Information Technology and Innovation Foundation, the world’s leading think tank for science and technology policy.