Coursera co-founder Andrew Ng argues AI poses no ‘meaningful risk’ of human extinction: 'I don't get it'

Andrew Ng is a globally recognized leader who has been involved in many AI projects

Stanford University adjunct professor and AI scientist Andrew Ng took to Twitter Monday to share his thoughts on rising concerns about whether AI poses a risk of human extinction. 

Ng, a globally recognized leader who has been involved in many AI projects and co-founded Coursera, said he did not share his peers’ concerns about AI. 


FILE: Andrew Ng who is the Founder and CEO of LandingAI and deep learning.ai talks about AI during a keynote session at the Amazon Re:MARS conference on robotics and artificial intelligence at the Aria Hotel in Las Vegas, Nevada on June 6, 2019.  (Mark RALSTON / AFP / Getty Images)

His comments on Twitter came after the Center for AI Safety (safe.ai) released a statement that said: "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." 

Ng noted that many high-profile AI leaders signed the statement, including OpenAI CEO Sam Altman and Microsoft co-founder Bill Gates. 


"I have to admit, I don’t get it," Ng said. "I’m struggling to see how AI could pose any meaningful risk for our extinction." 

Ng conceded that AI does pose many risks like "bias, fairness, inaccurate outputs, job displacement, [and] concentration of power." 

Despite these, Ng believes AI’s net impact will "massively" contribute to society. 

"I don’t see how it can lead to human extinction," Ng said. 


AI generates nuclear or atomic energy. The uses of Artificial Intelligence in the fields of nuclear sciences, applications, power, weapons, and safety. (iStock / iStock)

Ng pledged to dive deeper into this issue and appealed to his audience to share their thoughts on why AI may or may not pose a risk for human extinction. 

Fueled by the release of ChatGPT by San Francisco-based OpenAI, the rise in AI’s popularity has also stoked unease about its negative impact.


Earlier this year, tech CEOs signed a letter calling for a six-month pause on AI labs' training of powerful systems and warned that such technology could threaten "human extinction." 

Conjecture CEO Connor Leahy, who was one of more than 2,000 experts and tech leaders who signed the letter, told Fox News Digital in April that "a small group of people are building AI systems at an irresponsible pace far beyond what we can keep up with, and it is only accelerating." 

Billionaire tech mogul Elon Musk has warned that, if not managed properly, AI could have a calamitous impact on the existence of humanity. 


"AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production in the sense that it has the potential, however small one may regard that probability, but it is not trivial; it has the potential of civilizational destruction," Musk said. 

Fox News’ Bailee Hill contributed to this report.