'Godfather of artificial intelligence' says AI is close to being smarter than us, could end humanity

Hinton, a former Google engineer, says he wants to 'blow the whistle' on the dangers of artificial intelligence

Geoffrey Hinton, the "godfather of artificial intelligence," who left his prestigious job at Google this week, is sounding the alarm on the dangers AI poses to humanity.

Hinton is a computer scientist who worked as a vice president and fellow at Google for more than a decade and is responsible for a 2012 tech breakthrough that serves as the foundation of current AIs like ChatGPT. In media interviews since leaving Google, he has said he felt compelled to "blow the whistle" on the technology he pioneered, fearing that it's becoming too powerful. 

"I'm just a scientist who suddenly realized that these things are getting smarter than us," Hinton told CNN in an interview Tuesday.

"I want to sort of blow the whistle and say, ‘We should worry seriously about how we stop these things getting control over us,’" he added.

Computer scientist Geoffrey Hinton poses at Google's Mountain View, Calif., headquarters on March 25, 2015. (AP Photo/Noah Berger, File / Getty Images)

Hinton announced his resignation from Google on Monday in a statement to the New York Times. "It is hard to see how you can prevent the bad actors from using it for bad things," he said.

Hinton is not alone in his concerns. Shortly after the Microsoft-backed startup OpenAI released its latest AI model called GPT-4 in March, more than 1,000 researchers and technologists, including Elon Musk, signed a letter calling for a six-month pause on AI development because, they said, it poses "profound risks to society and humanity."

What makes AI technology potentially smarter than humans is the sheer volume of information that models like OpenAI's GPT-4 have access to, Hinton explained to the MIT Technology Review. The AI knows "hundreds of times more" than a single human can, and it may have a "much better learning algorithm" than humans do, making it more efficient at cognitive tasks.

Hinton argues that GPT-4 has demonstrated an ability to learn new things very quickly once trained by researchers. Where human beings need to take time to learn and share information with each other, AI systems can accomplish this instantaneously, which Hinton says creates a potential for these models to outsmart humans.

"It’s a completely different form of intelligence," he told the publication. "A new and better form of intelligence."

Google Bard vs. OpenAI ChatGPT displayed on a mobile device in a photo illustration. (Jonathan Raa/NurPhoto via Getty Images)

Hinton's major AI breakthrough came while he was working with two graduate students in Toronto in 2012. The trio successfully created an algorithm that could analyze photos and identify common objects, such as dogs and cars, according to the NYT.

The algorithm was primitive compared with what current AI systems like OpenAI's ChatGPT and Google's Bard are capable of. Google purchased the company Hinton started around the algorithm for $44 million shortly after the breakthrough.

One of the graduate students who worked on the project with Hinton, Ilya Sutskever, now works as OpenAI's chief scientist.

The danger of these programs, according to Hinton, lies with bad actors who could use them to spread misinformation, sway elections or even wage war. Individual criminals, terrorist groups and rogue nation-states could all exploit the technology.

"Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians," Hinton told the MIT Technology Review. "He wouldn’t hesitate."

Geoffrey Hinton worked on early AI development and made a major breakthrough in 2012, but he now says AI is too dangerous. (Getty Images)

To preempt misuse of AI, Hinton is calling for a global agreement, similar to the 1997 Chemical Weapons Convention, establishing international rules against weaponized AI. Even that compact, however, did not stop what investigators determined were likely Syrian attacks on civilians using chlorine gas and the nerve agent sarin in 2017 and 2018, during the country's bloody civil war.

The White House on Thursday announced its own plan to promote the responsible use of AI, focused on government-funded research and partnerships with tech companies.

President Biden is proposing $140 million in federal funding to launch seven new artificial intelligence research institutes. His plan would also require federal agencies to draft guidelines for safe government use of AI and seek commitments from top tech companies to participate in a public evaluation of AI systems.

"AI is one of the most powerful technologies of our time, but in order to seize the opportunities it presents, we must first mitigate its risks," a White House press release stated. "President Biden has been clear that when it comes to AI, we must place people and communities at the center by supporting responsible innovation that serves the public good, while protecting our society, security, and economy. Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public."

FOX Business' Anders Hagstrom, Patrick Hauf and the Associated Press contributed to this report.