Tech experts slam letter calling for AI pause that cited their research: 'Fearmongering'

The letter calling for an AI pause has more than 2,000 signatures, including Elon Musk's

Artificial intelligence experts who were cited in an open letter calling for a pause on AI research have distanced themselves from the letter and slammed it for "fearmongering."

"While there are a number of recommendations in the letter that we agree with (and proposed in our 2021 peer-reviewed paper known informally as ‘Stochastic Parrots’), such as ‘provenance and watermarking systems to help distinguish real from synthetic’ media, these are overshadowed by fearmongering and AI hype, which steers the discourse to the risks of imagined ‘powerful digital minds’ with ‘human-competitive intelligence,’" Timnit Gebru, Emily M. Bender, Angelina McMillan-Major and Margaret Mitchell wrote in a statement on Friday. 

The four tech experts were cited in an open letter published earlier this week calling for a minimum six-month pause on training powerful AI systems. The letter has racked up more than 2,000 signatures as of Saturday, including those of Tesla and Twitter CEO Elon Musk and Apple co-founder Steve Wozniak.

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs," the letter begins. The open letter was published by the Future of Life Institute, a nonprofit that "works on reducing extreme risks from transformative technologies," according to its website. 

Google AI research scientist Timnit Gebru speaks onstage during Day 3 of TechCrunch Disrupt SF 2018 at Moscone Center on September 7, 2018, in San Francisco, California. (Kimberly White/Getty Images for TechCrunch)

Gebru, Bender, McMillan-Major and Mitchell’s peer-reviewed research paper, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?", is cited in the first footnote on the letter’s opening line, but the researchers say the letter is spreading "AI hype."

"It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse which promises either a ‘flourishing’ or ‘potentially catastrophic’ future," the four wrote. "Such language that inflates the capabilities of automated systems and anthropomorphizes them, as we note in Stochastic Parrots, deceives people into thinking that there is a sentient being behind the synthetic media."

Mitchell previously oversaw ethical AI research at Google and now serves as chief ethics scientist at the AI lab Hugging Face. She told Reuters that while the letter calls for a pause specifically on AI technology "more powerful than GPT-4," it is unclear which AI systems would even meet that threshold.

The "Welcome to ChatGPT" lettering of the U.S. company OpenAI seen on a computer screen. (Silas Stein/picture alliance via Getty Images)

"By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of [Future of Life Institute]," she said. "Ignoring active harms right now is a privilege that some of us don’t have."

Another expert cited in the letter, Shiri Dori-Hacohen, a professor at the University of Connecticut, told Reuters that while she agrees with some of the points made in the letter, she disagrees with how her research was used. 

Dori-Hacohen co-authored a research paper last year, titled "Current and Near-Term AI as a Potential Existential Risk Factor," which argued that widespread use of AI already poses risks and could influence decisions on issues such as climate change and nuclear war, according to Reuters. 

"AI does not need to reach human-level intelligence to exacerbate those risks," she said. 

"There are non-existential risks that are really, really important, but don’t receive the same kind of Hollywood-level attention."

Sam Altman, president of Y Combinator, speaks during the New Work Summit in Half Moon Bay, California, on February 25, 2019. (David Paul Morris/Bloomberg via Getty Images)

The letter argues that AI leaders should "develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."

"In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems," the letter adds. 

Gebru, Bender, McMillan-Major and Mitchell argued that "it is indeed time to act" but that "the focus of our concern should not be imaginary ‘powerful digital minds.’ Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities."

Future of Life Institute president Max Tegmark told Reuters that "if we cite someone, it just means we claim they’re endorsing that sentence."

"It doesn’t mean they’re endorsing the letter, or we endorse everything they think," he said. 

He also shot down criticism that Musk, who donated $10 million to the Future of Life Institute in 2015 and serves as an external adviser, is trying to shut down his competition.

SpaceX owner and Tesla CEO Elon Musk smiles at the E3 gaming convention in Los Angeles, June 13, 2019. (Reuters/Mike Blake/File Photo)

"It’s quite hilarious. I’ve seen people say, ‘Elon Musk is trying to slow down the competition,’" he said. "This is not about one company."

Tegmark said that Musk had no role in drafting the letter. 

When approached for comment, Future of Life Institute directed Fox News Digital to its frequently asked questions page regarding the letter, specifically on whether the letter means the nonprofit isn't "concerned about present harms."

"Absolutely not. The use of AI systems – of any capacity – create harms such as discrimination and bias, misinformation, the concentration of economic power, adverse impact on labor, weaponization, and environmental degradation," the section of its FAQ page reads. 

"We acknowledge and reaffirm these harms and are grateful to the work of many scholars, business leaders, regulators, and diplomats continually working to surface these harms at the national and international level," the page adds. 

Another expert cited in the Future of Life Institute’s letter, Dan Hendrycks of the California-based Center for AI Safety, said he agrees with the letter’s contents, according to Reuters. He argued that it is sensible to account for "black swan events," those that seem unlikely to occur but would have dire consequences if they did, the outlet reported.