Tech ethics expert warns AI race will 'end in tragedy' if Washington doesn't act

Tristan Harris said AI could end in 'tragedy'

EXCLUSIVE: Artificial intelligence could pose an existential threat to humanity if left unchecked on its current path, leading technology ethicist Tristan Harris said this week.

Harris spoke with Fox News Digital on the sidelines of the Senate's inaugural AI Insight Forum this week. Lawmakers heard from tech leaders such as Elon Musk and Mark Zuckerberg, union leaders Liz Shuler and Randi Weingarten, and experts including Harris about where they think AI is headed and how best to prepare for it.

"When Senator Schumer asked everyone, ‘Does the government need to get involved to regulate AI?’ Every single person including all the CEOs, Elon, Zuckerberg etc, raise their hand to say yes," Harris said.

Center for Humane Technology co-founder Tristan Harris spoke with Fox News Digital on the sidelines of the Senate's AI Insight Forum.

"I think that was an unprecedented and important step forward, not because I'm excited about government regulation, because I'm excited about having a future that's safe. Because if we don't have something that mitigates this race between AI companies, it will end in tragedy."

When asked whether AI poses an existential threat to humanity, Harris unambiguously said, "Yes."

Harris dismissed the arguments that AI's future is too unpredictable to plan for and that concerns over stifling innovation should stave off regulatory talks.

"We can predict the future, because we know where that race is going, which is, it's not a race to cure cancer and to solve climate change and to invent materials that help us do things, it's a race to just release these new capabilities. Last year, you couldn't take three seconds of someone's voice and then speak in their voice to your bank or to your grandma. Now you can," Harris said.

X (formerly Twitter) CEO Elon Musk leaves the U.S. Senate bipartisan Artificial Intelligence (AI) Insight Forum. (MANDEL NGAN/AFP via Getty Images)

"Why are we releasing these things? Because as soon as one person releases those capabilities, the other companies also have to release more capabilities. So it's the race to release, which is the race to just drive risks in society.

"And the reason that I'm so confident that the current path — unless something changes — is going to end so badly, is because there's so many risks that come from racing to just release as fast as possible."

The tech expert added, however, that he found it "hopeful" that Wednesday's forum attendees appeared so united on the need for regulation, though he said the specific way to go about it had not been discussed in detail.

Leadership Conference on Civil & Human Rights President and CEO Maya Wiley (L) and Meta CEO Mark Zuckerberg attend the AI Insight Forum. (Getty Images)

"I think that what is required is unprecedented for this Congress and for human history," Harris said. "This is an unprecedented technology, which will require an unprecedented response, it's not going to be as simple as passing a law."

He added, "We didn't talk about specific regulatory agencies. Elon, surprisingly, did speak to the fact that . . . 99.99% of the time, he's very happy that the FAA exists. And he's happy that the FDA exists, even though the FDA has problems. It's good to live in a world where you have some kind of referee that's trying to govern an otherwise very crazy race with a very dangerous technology."