In a story Sept. 13 about bogus videos, The Associated Press reported erroneously the name of a lawmaker who signed a letter seeking an intelligence assessment on technology that lets anyone make fake videos of real people saying things they never said.
The lawmaker is Rep. Adam Schiff, D-Calif., not Rep. Adam Smith, D-Calif.
A corrected version of the story is below:
Lawmakers want US intelligence assessment on fake videos
Three lawmakers want to see a U.S. intelligence assessment of the threat posed by technology that lets anyone make fake videos of real people saying things they never said.
By DEB RIECHMANN
WASHINGTON (AP) — A bipartisan group of lawmakers on Thursday asked for a U.S. intelligence assessment of the threat posed by technology that lets anyone make fake, but realistic, videos of real people saying things they've never said.
The rising capabilities of the technology are fueling concerns that it could be used to make a bogus video, for example, of an American politician accepting a bribe, or of a U.S. leader or an adversarial foreign leader warning of an impending disaster.
Three lawmakers wrote a letter to National Intelligence Director Dan Coats asking his office to assess how these bogus, high-tech videos — known as deepfakes — could threaten U.S. national security.
"By blurring the line between fact and fiction, deepfake technology could undermine public trust in recorded images and videos as objective depictions of reality," wrote Reps. Adam Schiff, D-Calif., Stephanie Murphy, D-Fla., and Carlos Curbelo, R-Fla.
"We are deeply concerned that deepfake technology could soon be deployed by malicious foreign actors."
Deepfakes are not lip-syncing videos that are obvious spoofs. This technology uses facial mapping and artificial intelligence to produce videos that appear so genuine it's hard to spot the phonies. Republicans and Democrats predict this high-tech way of putting words in someone's mouth will become the latest weapon in disinformation wars against the United States and other Western democracies.
The lawmakers asked the intelligence agencies to submit a report to Congress by mid-December describing the threat and possible countermeasures the U.S. can develop or employ to protect the nation.
Realizing the implications of the technology, the U.S. Defense Advanced Research Projects Agency is already two years into a four-year program to develop technologies that can detect fake images and videos. Right now it takes extensive analysis to separate phony videos from the real thing. It's unclear if new ways to weed out the fakes will keep pace with technology used to make them.
Deepfakes are so named because they use deep learning, a form of artificial intelligence. They are made by feeding lots of images and audio of a certain person to a computer algorithm, or set of instructions. The program learns how to mimic the person's facial expressions, mannerisms, voice and inflections. With enough video and audio of someone, a fake video of the person can be combined with fake audio to make them say anything you want.