Startup companies, government agencies and academics are racing to combat so-called deepfakes, amid fears that doctored videos and photographs will be used to sow discord ahead of next year’s U.S. presidential election.
It is a difficult problem to solve because the technology needed to manipulate images is advancing rapidly and getting easier to use, according to experts. And the threat is spreading, as smartphones have made cameras ubiquitous and social media has turned individuals into broadcasters, leaving companies that run those platforms unsure how to handle the issue.
“While synthetically generated videos are still easily detectable by most humans, that window is closing rapidly. I’d predict we see visually undetectable deepfakes in less than 12 months,” said Jeffrey McGregor, chief executive officer of Truepic, a San Diego-based startup that is developing image-verification technology. “Society is going to start distrusting every piece of content they see.”
Truepic is working with Qualcomm Inc.—the biggest supplier of chips for mobile phones—to add its technology to the hardware of cellphones. The technology would automatically mark photos and videos when they are taken with data such as time and location, so that they can be verified later. Truepic also offers a free app consumers can use to take verified pictures on their smartphones.
The goal is to create a system similar to Twitter's method of verifying accounts, but for photos and videos, said Roy Azoulay, the founder and CEO of Serelay, a U.K.-based startup that is also developing ways to stamp images as authentic when they are taken.
When a photo or video is taken, Serelay can capture data such as where the camera was in relation to cellphone towers or GPS satellites. The company says it has partnerships with insurance companies that use the technology to help verify damage claims, though it declined to name the firms.
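The point-of-capture approach described above can be sketched in miniature. The following is a hypothetical illustration, not Truepic's or Serelay's actual method (their implementations are proprietary): at capture time, a hash of the image is bundled with time and location metadata and signed with a device key; later, anyone holding the key can check that neither the pixels nor the metadata changed.

```python
import hashlib
import hmac
import json
import time

# Hypothetical per-device secret; real systems would use hardware-backed keys.
SECRET_KEY = b"device-secret"

def sign_capture(image_bytes, lat, lon):
    """Bundle a hash of the image with capture metadata and sign the bundle."""
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "timestamp": time.time(),
        "lat": lat,
        "lon": lon,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(image_bytes, record):
    """Recompute the signature; any change to pixels or metadata breaks it."""
    claimed = dict(record)
    sig = claimed.pop("sig")
    if hashlib.sha256(image_bytes).hexdigest() != claimed["sha256"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Altering even one pixel, or editing the stored location, invalidates the signature, which is what makes the record useful for later verification, such as the insurance-claim checks Serelay describes.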
The U.S. Defense Department, meanwhile, is researching forensic technology that can be used to detect whether a photo or video was manipulated after it was made.
The idea behind the forensic approach is to look for inconsistencies in pictures and videos that serve as clues to whether the images have been manipulated—for example, inconsistent lighting, shadows and camera noise.
Sophisticated deepfakes call for other forensic strategies. Experts have found evidence of deepfakes by looking at inconsistencies in facial expressions and head movements. They then try to automate the process so that a computer algorithm can detect such inconsistencies in pictures or videos.
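One of the simplest forensic cues mentioned above, camera noise, can be illustrated with a toy check. This is a sketch of the general idea, not any agency's actual algorithm: genuine sensor noise tends to be fairly uniform across a frame, so a region whose local variance differs sharply from the rest of the image is a candidate for a splice.

```python
def block_variances(image, block=4):
    """Split a grayscale image (a list of rows of pixel values) into
    block x block tiles and return each tile's pixel variance."""
    h, w = len(image), len(image[0])
    variances = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            px = [image[y][x]
                  for y in range(by, min(by + block, h))
                  for x in range(bx, min(bx + block, w))]
            mean = sum(px) / len(px)
            variances.append(sum((p - mean) ** 2 for p in px) / len(px))
    return variances

def flag_inconsistent_blocks(image, block=4, ratio=4.0):
    """Flag tiles whose variance deviates strongly from the median tile,
    a crude stand-in for real noise-consistency forensics."""
    v = block_variances(image, block)
    med = sorted(v)[len(v) // 2]
    return [i for i, var in enumerate(v)
            if med > 0 and (var > med * ratio or var < med / ratio)]
```

Real forensic tools model sensor noise, lighting and shadow geometry far more carefully, but the underlying logic is the same: find regions whose statistics disagree with the rest of the frame.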
The forensic method can be applied to decades-old photos and videos, as well as those taken more recently with smartphones or digital cameras. The point-of-capture method, by comparison, only works with images taken on devices equipped with the verification technology.
Both strategies are necessary to tackle the deepfake problem, said Matt Turek, who runs the media forensics program in the Defense Department’s Defense Advanced Research Projects Agency, or Darpa.
“I don’t think there’s one silver bullet algorithm or even technical solution. There probably needs to be a holistic approach,” he said.
Deepfakes are becoming more difficult to detect as the technology used to create them advances, said Hany Farid, a computer science professor at the University of California, Berkeley, who has a financial stake in Truepic and whose research on media forensics has been funded by Darpa.
People who create deepfakes are constantly adapting to attempts to detect the manipulations, said Mr. Farid. Some combine the work of two different computer systems, one of which alters the images while the other tries to determine whether the result can be distinguished from authentic content, he said.
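The two-system arrangement Mr. Farid describes is what machine-learning researchers call a generative adversarial network. The toy below is a deliberate caricature, with a single number standing in for an image and the generator reading the discriminator's boundary directly (real systems use deep neural networks and gradients); it only illustrates the arms-race dynamic, in which each side's update makes the other's job harder.

```python
import random

random.seed(0)  # reproducible toy run

def discriminator_score(x, threshold):
    """Toy discriminator: anything at or above the threshold looks 'real'."""
    return 1.0 if x >= threshold else 0.0

def train_toy_gan(real_mean=10.0, steps=200, lr=0.1):
    """Caricature of adversarial training. 'Real' samples cluster near
    real_mean; the generator starts far away and learns to blend in."""
    fake = 0.0        # the generator's one-number "image"
    threshold = 5.0   # the discriminator's decision boundary
    for _ in range(steps):
        real = real_mean + random.uniform(-1, 1)
        # Discriminator: pull the boundary toward the midpoint of real and fake.
        threshold += lr * ((real + fake) / 2 - threshold)
        # Generator: if currently caught, push the fake output past the boundary.
        if discriminator_score(fake, threshold) == 0.0:
            fake += lr * (threshold + 1 - fake)
    return fake, threshold
```

After training, the fake value ends up in the neighborhood of the real samples, which is the dynamic that makes deepfakes progressively harder to detect: any fixed detection rule becomes a training signal for the forger.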
The stakes are high. In extreme cases, Mr. Farid said, a deepfake could trigger a military conflict or other real-life turmoil. “A fake video of Jeff Bezos secretly saying that Amazon’s profits are down leads to a massive stock manipulation,” he said, citing one possible scenario.
Mr. Farid said it is worrying that social-media companies aren’t doing more to combat deepfakes, particularly in the wake of Russian interference in the 2016 presidential election, which Moscow has denied.
“These platforms have been weaponized, and these aren’t hypothetical threats,” he said.
The House Intelligence Committee held a hearing last month focused on countering the threat from deepfakes. Members of Congress and experts suggested, among other things, holding social-media companies liable for harmful material disseminated over their platforms and putting warning labels on videos that can’t be verified.
Last month, an altered video of Facebook Inc. Chief Executive Mark Zuckerberg surfaced in which he appeared to question the company’s data practices. That followed Facebook’s refusal to remove a doctored video of House Speaker Nancy Pelosi in which she appeared to slur her words.
In the wake of those incidents, Mr. Zuckerberg told an audience at the Aspen Ideas Festival last month that Facebook is considering a policy on how to handle deepfakes.
“There is a good case that deepfakes are different from traditional misinformation,” he said, “just like spam is different from traditional misinformation and should be treated differently.”