Deepfakes are the latest moral panic, but the issues of consent, fake news, and political manipulation they raise are not new. Nor are they issues that can be solved at a purely technological level.
A deepfake is essentially a video of something that didn’t happen, but made to look extremely realistic. That might sound like a basic case of ‘photoshopping’, but deepfakes go way beyond this. By training AI algorithms on vast libraries of photographs of famous people, creators can produce videos that are eerily real, and worryingly convincing.
As a result, plenty of analysts are worried that deepfakes might be used for political manipulation, or even to start World War 3.
Solving these problems is going to be hard, in part because they are an extension of problems that are already evident in the rise of fake news, faked videos, and misinformation campaigns.
What are deepfakes?
If you’ve never seen a deepfake, do a quick Google search for one, and watch the video. If this is your first time, you’re going to be pretty impressed, and possibly quite disturbed.
These videos are made by AIs. Deepfake authors collect a database – as large as possible – of photographs of a person, and an AI then maps these onto a video using a technique known as generative adversarial networks (GANs). Because AIs are developing at a rapid rate, so is the sophistication of deepfakes.
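The core idea of a GAN is an adversarial game: a generator produces fakes while a discriminator learns to tell them from real data, and each improves by competing with the other. The toy sketch below illustrates that dynamic with simple numbers rather than images; it is not a real deepfake pipeline, and all values in it are illustrative.

```python
import random

random.seed(0)

REAL_MEAN = 4.0  # "real" data: samples clustered around 4.0

def real_sample():
    return random.gauss(REAL_MEAN, 0.5)

class Discriminator:
    """Scores samples as 'real-looking' based on its running estimate."""
    def __init__(self):
        self.estimate = 0.0
    def score(self, x):
        # Higher score = looks more like the real data it has seen.
        return -abs(x - self.estimate)
    def train(self, real_x):
        # Move the estimate toward observed real data.
        self.estimate += 0.1 * (real_x - self.estimate)

class Generator:
    """Starts far from the real distribution and adapts to fool the critic."""
    def __init__(self):
        self.mean = -4.0
    def sample(self):
        return random.gauss(self.mean, 0.5)
    def train(self, disc):
        # Nudge output in whichever direction the discriminator scores higher.
        if disc.score(self.mean + 0.1) > disc.score(self.mean):
            self.mean += 0.1
        else:
            self.mean -= 0.1

disc, gen = Discriminator(), Generator()
for _ in range(500):
    disc.train(real_sample())  # discriminator learns what "real" looks like
    gen.train(disc)            # generator adapts to fool it

print(round(gen.mean, 1))  # the generator's output now sits near the real mean
```

In a real GAN both players are neural networks trained by gradient descent on images, but the feedback loop is the same: the generator only ever improves by exploiting weaknesses in the discriminator, which is why the fakes keep getting better.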
It will come as no surprise to learn that deepfakes were first developed for porn, superimposing Hollywood stars’ faces onto other women’s bodies. But since then, the technology has increasingly been used to produce political videos, and by Hollywood itself.
The threat of the technology is certainly real, but let’s get one thing out of the way first: if you are reading this and are worried that you might be the subject of a deepfake, you don’t need to worry (at least yet). The technology relies on thousands of photographs of a person being publicly available, and unless you are a celebrity that’s probably not the case.
There are a few problems raised by deepfakes, but none of them are new.
The most obvious is that deepfake technology allows the creation of convincing videos that can be used for political manipulation. This has, in fact, been the most widely publicized use of the tech to date, with deepfakes of Trump, Alexandria Ocasio-Cortez, and other US politicians currently being shared widely. That’s concerning, but as Samantha Cole at Vice pointed out, carefully edited videos can achieve the same thing via technologies that have been around for decades.
An associated issue is one of consent, which is particularly relevant given the origin of deepfakes in porn. Companies (and our own governments) regularly collect vast amounts of data on the average user, and even when this is done with explicit consent most people are not aware of just how many pictures of them are available online. This, in turn, raises deep questions about the ability of individuals to practice effective reputation management in today’s environment.
These problems have been pointed out by many analysts, but I would also suggest that there is another problem with deepfakes that has been somewhat overlooked: their impact on cybersecurity, and, in fact, security more generally. As we have previously pointed out, the dissemination of misinformation is a big threat to organizational cybersecurity, and others have noted that deepfakes are already having impacts on cybersecurity. Given the prevalence of phishing scams, it’s not hard to imagine that we will soon see deepfakes where a company CEO asks employees for their passwords or other critical pieces of information.
There have generally been two approaches suggested to solving the problems created by deepfakes: use tech to detect fake videos, or improve media literacy.
The tech solution is to try to detect deepfakes using the same kinds of AI that are used to make them. In April, the US Defense Advanced Research Projects Agency (DARPA)’s Media Forensics department awarded nonprofit research group SRI International three contracts for research into the best ways to automatically detect deepfakes. Researchers at the University at Albany also received funding from DARPA to study deepfakes. This team found that analyzing the blinks in videos could be one way to distinguish a deepfake from an unaltered video, simply because there are not that many photographs of celebrities blinking.
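The intuition behind the blink research can be sketched in a few lines. The Albany team’s actual work used neural networks on eye-state sequences; the simplified illustration below just counts blink events from per-frame eye “openness” values and flags clips with implausibly low blink rates. The openness values here are hypothetical inputs – in a real system they would come from a facial-landmark detector – and the thresholds are illustrative, not the researchers’ figures.

```python
def count_blinks(openness, closed_threshold=0.2):
    """Count transitions from eyes-open to eyes-closed across frames."""
    blinks, was_closed = 0, False
    for value in openness:
        is_closed = value < closed_threshold
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks

def looks_suspicious(openness, fps=30, min_blinks_per_minute=5):
    """People blink roughly 15-20 times a minute; far fewer is a red flag."""
    minutes = len(openness) / fps / 60
    if minutes == 0:
        return False
    rate = count_blinks(openness) / minutes
    return rate < min_blinks_per_minute

# A 10-second clip at 30 fps where the eyes never close:
no_blinks = [0.8] * 300
# The same clip with three brief blinks (frames 0-2, 100-102, 200-202):
with_blinks = [0.1 if i % 100 < 3 else 0.8 for i in range(300)]

print(looks_suspicious(no_blinks))    # True: zero blinks in 10 seconds
print(looks_suspicious(with_blinks))  # False: ~18 blinks per minute
```

The weakness of any such heuristic is also apparent: once deepfake authors know blinks are being checked, they can simply add blinking frames to their training data, which is why detection is an arms race rather than a fix.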
This is undoubtedly important research. But it also raises a question: even if a video can be detected as a fake, what then?
There are already plenty of widely shared videos that use editing (and not deepfake technology) to disseminate misinformation. A deepfake might be more convincing, but if you already believe the message being presented, you are unlikely to look for signs that the video is a fake.
Because of this, another solution to deepfakes needs to be found. It involves, as others have pointed out, increasing media literacy among vast swathes of the population, so that they are able to spot ‘fake news’ when they see it. But how this is to be achieved is anyone’s guess.
It’s also worth pointing out, in conclusion, that deepfakes might present an opportunity as well as a set of problems. Whilst giving political speeches is not likely to be one of the jobs that AI will eliminate, the obviously faked nature of many deepfakes may give rise to a more general skepticism about the things we read and see online.
As others have noted, we should thank deepfakes for “making us realize once again that we can’t take everything we see and hear for granted. For creating a problem for us to solve, early on, before it becomes so big, and has influenced so many of us incorrectly, that it’s too late.”