Digital Culture

Deepfakes – Fake News on Steroids?

By Lisa Amann & Nikola John
#deepfake #ai #fakenews
Illustration by Lisa Amann & Nikola John

By now we are well aware that photos can no longer simply be believed. Even with amateurish Photoshop skills, images can be altered so that they look deceptively real. But what does it do to us when we can no longer trust videos, or even voices?

Deepfake video demonstration by Jordan Peele

We're entering an era in which our enemies can make it look like anyone is saying anything at any point in time.

This quote from Jordan Peele's viral video shows how easy it has become to put words into another person's mouth. The video shows Barack Obama, deceptively genuine at first glance, apparently trying to draw attention to the threat posed by manipulated videos. After a few seconds, Peele's face is faded in and you realize that this is a Deepfake video. The term has been circulating on the web for several years now, but what actually are Deepfakes?

What are Deepfakes?

According to Wikipedia, a Deepfake, also written Deep Fake, is a term used since about 2017 for the technique of using artificial intelligence to produce images or videos that look deceptively real but are not. The technique is based on artificial neural networks that generate the fake media largely autonomously. The name is a portmanteau of "deep learning" and "fake". To create a Deepfake video, a computer is "fed" with data (sometimes millions of images or video sequences) and learns to transfer the expressions and movements of one face onto another. Since this is machine learning, no one programs the manipulation by hand; the algorithm learns it by itself. Over time, the technology has developed to the point where less and less data is required to generate a realistic, convincing "AI-synthesized fake".
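How does this work under the hood? Open-source Deepfake tools in the tradition of the original Reddit code are commonly described as autoencoders with one shared encoder and two person-specific decoders. The following PyTorch sketch is purely illustrative: the network sizes, the training step and the random stand-in data are our own assumptions, not any tool's actual implementation.

# A shared encoder learns a compact representation of any face;
# two separate decoders each learn to reconstruct one specific person.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # 256-dim latent "expression" code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a = Decoder()  # reconstructs person A
decoder_b = Decoder()  # reconstructs person B

# One training step: each decoder reconstructs its own person's faces
# through the *shared* encoder. Random tensors stand in for the
# thousands of aligned face crops a real pipeline would use.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)
params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-4)
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) \
     + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)
opt.zero_grad()
loss.backward()
opt.step()

# The actual swap: encode person A's expression, decode with B's decoder,
# yielding person B's face performing person A's expression.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))

Because both decoders must reconstruct faces from the same shared latent code, that code ends up describing pose and expression rather than identity, and this is exactly what makes the swap possible.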

So far, so good. Some of you have probably already seen videos of this kind. It is certainly very entertaining when Jennifer Lawrence suddenly looks like Steve Buscemi while answering questions at the Golden Globe Awards, or when the face of Nicolas Cage graces the bodies of countless Hollywood actresses. In the end, as ordinary consumers of the shallows of the internet, we have only ever encountered Deepfake videos as simple entertainment, so why worry?

Originally, the technique of grafting faces onto other people's bodies was developed for the film and video game industry. One of the oldest examples is a scene in "Forrest Gump" in which the main character meets President John F. Kennedy: original footage of the president was morphed with shots of the actor Tom Hanks, making it look as if the actor were actually interacting with the president. The most famous, and also most tragic, example is probably Paul Walker, who died during the shooting of the Fast and Furious series but, with the help of the gestures and facial expressions of his brothers, was still able to complete his final role. What works with comparatively little data input today required an immense effort in post-production and motion tracking back then.

Unlike the traditional motion tracking used by the film industry, Deepfakes as we know them originate from an account on Reddit. It began in 2017, when pornographic footage featuring the faces of female celebrities was circulated. The term was in fact coined by the Reddit account "deepfakes", which released those recordings. Where exactly the technology came from is unclear. It was made available to the public, however, in the form of a free app called "FakeApp", which has since been downloaded 120,000 times.

The best-known victim of this questionable practice is Scarlett Johansson. In an interview, however, she said that she draws no major consequences from appearing in Deepfake pornography, since everyone generally knows it is not her. Fighting it, she thinks, would be a useless effort, because the internet is a vast wormhole that is about to swallow itself.

In 2018, sites such as Reddit, Twitter and Pornhub banned any form of Deepfake pornography from their platforms. In October 2019, Twitter went a step further and decided to ban political advertising worldwide. This reaction can speculatively be attributed to the influence of fake news, and to the fact that politics is the second most common use of Deepfakes after pornography.

Given the technology's highly questionable original use, ordinary internet users may well ask where Deepfakes will take us. Until now they have mostly appeared as faked images or videos. But as the technology keeps evolving, it is no longer only a person's image that is at risk, but their voice as well.

Adobe Voco Demo

Adobe is currently developing audio software called Adobe Voco, which builds a voice profile of a person from recorded voice samples and then lets you make that voice say whatever your heart desires. Allegedly, only 20 minutes of voice input are needed to capture the intonation and manner of speaking of a specific person. In the age of fake news, this could become even more dangerous. Although Adobe wants to make the software as transparent as possible before releasing it, abuse of the technology can never be ruled out.
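Adobe has not published how Voco works, but voice-cloning systems of this kind are typically described as combining a speaker encoder, which distills the reference recordings into a compact voice embedding, with a synthesizer conditioned on that embedding and on new text. The sketch below is a purely hypothetical illustration of that idea; every module, shape and parameter is our own assumption, not Voco's design.

import torch
import torch.nn as nn

class SpeakerEncoder(nn.Module):
    # Compresses the mel-spectrogram frames of the reference recordings
    # into a single fixed-size "voice" vector (the final GRU hidden state).
    def __init__(self, n_mels=80, dim=128):
        super().__init__()
        self.rnn = nn.GRU(n_mels, dim, batch_first=True)

    def forward(self, mels):              # mels: (batch, frames, n_mels)
        _, h = self.rnn(mels)
        return h[-1]                      # (batch, dim) speaker embedding

class Synthesizer(nn.Module):
    # Maps text tokens plus the speaker embedding to a mel-spectrogram,
    # so the same text can be "spoken" in different cloned voices.
    def __init__(self, vocab=256, dim=128, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim * 2, dim, batch_first=True)
        self.out = nn.Linear(dim, n_mels)

    def forward(self, tokens, speaker):   # tokens: (batch, seq)
        t = self.embed(tokens)
        s = speaker.unsqueeze(1).expand(-1, t.size(1), -1)
        h, _ = self.rnn(torch.cat([t, s], dim=-1))
        return self.out(h)                # (batch, seq, n_mels)

# Random tensors stand in for the "20 minutes" of reference audio
# and for the new sentence the cloned voice is supposed to say.
reference_mels = torch.rand(1, 1200, 80)      # fake mel frames of the target voice
voice = SpeakerEncoder()(reference_mels)      # distill the voice profile
new_text = torch.randint(0, 256, (1, 40))     # fake token ids of the new text
fake_speech = Synthesizer()(new_text, voice)  # mel frames; a vocoder would make audio

A real system would be trained on hours of speech and would pass the predicted spectrogram through a vocoder to produce audible sound; the point here is only the separation between a reusable voice profile and arbitrary new text.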

The whole subject can seem very intimidating to an ordinary internet user. In the age of Photoshop, we are all aware that digital manipulation is anything but impossible. But now that we have reached a point where you cannot even trust a person's voice unless that person is standing in front of you, it is only natural to grow increasingly unsure about the reliability of online sources. The journalist Jeremy Kahn even went so far as to call Deepfakes "fake news on steroids", which adds to the uncertainty.

Ultimately, it cannot and must not be denied that many technologies always carry the potential to be used maliciously. That is unlikely to change. Nevertheless, the potential of the technology should not be underestimated. We, as future designers, can help put it into a useful context, but the main question is: how?

Google Duplex assistant

Google set a good example when it introduced the Duplex assistant, which makes appointments for the user at restaurants or the hairdresser. With the help of deep learning, natural-sounding speech patterns were used to create an assistant that handles such tasks for the user. There is a wide range of ways the technology can be used, but the great danger is that it could fall into the wrong hands and cause real harm, whether in pornography or in the realm of con artists. Imagine the worst case, in which manipulated voice recordings trigger a (nuclear) war that nobody really wants...

Although we are aware that Deepfakes are mostly used for questionable purposes and very simple humor, we wonder whether there is a chance to use them for meaningful ones. We believe the entertainment industry could certainly continue to benefit from them. Who wouldn't want to be part of a James Bond blockbuster?

In addition, a digital twin might make personalized use of digital environments possible, which could bring advantages in terms of anonymity on the internet.

Furthermore, there are already approaches that use Deepfake technology in the health sector. One such case is Project Revoice, initiated by the ALS Association: the voices of people suffering from ALS are recorded and archived so that, with the help of Deepfake technology, they can be synthesized later on, letting patients keep speaking in their own voice and giving relatives something of the person to hold on to. But is this approach ethically justifiable? Shouldn't some things, and some people, also have the right to disappear? Perhaps that question is too philosophical for this article. Ultimately, though, we can imagine that the possible applications of this technology are almost unlimited.

This article is not meant to scare you as readers, but we cannot give you a "recipe" for how best to deal with this topic. As authors, we don't know exactly how we feel about Deepfakes ourselves, because depending on the context they can unsettle us just as much as they may unsettle you. The purpose of this article is to encourage lively discussion. That is why we would like to end with a quote from Henry Ajder, a Deepfake specialist at the Dutch company Deeptrace, which has made it its business to confront Deepfakes with a kind of "antivirus".

Deepfakes do pose a risk to politics in terms of fake media appearing to be real, but right now the more tangible threat is how the idea of deepfakes can be invoked to make the real appear fake.

Henry Ajder

Questions or feedback? Get in touch and write us an email at feedback@digitalculture.info.

Digital Culture

Critical reflections on current developments in digital technologies and on our role as designers in shaping and sketching the future. These texts were written by students at HfG Schwäbisch Gmünd in the course "Digital Culture" under the supervision of Prof. Andreas Pollok; the topics were freely chosen.