Fake through AI

From Competendo - Digital Toolbox
Although AI is intended to create models of reality, it can also be used for the opposite: creating images that are only seemingly real. In the context of information disorder – disinformation or misinformation – such applications have a hugely negative impact on the infosphere: using an existing set of pictures of real people, machines can artificially create realistic new pictures. As a consequence, this opens new opportunities for fraud: are humans still able to recognise whether they are interacting with another human or a machine?

Deep fakes

Deep fakes are computer-generated images superimposed on existing pictures and videos, named after the pseudonymous online account that popularized the technique. Deep-fake systems use deep neural networks to examine the facial movements of one person and then synthesize images of another person’s face making analogous movements. Doing so effectively creates a video of the target person appearing to do or say the things the source person did. The more images used to train a deep-fake algorithm, the more realistic the digital impersonation will be.

In 2018, researchers found a way to reliably tell real videos from deep-fake videos: eye blinking, which is usually missing in deep-fake videos because few images are available online showing a person’s eyes closed (Li, Chang & Lyu, 2018). However, they also warn that their solution is not permanent, as deep-fake algorithms keep evolving.
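The underlying idea can be illustrated with a toy heuristic. This is not the actual method of Li, Chang & Lyu (2018), who train a neural classifier on eye states; it is a simplified sketch that assumes some upstream detector already produces a per-frame probability that the eyes are open, and then flags clips whose blink rate falls far below the human norm of roughly 15–20 blinks per minute.

```python
def count_blinks(eye_open_probs, closed_threshold=0.3):
    """Count closed-eye episodes: runs of consecutive frames whose
    eye-openness score falls below the threshold."""
    blinks = 0
    in_blink = False
    for p in eye_open_probs:
        if p < closed_threshold:
            if not in_blink:
                blinks += 1
                in_blink = True
        else:
            in_blink = False
    return blinks

def looks_synthetic(eye_open_probs, fps=30, min_blinks_per_minute=2):
    """Flag a clip as suspicious if its blink rate is implausibly low
    (humans blink roughly 15-20 times per minute)."""
    minutes = len(eye_open_probs) / (fps * 60)
    if minutes == 0:
        return False
    rate = count_blinks(eye_open_probs) / minutes
    return rate < min_blinks_per_minute
```

A one-minute clip with no closed-eye frames at all would be flagged, while footage with regular blinks would pass; real detectors must of course cope with noisy eye-state estimates, which is why the published approach learns this pattern rather than hard-coding it.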

Tools for educational practice

In particular, we find the following tools and illustrations useful for embedding in learning about AI, about the digital representation of reality, and about information disorder:


This photo editing tool for Android and iPhone can be used to alter one’s appearance, including age and gender (Ewe, 2021).

This person does not exist

The website www.thispersondoesnotexist.com features photos created with generative adversarial networks, a recent AI technology. Every time the site is refreshed, a realistic artificial face appears. Former Uber software engineer Phillip Wang created the page to raise awareness of the potential damage this technology can cause, as it can be employed to create deep fakes that spread misinformation.

Human vs. AI test

Kazimierz Rajnerowicz created a test to see whether internet users are able to pick up on clues and recognize photos, artwork, music, and texts created by Artificial Intelligence.

Deep Reckonings

A series of explicitly marked deep-fake videos that imagine the most morally courageous versions of the most controversial public figures. Created by Stephanie Lepp.

Many fake pictures and videos are produced with the help of generative adversarial networks (GANs). Two neural networks are trained against each other in order to generate a more realistic result. While we present here the negative applications of this technology, we must also mention that it is very useful in scientific contexts: for example, it has been used in astronomy to model the distribution of dark matter and in medicine to detect glaucoma.
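The adversarial principle can be sketched in a few lines. Below is a minimal, illustrative 1-D GAN in NumPy: the generator maps random noise to samples, the discriminator scores samples as real or fake, and each is updated against the other with hand-derived gradients. The "real data" distribution, network shapes, and learning rate are all toy choices for illustration; real deep-fake systems use large image-generating networks.

```python
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN, REAL_STD = 4.0, 0.5       # the toy "real data" distribution

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Generator: x_fake = a * z + b, with noise z ~ N(0, 1)
# Discriminator: D(x) = sigmoid(w * x + c), probability that x is real
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    fake = a * rng.normal(0.0, 1.0, batch) + b

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    g_real = sigmoid(w * real + c) - 1.0   # d(-log D(real)) / d logit
    g_fake = sigmoid(w * fake + c)         # d(-log(1 - D(fake))) / d logit
    w -= lr * (np.mean(g_real * real) + np.mean(g_fake * fake))
    c -= lr * (np.mean(g_real) + np.mean(g_fake))

    # Generator update: push D(fake) toward 1 (fool the discriminator)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    g = (sigmoid(w * fake + c) - 1.0) * w  # d(-log D(fake)) / d x_fake
    a -= lr * np.mean(g * z)
    b -= lr * np.mean(g)

samples = a * rng.normal(0.0, 1.0, 500) + b
print("generated mean/std:", samples.mean(), samples.std())
```

As training alternates, the generator is pushed toward samples the discriminator cannot distinguish from the real distribution; scaled up from one number to millions of pixels, the same dynamic produces the realistic artificial faces described above.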


  • Illustration: Felix Kumpfe/Atelier Hurra
  • Li, Y.; Chang, M.; Lyu, S. (2018). In Ictu Oculi: Exposing AI Created Fake Videos by Detecting Eye Blinking. 2018 IEEE International Workshop on Information Forensics and Security (WIFS), pp. 1-7. https://doi.org/10.1109/WIFS.2018.8630787

Valentina Vivona

Researcher at the Osservatorio Balcani Caucaso Transeuropa (OBCT), a think tank focused on South-East Europe, Turkey and the Caucasus located in Trento (Italy).

Handbook for Facilitators: Learning the Digital


This text was published in: M. Oberosler (ed.), E. Rapetti (ed.), N. Zimmermann (ed.), G. Pirker, I. Carvalho, G. Briz, V. Vivona (2021/22). Learning the Digital. Competendo Handbook for Facilitators.

Created in the frame of the project DIGIT-AL - Digital Transformation Adult Learning for Active Citizenship.




