Deep fakes
Computer-generated images superimposed on existing pictures and videos, named after the pseudonymous online account that popularized the technique. Deep-fake software uses deep neural networks to examine the facial movements of one person and then synthesizes images of another person's face making analogous movements, effectively creating a video of the target person appearing to do or say the things the source person did. The more images used to train a deep-fake algorithm, the more realistic the digital impersonation will be.
In 2018, researchers found a way to reliably tell real videos from deep-fake videos: eye blinking, which is usually missing in deep-fake videos because few images showing people with their eyes closed are available online (Li, Chang & Lyu, 2018). However, they also warn that their solution is not permanent, as the algorithms keep evolving.
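The intuition behind the blink test can be illustrated with a small sketch. This is not the classifier from Li, Chang & Lyu (2018), which uses a neural network on eye regions; it is a hypothetical heuristic that assumes some face-tracking tool has already produced a per-frame "eye openness" score, and simply flags clips whose blink rate falls far below the human norm of roughly 15-20 blinks per minute.

```python
def count_blinks(openness, threshold=0.2):
    """Count full blinks: the eye closes (score dips below the
    threshold) and then reopens (score rises back above it)."""
    blinks, closed = 0, False
    for score in openness:
        if score < threshold and not closed:
            closed = True          # eye just closed
        elif score >= threshold and closed:
            closed = False
            blinks += 1            # eye reopened: one full blink
    return blinks

def looks_synthetic(openness, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose blink rate is implausibly low for a human."""
    minutes = len(openness) / fps / 60
    return count_blinks(openness) / minutes < min_blinks_per_minute

# A simulated 60-second clip at 30 fps: the "real" person blinks for
# 3 frames every 6 seconds; the "deep fake" never closes its eyes.
real_clip = [0.05 if (i % 180) < 3 else 1.0 for i in range(1800)]
fake_clip = [1.0] * 1800

print(looks_synthetic(real_clip))  # -> False (about 10 blinks/minute)
print(looks_synthetic(fake_clip))  # -> True  (no blinks at all)
```

As the researchers note, such a tell is fragile: once generators are trained on footage that includes closed eyes, the blink rate normalizes and this check stops working.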
Video: How is disinformation evolving due to AI?
Christina Hellberg, journalist and fact checker (online)
Tools for educational practice
In particular, we find the following tools and illustrations useful for embedding in learning about AI, about the digital representation of reality, and about information disorder:
FaceApp
This photo-editing tool for Android and iPhone can be used to alter a person's appearance, including their age and gender (Ewe, 2021).
This person does not exist
The one-off website, www.thispersondoesnotexist.com, features photos created with the newest AI technology (generative adversarial networks). Every time the site is refreshed, a realistic-looking artificial face appears. Former Uber software engineer Phillip Wang created the page to raise awareness about the potential damage this technology can cause, as it can be employed to create deep fakes that spread misinformation.
Human vs. AI test
Kazimierz Rajnerowicz created a test to see whether internet users are able to pick up on clues and recognize photos, artwork, music, and texts created by Artificial Intelligence.
Deep Reckonings
A series of explicitly marked deep-fake videos that imagine the most morally courageous versions of the most controversial public figures, created by Stephanie Lepp.
Many fake pictures and videos are produced with the help of generative adversarial networks (GANs). Two neural networks are trained against each other: one generates candidate images, while the other tries to tell them apart from real ones, which pushes the generator toward ever more realistic results. While we present here the negative application of this technology, we must also mention that it is very useful in scientific contexts: for example, it has been used in astronomy to model the distribution of dark matter and in medicine to detect glaucoma.
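The adversarial idea can be made concrete with a toy sketch that uses single numbers instead of images. In this illustration (all names and hyperparameters are our own choices, not from any real deep-fake system), the "generator" g(z) = a·z + b tries to imitate data drawn from a normal distribution centred on 4, while the "discriminator" is a tiny logistic regression estimating the probability that a sample is real.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-np.clip(t, -30, 30)))

def real_samples(n):
    return rng.normal(4.0, 0.5, n)             # the "real" distribution

def fake_samples(params, z):
    return params[0] * z + params[1]            # generator output

def d_prob(params, x):
    return sigmoid(params[0] * x + params[1])   # discriminator's p(real|x)

gen = np.array([1.0, 0.0])    # generator starts far from the real data
disc = np.array([0.0, 0.0])   # discriminator starts uninformed

for step in range(2000):
    real = real_samples(64)
    z = rng.normal(0.0, 1.0, 64)
    fake = fake_samples(gen, z)

    # Discriminator step: cross-entropy gradient pushes p(real) -> 1
    # on real data and -> 0 on generated data.
    pr, pf = d_prob(disc, real), d_prob(disc, fake)
    disc -= 0.05 * np.array([np.mean((pr - 1) * real) + np.mean(pf * fake),
                             np.mean(pr - 1) + np.mean(pf)])

    # Generator step: nudge (a, b) so the discriminator rates the fakes
    # as real (a finite-difference gradient keeps the sketch simple).
    def score(params):
        return np.mean(np.log(d_prob(disc, fake_samples(params, z)) + 1e-9))
    grad = np.array([(score(gen + 1e-3 * e) - score(gen - 1e-3 * e)) / 2e-3
                     for e in np.eye(2)])
    gen += 0.05 * grad

# After training, generated samples cluster near the real mean of 4.
print(np.mean(fake_samples(gen, rng.normal(0.0, 1.0, 1000))))
```

Real GANs apply the same tug-of-war to millions of pixels with deep convolutional networks, but the dynamic is the one above: each improvement in the detector forces the forger to improve too.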
References
- Illustration: Felix Kumpfe/Atelier Hurra
- Ewe, K. (2021). Young Female Japanese Biker Turns Out To Be 50-Year-Old Man With FaceApp. Vice. https://www.vice.com/en/article/y3g88m/viral-japanese-biker-transformation-woman-faceapp
- Li, Y.; Chang, M.; Lyu, S. (2018). In Ictu Oculi: Exposing AI Created Fake Videos by Detecting Eye Blinking. 2018 IEEE International Workshop on Information Forensics and Security (WIFS), pp. 1-7. https://doi.org/10.1109/WIFS.2018.8630787
Valentina Vivona
Researcher at the Osservatorio Balcani Caucaso Transeuropa (OBCT), a think tank focused on South-East Europe, Turkey and the Caucasus, located in Trento (Italy).
Handbook for Facilitators: Learning the Digital
This text was published in: M. Oberosler (ed.), E. Rapetti (ed.), N. Zimmermann (ed.), G. Pirker, I. Carvalho, G. Briz, V. Vivona (2021/22). Learning the Digital. Competendo Handbook for Facilitators.
Created in the frame of the project DIGIT-AL - Digital Transformation Adult Learning for Active Citizenship.