Cheapfakes

Britt Paris, Joan Donovan
When asked about visual misinformation, most people might think of deepfakes: visual content that appears authentic but has in fact been synthesized using powerful AI algorithms. Deepfakes are typically created to defame individuals, such as celebrities or politicians, or to spread misinformation about matters of politics or national security. However, the majority of visual misinformation spread across social media platforms takes a simpler form of deception: the cheapfake. A cheapfake is forged media produced with inexpensive, non-AI techniques, that is, without deep learning. Cheapfakes can be created with or without modern, easily accessible multimedia editing tools such as Adobe Photoshop or GIMP. Even non-technical computer users can craft cheapfake visual content by recycling genuine (unedited) old photographs or videos from the internet and presenting them, paired with out-of-context or false textual information, as evidence of a recent event. Cheapfakes thus differ from deepfakes, which are generated with modern, sophisticated deep-learning techniques.

One use of cheapfakes is to spread misinformation by deliberately altering the context of news captions: either the image accompanying a genuine news story is swapped for another, or the image is left unchanged and the associated caption is altered. For example, shortly after the 2015 earthquake in Nepal, an image of two children, a brother and sister, went viral online with the claim that it had been captured in Nepal. The photograph was actually taken in Vietnam in 2007. The content itself was not altered; it was simply presented in the wrong context.