Deepfakes are one of the most recent developments in misinformation technology and are capable of superimposing one person’s face onto another in video format. The potential of this technology to defame and cause harm is clear. However, the grave concerns expressed about deepfakes are rarely accompanied by empirical evidence. We present a scoping review of the existing empirical studies investigating the effects of viewing deepfakes on people’s beliefs, memories, and behaviour. Five databases were searched, producing an initial sample of 2004 papers, from which 22 relevant papers were identified, varying in design and methods. Overall, we found that the early studies on this topic have often produced inconclusive findings regarding the existence of uniquely persuasive or convincing effects of deepfake exposure. Moreover, many experiments demonstrated poor methodology and did not include a non-deepfake comparator (e.g., text-based misinformation). We conclude that speculation and scaremongering about dystopian uses of deepfake technologies have far outpaced experimental research that assesses these harms. We close by offering insights on how to conduct improved empirical work in this area.