“…Despite these technical solutions for detecting synthetic media and approaches to educating humans in recognizing machine-manipulated media (Groh et al 2019), a further, stricter idea is to limit the availability of trained generative models. Against this background, it is astounding how unquestioningly papers have been published in recent years describing leap innovations in the generation of fake media, especially videos, although many research groups, for instance the one behind Face2Face, did not release their code (Fried et al 2019; Ovadya and Whittlestone 2019; Thies et al 2015, 2016, 2018, 2019). Synthetic videos, no matter whether they are generated through Face2Face, DeepFakes, FaceSwap or NeuralTextures, can have all sorts of negative consequences, from harm to individuals to threats to national security, the economy, and democracy (Chesney and Citron 2018).…”