“…Notwithstanding the above, companies can cause massive job losses through AI implementation (Frey and Osborne 2013), conduct unmonitored AI experiments on society without informed consent (Kramer, Guillory, and Hancock 2014), suffer from data breaches (Schneier 2018), deploy unfair, biased algorithms (Eubanks 2018), provide unsafe AI products (Sitawarin et al. 2018), use trade secrets to disguise harmful or flawed AI functionalities, rush immature AI applications to market, and more. Furthermore, criminal or black-hat hackers can use AI to tailor cyberattacks, steal information, attack IT infrastructures, rig elections, spread misinformation (for example, through deepfakes), use voice synthesis technologies for fraud or social engineering (Bendel 2017), or disclose personal traits that are actually secret or private via machine learning applications (Kosinski and Wang 2018; Kosinski, Stillwell, and Graepel 2013; Kosinski et al. 2015). All in all, only a very small number of papers has been published on the misuse of AI systems, even though those papers show impressively what massive damage such systems can do (Brundage et al. 2018; King et al. 2019; O'Neil 2016).…”