The synthetization of human voices
2017 | DOI: 10.1007/s00146-017-0748-x

Cited by 21 publications (16 citation statements) | References 8 publications
“…Notwithstanding the above, companies can cause massive job losses due to AI implementation (Frey and Osborne 2013), conduct unmonitored AI experiments on society without informed consent (Kramer, Guillory, and Hancock 2014), suffer from data breaches (Schneier 2018), use unfair, biased algorithms (Eubanks 2018), provide unsafe AI products (Sitawarin et al 2018), use trade secrets to disguise harmful or flawed AI functionalities, rush to put immature AI applications on the market, and more. Furthermore, criminal or black-hat hackers can use AI to tailor cyberattacks, steal information, attack IT infrastructures, rig elections, spread misinformation, for example through deepfakes, use voice synthesis technologies for fraud or social engineering (Bendel 2017), or disclose personal traits that are actually secret or private via machine learning applications (Kosinski and Wang 2018; Kosinski, Stillwell, and Graepel 2013; Kosinski et al 2015). All in all, only a very small number of papers are published about the misuse of AI systems, even though they impressively show what massive damage can be done with those systems (Brundage et al 2018; King et al 2019; O'Neil 2016).…”
Section: AI in Practice, 3.1 Business versus Ethics (mentioning)
confidence: 99%
“…In the overall view, machine learning technologies make it possible to automatically create any media, be it images (Karras et al 2017), videos (Thies et al 2016), audio recordings (Bendel 2017) or texts (Radford et al 2019a). The quality of the media created is constantly improving, so that previously accepted principles, such as "seeing is believing" or "hearing is believing", have to be abandoned.…”
Section: Synthetic Media (mentioning)
confidence: 99%
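The statement above notes that machine learning now makes the automatic generation of audio broadly accessible. As a rough illustration of how low the barrier has become, the following minimal Python sketch synthesizes speech from text with the open-source Coqui TTS library; the package, the pretrained model identifier and the two API calls reflect that project's documented usage and are assumptions of this sketch, not part of the cited works.

# Minimal text-to-speech sketch (assumes the open-source Coqui "TTS" package
# is installed, e.g. via "pip install TTS", and can download the pretrained
# LJSpeech model named below).
from TTS.api import TTS

# Load a pretrained single-speaker English model (weights are fetched on first use).
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC", progress_bar=False)

# Turn arbitrary text into a WAV file; a handful of lines suffices to produce
# synthetic speech, which is the accessibility point the quoted statement makes.
tts.tts_to_file(
    text="Machine learning can turn any written sentence into audio.",
    file_path="synthetic_speech.wav",
)

A sketch like this yields generic synthetic speech; imitating a specific person's voice additionally requires speaker-adaptation or voice-cloning models, which is why the works cited here treat the technology as a dual-use concern.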
“…AI is utilized to sharpen techniques for committing traditional cybercrimes such as financial fraud, cyberterrorism and cyberextortion. For example, when hackers attempt voice phishing, they can deceive victims by using realistically imitated voices of the victims' family or friends [23].…”
Section: A. AI Security Threats and Crime (mentioning)
confidence: 99%