Published: 2023
DOI: 10.3390/jimaging9010018
Deepfakes Generation and Detection: A Short Survey

Abstract: Advancements in deep learning techniques and the availability of free, large databases have made it possible, even for non-technical people, to either manipulate or generate realistic facial samples for both benign and malicious purposes. DeepFakes refer to face multimedia content that has been digitally altered or synthetically created using deep neural networks. The paper first outlines the readily available face editing apps and the vulnerability (or performance degradation) of face recognition systems un…

Cited by 36 publications (11 citation statements)
References: 168 publications (120 reference statements)

Citation statements (ordered by relevance):
“…On the flip side, TTI systems can be used for malicious purposes. In the realm of misinformation and disinformation, players such as hyper-partisan media, authoritarian regimes, state disinformation actors, and cyber-criminals have been identified as potential malicious users [4,5,14]. "Information operations" [107] are broadly acknowledged as a malicious use case.…”
Section: Users (mentioning)
confidence: 99%
“…These risks are particularly acute for women and public figures, who face character assassination through fake news or deepfake pornographic content [57,106,121,172]. Moreover, the destabilising potential of generative AI, such as providing visual legitimacy to populist or nationalist conspiracies and fake news [5,29,100,171], should not be overlooked. It is crucial to recognise that while all media consumers are vulnerable to these harms, those with less societal power to contest falsehoods - people of colour, women, LGBTQ+ communities [121] - are particularly at risk.…”
Section: Affected Parties (mentioning)
confidence: 99%