Proceedings of the 2022 International Conference on Multimodal Interaction
DOI: 10.1145/3536221.3558175
On the Horizon: Interactive and Compositional Deepfakes

Cited by 11 publications (4 citation statements)
References 18 publications
“…If we do not develop effective ways to distinguish credible text from outright lies, we will likely lose faith in many things written on the Internet. This is related to the problem with the so-called deepfakes affecting images, audio, and video content, which at worst can lead to a “post-epistemic world where it is difficult or impossible to distinguish fact from fiction” (Horvitz, 2022).…”
Section: How the Usage of LLMs Will Affect Writing (mentioning; confidence: 99%)
“…Specifically, several works have identified that language models can be a secure, efficient, and effective means for producing content for disinformation operations (Radford et al., 2019; Buchanan et al., 2021; Bommasani et al., 2021, §5.2). Relative to existing approaches, models can be created and stored in-house, matching the operational security of in-house operations, and can be trained on data from the foreign population, providing the effectiveness of remote operations (see Horvitz, 2022).…”
Section: Disinformation (mentioning; confidence: 99%)
“…If they entrain to synthetic models they may interact with, does this have any impact on their own behavior? It is important for both researchers and developers of this technology to devise ways to mitigate these risks, such as those suggested in [Hor22].…”
Section: Broader Impact (mentioning; confidence: 99%)