2019
DOI: 10.3758/s13414-019-01773-w

Can people detect errors in shadows and reflections?

Abstract: The increasing sophistication of photo-editing software means that even amateurs can create compelling doctored images. Yet recent research suggests that people’s ability to detect image manipulations is limited. Given the prevalence of manipulated images in the media, on social networking sites, and in other domains, the implications of mistaking a fake image as real, or vice versa, can be serious. In seven experiments, we tested whether people can make use of errors in shadows and reflections to determine wh…

Cited by 11 publications (21 citation statements); references 46 publications (93 reference statements).
“…Our results are at odds with the commonly held view in media forensics that ordinary people have extremely limited ability to detect media manipulations. Past work in the cognitive science of media forensics has demonstrated that people are not good at perceiving and reasoning about shadow, reflection, and other physical implausibility cues (9–12). On first glance, deepfakes and other algorithmically generated images of people (e.g., images generated by StyleGAN) look quite realistic (35).…”
Section: Discussion (mentioning, confidence: 99%)
“…Recent advances in training neural networks for computer vision reveal that algorithms are capable of surpassing the performance of human experts in some complex strategy games (5, 6) and medical diagnoses (7, 8), so we might expect algorithms to be similarly capable of outperforming people at deepfake detection. Indeed, such computational methods often surpass human performance in detecting physical implausibility cues (9), such as geometric inconsistencies of shadows, reflections, and distortions of perspective (10–12). Similarly, face recognition algorithms often outperform forensic examiners (who are significantly better than ordinary people) at identifying whether pairs of face images show the same or different people (13).…”
Mentioning (confidence: 99%)
“…Focusing only on the manipulated image trials, we examined how three factors—manipulation type, age, and gender—affect people’s ability to detect and locate manipulations. Although previous research has shown that performance can vary across different types of manipulation (Nightingale et al., 2017, 2019), the effect of age remains unknown. In addition, previous findings have been mixed in terms of the effect of gender on detecting and locating manipulations, and the possible interaction between gender and age is unknown, therefore we also include gender in the analysis.…”
Section: Results (mentioning, confidence: 99%)
“…These results suggest that it might be possible to design a training initiative to encourage greater use of the three main strategies that we found to be associated with improved discriminability without a larger bias to respond “manipulated”—(a) searching for photometric inconsistencies, (b) searching for cloning, and (c) using careful and applied attention. In addition, given the evidence on people’s limited ability to determine whether or not shadows are consistent with a single light source (Farid & Bravo, 2010; Nightingale et al, 2019), it might be useful to deter people from attempting to use this strategy when trying to decide if an image is real or fake.…”
Section: Results (mentioning, confidence: 99%)
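
The excerpt above refers to the single-light-source shadow constraint (Farid & Bravo, 2010): in an image of a scene lit by one point light, the line through any point on an object and the corresponding point on its cast shadow must pass through the image of the light source, so all such object-shadow lines should intersect at a single common point. As a purely illustrative sketch of that geometric check (not code from any of the cited papers; the point pairs and the pixel tolerance below are hypothetical), a least-squares intersection test might look like this:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two 2D image points (cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def shadows_consistent(pairs, tol=5.0):
    """Test whether object-shadow point pairs admit a single light source.

    Each (object_point, shadow_point) pair defines a line in the image;
    under a single point light, all such lines must meet at one common
    point, the image of the light source. We find the best common
    intersection by least squares and check the residual distances.
    """
    lines = np.array([line_through(o, s) for o, s in pairs])
    # The common intersection x minimizes ||lines @ x|| in homogeneous
    # coordinates: the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(lines)
    x = vt[-1]
    if abs(x[2]) < 1e-9:
        # Intersection at infinity: parallel lines, i.e., a distant
        # (directional) light such as the sun; treat as consistent here.
        return True
    light = x[:2] / x[2]
    # Residual: perpendicular distance from the light point to each line.
    norms = np.linalg.norm(lines[:, :2], axis=1)
    dists = np.abs(lines @ np.append(light, 1.0)) / norms
    return bool(np.all(dists < tol))

# Hypothetical pairs whose constraint lines all pass through (0, 0):
pairs = [((10, 10), (20, 20)), ((30, 5), (60, 10)), ((5, 40), (10, 80))]
print(shadows_consistent(pairs))  # True: consistent with one light source
```

In practice the corresponding object and shadow points must be marked by hand; the cited work suggests that people struggle to perform this kind of geometric consistency reasoning perceptually, which is why an explicit computation like this one can help.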
“…Admittedly, in this digital age, photographs and other visual records can be more readily altered (e.g., Storm & Soares, in press; Van House, 2011). Interestingly, at least when looking at generic photographs of scenes (rather than personal ones), research participants are somewhat able to detect doctoring (e.g., Nightingale et al., 2017, 2019). However, that does not necessarily mean that their expectations regarding the authenticity of photographs from their personal past are called into question.…”
Section: Reconstruction Accounts and Their Limitations (mentioning, confidence: 99%)