2023
DOI: 10.1038/s41598-022-25268-1
Looking at faces in the wild

Abstract: Faces are key to everyday social interactions, but our understanding of social attention is based on experiments that present images of faces on computer screens. Advances in wearable eye-tracking devices now enable studies in unconstrained natural settings but this approach has been limited by manual coding of fixations. Here we introduce an automatic ‘dynamic region of interest’ approach that registers eye-fixations to bodies and faces seen while a participant moves through the environment. We show that just…
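The abstract's core idea, registering each gaze sample to a dynamic region of interest (a detected face or body box that moves frame to frame), can be sketched as a simple per-frame containment test. This is an illustrative sketch only, not the authors' implementation: the `(x, y, width, height)` box format, the function names, and the face-before-body precedence rule are all assumptions.

```python
# Minimal sketch: classify a gaze point against per-frame detector output.
# Assumes face/body bounding boxes as (x, y, width, height) in pixels and
# gaze coordinates in the same image frame; these conventions are
# illustrative, not taken from the paper.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x, y, width, height)

def point_in_box(gx: float, gy: float, box: Box) -> bool:
    """True if the gaze point (gx, gy) falls inside the bounding box."""
    x, y, w, h = box
    return x <= gx <= x + w and y <= gy <= y + h

def label_fixation(gx: float, gy: float,
                   face_boxes: List[Box],
                   body_boxes: List[Box]) -> str:
    """Assign one gaze sample to 'face', 'body', or 'background'.

    Faces are tested first because a face box typically lies inside
    the corresponding body box, so the more specific label wins.
    """
    if any(point_in_box(gx, gy, b) for b in face_boxes):
        return "face"
    if any(point_in_box(gx, gy, b) for b in body_boxes):
        return "body"
    return "background"
```

Running this over every video frame and aggregating the labels would yield the proportion of time spent fixating faces, the measure the citing studies below rely on. For example, `label_fixation(110, 60, [(100, 50, 40, 40)], [(90, 40, 80, 200)])` returns `"face"`.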

Cited by 11 publications (8 citation statements) · References 84 publications
“…Using recent mobile eye-tracking and automatic processing methods of data in the wild (Varela et al., 2023), we observed children's gaze in a natural and familiar context: They conversed with one of their parents at home, playing a weakly structured word-guessing game that afforded a spontaneous conversation. The results strongly supported the hypothesis: Children, including those in early middle childhood, did not show a noticeable difference compared to adults regarding how much they looked at the interlocutor's face.…”
Section: Discussion
confidence: 99%
“…Recent technological advances have facilitated the study of gaze in face-to-face interactions and across increasingly ecologically valid settings and contexts (Pfeiffer et al., 2013; Ho et al., 2015; Risko et al., 2016; Dalmaso et al., 2020; Hessels, 2020). These advances include mobile eye-tracking systems and, more recently, combining these systems with robust automatic detection of social content, namely faces, from videos recorded in unconstrained contexts (Deng, Guo, Ververas, Kotsia, & Zafeiriou, 2020; Varela, Towler, Kemp, & White, 2023).…”
Section: Measuring Children's Gaze in the Wild
confidence: 99%
“…Vision-language models trained on multi-modal input from naturalistic head-mounted camera footage have even been used to examine how infants' early-life perceptual experiences translate to their semantic knowledge of the world [67]. Similar approaches may hold potential for understanding whether single-face codes, capable of performing the entire range of face processing tasks, can be derived from naturalistic experience with faces [83,107].…”
Section: Discussion
confidence: 99%
“…Movies are one source of dynamic information, with the Deep Video Understanding [108] challenge pointing to a method for creating data sets containing visual and semantic information in videos. Vong et al. [67] demonstrated collecting visual and semantic data from a first-person perspective (see also [83,107]). With advancements in generative AI, we can now also create realistic synthetic imagery to support the development and testing of face-processing models.…”
Section: Discussion
confidence: 99%
“…Even though faces routinely attract saccades during free viewing, observers consistently differ in the degree of this effect for static scenes (Broda & de Haas, 2022a; de Haas et al., 2019; Guy et al., 2019; Linka et al., 2022; Peterson & Eckstein, 2013), videos (Broda & de Haas, 2022b; Rubo & Gamer, 2018), or real-world interactions (Guy & Pertzov, 2023; Peterson et al., 2016; Rubo et al., 2020; Varela et al., 2023). In static scene viewing, these differences are evident from the first saccade after image onset, showing that individuals consistently differ in early face detection during free viewing (Broda & de Haas, 2022b, 2022a; de Haas et al., 2019; Linka & de Haas, 2020).…”
Section: Introduction
confidence: 93%