2017
DOI: 10.1167/17.9.17

Comparing averaging limits for social cues over space and time

Abstract: Observers are able to extract summary statistics from groups of faces, such as their mean emotion or identity. This can be done for faces presented simultaneously and also from sequences of faces presented at a fixed location. Equivalent noise analysis, which estimates an observer's internal noise (the uncertainty in judging a single element) and effective sample size (ESS; the effective number of elements being used to judge the average), reveals what limits an observer's averaging performance. It has recentl…
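For readers unfamiliar with equivalent noise analysis, the standard model used in this literature (following Dakin, 2001) predicts the averaging threshold from the two quantities named in the abstract: threshold ≈ sqrt((σ_int² + σ_ext²) / N_eff), where σ_ext is the external (stimulus) variability. The sketch below is an illustration only, not the authors' code; the threshold values and noise levels are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def equivalent_noise_model(sigma_ext, sigma_int, n_eff):
    """Standard equivalent-noise prediction of an averaging threshold.

    sigma_ext : external (stimulus) standard deviation
    sigma_int : internal noise (uncertainty in judging a single element)
    n_eff     : effective sample size (elements effectively averaged)
    """
    return np.sqrt((sigma_int**2 + sigma_ext**2) / n_eff)

# Hypothetical thresholds measured at several external-noise levels
# (illustrative values only, e.g. degrees of head rotation).
sigma_ext = np.array([0.0, 2.0, 4.0, 8.0, 16.0, 32.0])
thresholds = np.array([1.1, 1.3, 1.8, 3.0, 5.6, 11.0])

(sigma_int_hat, n_eff_hat), _ = curve_fit(
    equivalent_noise_model, sigma_ext, thresholds, p0=[1.0, 2.0]
)
print(f"internal noise ≈ {sigma_int_hat:.2f}, effective sample size ≈ {n_eff_hat:.1f}")
```

Fitting this function to thresholds measured across external-noise levels is what yields the internal-noise and ESS estimates that the paper compares across spatial and temporal presentation.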

Cited by 12 publications (12 citation statements) | References 36 publications
“…An alternative hypothesis inspired by findings about perceptual decision making is that the visual system could compute a spatial average promptly for each temporal frame and then estimate the overall average across all temporal frames. This idea is consistent with psychophysical data showing that humans are capable of extracting ensembles with surprising speed and accuracy [13,42], even if they are composed of higher-order visual features including facial expressions [7] and head direction [37]. As for spatial average orientation, Dakin (2001) used spatially distributed visual stimuli with durations of only 100 ms and obtained average-orientation discrimination thresholds similar to our results [13].…”
Section: Discussion (supporting)
confidence: 84%
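A minimal sketch of the two-stage scheme described in this excerpt, assuming equal numbers of items per frame (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stimulus: 8 temporal frames, each containing 6 items
# (e.g. head directions in degrees), drawn around a true mean of 10.
frames = rng.normal(loc=10.0, scale=4.0, size=(8, 6))

spatial_means = frames.mean(axis=1)      # stage 1: average over space within each frame
overall_estimate = spatial_means.mean()  # stage 2: average the frame means over time
```

With equal items per frame the result equals the grand mean of all items; the substance of the hypothesis is that the two stages may be limited by different internal noise and sampling constraints.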
“…Figure 2b shows a replot of the data as a function of temporal SD. We found that if the temporal SD is small, discrimination thresholds increase as a function of spatial SD, following the threshold-versus-noise (TvN) function often reported in previous studies [13,37]. If the temporal SD is large, however, thresholds remain nearly constant.…”
Section: Procedure (supporting)
confidence: 84%
“…Temporal integration of visual features into ensembles has been observed in many studies (Chong & Treisman, 2003; Haberman, Harp, & Whitney, 2009; Albrecht & Scholl, 2010; Whiting & Oriet, 2011; Hubert-Wallander & Boynton, 2015; Oriet & Hozempa, 2016). Moreover, the integration efficiency of temporally presented visual items can be even higher than, or at least equal to, that of spatially presented items (Florey, Dakin, & Mareschal, 2017; Gorea, Belkoura, & Solomon, 2014). Our results with the FDL method are in seeming contrast with these studies.…”
Section: Discussion (contrasting)
confidence: 79%
“…Previous studies of ensemble perception involve integrating features over time (Albrecht & Scholl, 2010; Joo, Shin, Chong, & Blake, 2009; Yamanashi Leib, Fischer, Liu, Qiu, Robertson, & Whitney, 2014), as well as over space (Ariely, 2001; Chong & Treisman, 2003). According to two recent studies (Florey, Dakin, & Mareschal, 2017; Gorea, Belkoura, & Solomon, 2014), these two types of integration are similar in efficiency and may share common sampling mechanisms.…”
Section: Introduction (mentioning)
confidence: 99%