2015
DOI: 10.3758/s13428-015-0641-9

The MR2: A multi-racial, mega-resolution database of facial stimuli

Abstract: Faces impart exhaustive information about their bearers, and are widely used as stimuli in psychological research. Yet many extant facial stimulus sets have substantially less detail than faces encountered in real life. In this paper, we describe a new database of facial stimuli, the Multi-Racial Mega-Resolution database (MR2). The MR2 includes 74 extremely high resolution images of European, African, and East Asian faces. This database provides a high-quality, diverse, naturalistic, and well-controlled facial…
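Because the MR2 supplies images at far higher resolution than typical display hardware, a common first step is to load and downsample them before presentation. The Python sketch below illustrates that step only; the directory layout, file names, grayscale conversion, and target size are assumptions for illustration and are not taken from the MR2 documentation.

```python
# Minimal sketch: loading MR2 images and resizing them for on-screen presentation.
# The folder path, file extension, and target size are hypothetical.
from pathlib import Path
from PIL import Image

MR2_DIR = Path("stimuli/mr2")   # hypothetical location of the downloaded image set
TARGET_SIZE = (600, 800)        # hypothetical presentation size (width, height)

def load_mr2_stimuli(directory: Path, size=TARGET_SIZE):
    """Load every image in `directory`, convert to grayscale, and resize.

    Returns a dict mapping file stem -> PIL.Image. Grayscale conversion is an
    assumption; the MR2 itself distributes color photographs.
    """
    stimuli = {}
    for path in sorted(directory.glob("*.jpg")):
        img = Image.open(path).convert("L")              # grayscale copy
        stimuli[path.stem] = img.resize(size, Image.LANCZOS)
    return stimuli

if __name__ == "__main__":
    faces = load_mr2_stimuli(MR2_DIR)
    print(f"Loaded {len(faces)} face stimuli")
```

In practice one would keep the full-resolution originals untouched and write the downsampled copies to a separate folder, so the high-detail versions remain available for other uses.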

Cited by 107 publications (89 citation statements)
References 52 publications (56 reference statements)
“…In this study, two different stimulus sets were used: a) A set of 16 real face images, and 80 images of objects from nonface categories (fruits, bodies, gadgets, hands, and scrambled images) (Freiwald and Tsao, 2010; Ohayon et al., 2012; Tsao et al., 2006). b) A set of 2100 images of real faces from multiple face databases, FERET face database (Phillips et al., 2000; Phillips et al., 1998b), CVL face database (Solina et al., 2003), MR2 face database (Strohminger et al., 2016), PEAL face database (Gao et al., 2008), AR face database (Martinez and Benavente, 1998), Chicago face database (Ma et al., 2015) and CelebA database (Yang et al., 2015). 17 online photos of celebrities were also included.…”
Section: Behavioral Task and Visual Stimuli (mentioning)
confidence: 99%
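The excerpt above pools roughly 2,100 face images drawn from seven databases. As an illustration only, the sketch below shows one way such a pool might be assembled; the folder names, file extensions, sample size handling, and fixed random seed are assumptions, not details from the cited paper, and each database must be obtained under its own license.

```python
# Minimal sketch of pooling face images from several databases into one
# reproducible stimulus list. Directory names are hypothetical.
import random
from pathlib import Path

DATABASE_DIRS = {                       # hypothetical local folders, one per database
    "FERET": Path("faces/feret"),
    "CVL": Path("faces/cvl"),
    "MR2": Path("faces/mr2"),
    "PEAL": Path("faces/peal"),
    "AR": Path("faces/ar"),
    "Chicago": Path("faces/cfd"),
    "CelebA": Path("faces/celeba"),
}

def build_face_pool(dirs=DATABASE_DIRS, n_total=2100, seed=0):
    """Collect image paths from every database folder and sample n_total of them."""
    pool = [(name, p) for name, d in dirs.items()
            for p in sorted(d.glob("*.jpg")) + sorted(d.glob("*.png"))]
    random.Random(seed).shuffle(pool)   # fixed seed keeps the selection reproducible
    return pool[:n_total]
```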
“…The stimuli consisted of 16 grayscale photographic images of faces (Strohminger et al, 2016) and two colored cues (red and blue squares). Each of the colored cues indicates the task-relevant dimension of social hierarchy for the current trial.…”
Section: Stimulus (mentioning)
confidence: 99%
“…Study 1 was composed of two conditions, neuFace, in which objects were displayed as the frequent stimuli and neutral faces as the oddball stimuli, presented either with or without continuous flash suppression (e.g., "neuFace_noCFS"). Object and face images came from the set made available by Brady, Konkle, Alvarez, and Oliva (2008) and the "MR2 Face Database" (Strohminger et al., 2016), respectively. We selected 200 object images, excluding those that suggested animacy (e.g., dolls, toy animals) or any with a face-like appearance.…”
Section: Experimental Procedure: Study 1 (mentioning)
confidence: 99%
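The last excerpt describes an oddball design with frequent object images and occasional neutral-face oddballs, run with or without continuous flash suppression. A minimal sketch of how such a trial sequence could be generated follows; the trial count, oddball proportion, and data structure are illustrative assumptions, not values from the cited study.

```python
# Minimal sketch of an oddball trial sequence: frequent object images with
# occasional neutral-face oddballs, plus a flag for the CFS condition.
import random
from dataclasses import dataclass

@dataclass
class Trial:
    image: str        # path or identifier of the stimulus image
    is_oddball: bool  # True for a neutral-face oddball, False for a frequent object
    use_cfs: bool     # condition flag, e.g. False for a "no CFS" condition

def make_oddball_sequence(objects, faces, n_trials=200, p_oddball=0.2,
                          use_cfs=False, seed=0):
    """Return a shuffled list of Trials with roughly p_oddball face oddballs."""
    rng = random.Random(seed)
    n_odd = int(round(n_trials * p_oddball))
    trials = [Trial(rng.choice(faces), True, use_cfs) for _ in range(n_odd)]
    trials += [Trial(rng.choice(objects), False, use_cfs)
               for _ in range(n_trials - n_odd)]
    rng.shuffle(trials)
    return trials
```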