2018
DOI: 10.1007/978-3-030-01267-0_16
Visual Psychophysics for Making Face Recognition Algorithms More Explainable

Abstract: Scientific fields that are interested in faces have developed their own sets of concepts and procedures for understanding how a target model system (be it a person or algorithm) perceives a face under varying conditions. In computer vision, this has largely been in the form of dataset evaluation for recognition tasks where summary statistics are used to measure progress. While aggregate performance has continued to improve, understanding individual causes of failure has been difficult, as it is not always clea…

Cited by 33 publications (32 citation statements)
References 71 publications
“…In vision science, psychophysical experiments investigate the relationship between the intensity of a physical stimulus and human perception, by systematically varying the properties of the stimulus along one or more physical dimensions [72]. They are widely used in computer vision research to evaluate an algorithm's behaviour by measuring the exemplar-by-exemplar difficulty and modeling the algorithm's pattern of errors over different levels of object visibility and saliency, making the algorithm's classification inference more explainable (see examples in [19,27,28,34,63]). In addition, we assessed how the drug type affected the algorithm's classification confidence score.…”
Section: Evaluation Design
confidence: 99%
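The evaluation style described above, sweeping one stimulus dimension and recording the model's error pattern at each level, can be sketched as follows. This is a minimal toy illustration, not the paper's actual protocol: `toy_model`, the synthetic stimuli, and the use of additive noise as the stimulus dimension are all assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(x):
    # Hypothetical stand-in classifier: predicts class 1 if mean intensity > 0.
    return int(x.mean() > 0.0)

def item_response_curve(model_fn, stimuli, labels, noise_levels, rng):
    """Psychophysics-style evaluation: sweep one stimulus dimension
    (here, additive Gaussian noise strength) and record accuracy at
    each level, yielding an item-response curve for the model."""
    accuracies = []
    for sigma in noise_levels:
        correct = 0
        for x, y in zip(stimuli, labels):
            degraded = x + rng.normal(0.0, sigma, size=x.shape)
            correct += (model_fn(degraded) == y)
        accuracies.append(correct / len(stimuli))
    return np.array(accuracies)

# Toy stimuli: class 1 has positive mean intensity, class 0 negative.
labels = rng.integers(0, 2, size=200)
stimuli = [rng.normal(0.5 if y == 1 else -0.5, 0.1, size=16) for y in labels]

curve = item_response_curve(toy_model, stimuli, labels,
                            [0.0, 0.5, 1.0, 2.0, 4.0], rng)
print(curve)  # accuracy typically falls toward chance (~0.5) as noise grows
```

The resulting curve, rather than a single aggregate accuracy number, is what makes the failure pattern interpretable: one can read off the stimulus level at which the model's performance breaks down.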
“…Martínez-Plumed et al. (2019) used IRT to analyze the performance of machine learning classifiers in a supervised learning task. IRT has also been used to evaluate machine translation systems (Otani et al., 2016), speech synthesizers (Oliveira et al., 2020), and computer vision systems (RichardWebster et al., 2018).…”
Section: Related Work
confidence: 99%
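The IRT analyses cited above treat each classifier as a "respondent" and each test instance as an "item" with its own difficulty. A minimal sketch of this idea, assuming a standard 2-parameter logistic model with known item parameters and a simple grid-search maximum-likelihood ability estimate (the item values below are hypothetical, not from any of the cited studies):

```python
import numpy as np

def p_correct(theta, a, b):
    """2-parameter logistic IRT model: probability that a respondent
    with ability theta answers an item with discrimination a and
    difficulty b correctly."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def estimate_ability(responses, a, b, grid=np.linspace(-4, 4, 801)):
    """Grid-search maximum-likelihood ability estimate, treating a
    classifier's per-item correctness vector as IRT responses."""
    ll = np.array([
        np.sum(responses * np.log(p_correct(t, a, b)) +
               (1 - responses) * np.log(1.0 - p_correct(t, a, b)))
        for t in grid
    ])
    return grid[np.argmax(ll)]

# Hypothetical items: equal discrimination, increasing difficulty.
a = np.ones(6)
b = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
responses = np.array([1, 1, 1, 1, 0, 0])  # classifier solves easy items only
theta_hat = estimate_ability(responses, a, b)
```

The estimated ability lands between the hardest item answered correctly and the easiest item missed, which is exactly the per-item granularity that summary accuracy statistics discard.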
“…O'Toole et al. [40] demonstrated that machines were never less accurate than humans on face images of various quality. RichardWebster et al. [44] showed that observing human face recognition behavior in certain contexts can be used to retroactively explain why a face matcher succeeds or fails, leading to better model explainability. In the realm of biometrics, human saliency was found to complement algorithm saliency, making it beneficial to combine the two [38,49].…”
Section: Use Of Human Perception To Understand And Improve
confidence: 99%