1998
DOI: 10.6028/nist.ir.6264
The FERET evaluation methodology for face-recognition algorithms


Cited by 550 publications (758 citation statements)
References 10 publications (16 reference statements)
“…The system is tested on a range of images, which are much more heterogeneous than normally used in reports of automatic recognition systems (Phillips et al., 2000; Zhao et al., 2003). Using this realistic range of superficial image characteristics, not normally noticeable until one sees them all together, as in Figure 3, there is an immediate advantage for storing an average of learning images over storing them all individually.…”
Section: Discussion (mentioning)
confidence: 99%
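As a rough illustration of the averaging idea in the excerpt above, the sketch below compares storing one average template per identity against keeping every learning image. It assumes the images are already aligned, equally sized grayscale arrays; the function names and the pixel-distance similarity are illustrative choices, not taken from the cited work.

import numpy as np

def build_average_template(learning_images):
    """Collapse a person's learning images into a single average template.

    learning_images: iterable of equally sized 2-D grayscale arrays
    (alignment and resizing are assumed to have happened already).
    """
    stack = np.stack([img.astype(np.float64) for img in learning_images])
    return stack.mean(axis=0)

def match_score(probe, template):
    """Simple similarity: negative Euclidean distance between pixel vectors."""
    return -np.linalg.norm(probe.astype(np.float64).ravel() - template.ravel())

def identify(probe, identities, use_average=True):
    """Identify a probe either against per-identity averages or against
    every stored exemplar (best match wins in both cases).

    identities: dict mapping a label to a list of aligned grayscale arrays.
    """
    best_label, best_score = None, -np.inf
    for label, images in identities.items():
        if use_average:
            score = match_score(probe, build_average_template(images))
        else:
            score = max(match_score(probe, img) for img in images)
        if score > best_score:
            best_label, best_score = label, score
    return best_label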
“…However, it is a difficult problem to solve across a realistic variation in images. In the DARPA-sponsored FERET evaluation of face recognition systems (Phillips, Moon, Rizvi & Rauss, 2000), several algorithms performed well when matching two images of a face, taken in the same sitting, with the same camera, but varied expression. For example, recognition rates of 95% are reported for analyses based on Principal Components Analysis (Moghaddam, Nastar & Pentland, 1997; and see below) and on wavelet-based systems (Wiskott, Fellous, Kruger & von der Malsburg, 1997).…”
Section: Automatic Face Recognition (mentioning)
confidence: 99%
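A minimal PCA-based matcher in the spirit of the eigenface-style systems mentioned in the excerpt above could be set up as follows; the subspace dimensionality and the nearest-neighbour distance are illustrative assumptions, not the settings used by the FERET submissions.

import numpy as np

def fit_pca_subspace(gallery, n_components=50):
    """Learn a PCA ('eigenface'-style) subspace from gallery images.

    gallery: array of shape (n_images, n_pixels), one flattened face per row.
    Returns the mean face and the top principal axes.
    """
    mean_face = gallery.mean(axis=0)
    centered = gallery - mean_face
    # SVD of the centered data; rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:n_components]

def project(faces, mean_face, axes):
    """Project flattened faces into the PCA subspace."""
    return (faces - mean_face) @ axes.T

def rank1_match(probe, gallery, mean_face, axes):
    """Return the index of the gallery face closest to the probe in the subspace."""
    g = project(gallery, mean_face, axes)
    p = project(probe[None, :], mean_face, axes)
    dists = np.linalg.norm(g - p, axis=1)
    return int(np.argmin(dists))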
“…Then, the classification method briefly described in Section 2 was applied, and the classification accuracy of each dataset was recorded. The face datasets that were tested are FERET (Phillips et al., 1998, 2000), ORL (Samaria & Harter, 1994), JAFFE (Lyons et al., 1998), the Indian Face Dataset (Jain & Mukherjee, 2002), Yale B (Georghiades, Belhumeur, & Kriegman, 2001), and the Essex face dataset (Hond & Spacek, 1997; Spacek, 2002). The sizes and locations of the non-facial areas that were cut from the original images are described in Table 1, and the accuracy of automatic classification of these images is also specified in the table.…”
Section: Results (mentioning)
confidence: 99%
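The evaluation loop described in the excerpt above (cut a region out of each image, classify it, record per-dataset accuracy) can be sketched roughly as below; classify_image and the crop coordinates are placeholders for the cited paper's method and its Table 1 values, which are not reproduced here.

import numpy as np

def crop_region(image, top, left, height, width):
    """Cut a rectangular region (e.g. a non-facial area) out of an image."""
    return image[top:top + height, left:left + width]

def dataset_accuracy(images, labels, crop_box, classify_image):
    """Classify the cropped region of every image and report the accuracy.

    images: list of 2-D arrays; labels: ground-truth labels;
    crop_box: (top, left, height, width); classify_image: callable returning a label.
    """
    correct = 0
    for image, label in zip(images, labels):
        region = crop_region(image, *crop_box)
        if classify_image(region) == label:
            correct += 1
    return correct / len(images)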
“…The primary method of assessing the efficacy of face recognition algorithms and comparing the performance of the different methods is by using pre-defined and publicly available face datasets such as FERET (Phillips et al., 1998, 2000), ORL (Samaria & Harter, 1994), JAFFE (Lyons et al., 1998), the Indian Face Dataset (Jain & Mukherjee, 2002), Yale B (Georghiades, Belhumeur, & Kriegman, 2001), and the Essex face dataset (Hond & Spacek, 1997).…”
Section: Introduction (mentioning)
confidence: 99%
“…Two folders have been created for testing, each containing 20 test images. One of the folders contains gray-scale images and the second contains color facial images from the color FERET database (Phillips et al., 1998, 2000). Table 1 and Table 2 show the mean values of the measured quality metrics and the time taken for gray-scale and color images, respectively.…”
Section: Discussion (mentioning)
confidence: 99%
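The measurement loop from the last excerpt (mean quality metric and processing time over a folder of test images) might look along these lines; PSNR is used here only as a stand-in for whichever quality metrics the cited paper reports, and process is a placeholder for the algorithm under test.

import time
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def evaluate_folder(image_pairs, process):
    """Run `process` on each (reference, input) pair; return mean PSNR and mean time.

    image_pairs: iterable of (reference_image, input_image) arrays.
    """
    scores, times = [], []
    for reference, image in image_pairs:
        start = time.perf_counter()
        output = process(image)
        times.append(time.perf_counter() - start)
        scores.append(psnr(reference, output))
    return np.mean(scores), np.mean(times)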