2009 IEEE 12th International Conference on Computer Vision
DOI: 10.1109/iccv.2009.5459250

Attribute and simile classifiers for face verification

Cited by 1,215 publications (1,077 citation statements) · References 23 publications

“…Datasets We evaluate on three datasets: UT-Zap50K, as defined above, with concatenated GIST and color histogram features; the Outdoor Scene Recognition dataset [27] (OSR); and a subset of the Public Figures faces dataset [22] (PubFig). OSR contains 2,688 images (GIST features) with 6 attributes, while PubFig contains 772 images (GIST + Color features) with 11 attributes.…”
Section: Methods (mentioning)
confidence: 99%
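
The setup quoted above builds each image descriptor by concatenating a GIST vector with a color histogram. Below is a minimal sketch of that concatenation, assuming the GIST descriptor is precomputed; the helper names (`color_histogram`, `build_feature`), the 8-bins-per-channel histogram, and the 512-dimensional GIST size are illustrative assumptions, not details from the cited papers.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Per-channel color histogram, concatenated and L1-normalized.
    `image` is an HxWx3 uint8 array; bin count is an assumption."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(np.float64)
    return h / max(h.sum(), 1.0)

def build_feature(gist_vector, image):
    """Concatenate a precomputed GIST descriptor with the color histogram,
    yielding a single per-image feature vector."""
    return np.concatenate([np.asarray(gist_vector, dtype=np.float64),
                           color_histogram(image)])

# Toy usage: a 512-d stand-in for a real GIST descriptor plus a synthetic image.
rng = np.random.default_rng(0)
gist = rng.standard_normal(512)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
feat = build_feature(gist, img)
print(feat.shape)  # (536,) = 512 GIST dims + 3 channels x 8 bins
```
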
“…There have been many works on classifying attributes directly from images: face verification [12], demography from faces [10], gender estimation from face and body [18], describing faces and scenes [20], describing clothing [11], and describing texture [17]. A large portion of related research is also dedicated to recognising attributes from pedestrians and surveillance footage [29,1,21,13].…”
Section: Related Work (mentioning)
confidence: 99%
“…Some studies evaluated human performance in the task of visual concept classification. Kumar et al. [12] as well as Lin et al. [14] measured the human inter-coder agreement and compared experts against crowdsourcing annotators. Other papers reported the performance of humans and machine systems on some benchmark datasets.…”
Section: Human and Machine Performance in Visual and Auditory Recognition (mentioning)
confidence: 99%
“…In the field of multimedia analysis and retrieval, human performance in recognition tasks was reported from time to time [2,9,12,13,15,16,20-23], but has not been evaluated in a consistent manner. As a consequence, the quality of human performance is not exactly known and estimates exist only for a few recognition tasks.…”
Section: Introduction (mentioning)
confidence: 99%