2018 International Workshop on Biometrics and Forensics (IWBF)
DOI: 10.1109/iwbf.2018.8401568
Age and gender classification from ear images

Abstract: In this paper, we present a detailed analysis of extracting soft biometric traits, age and gender, from ear images. Although there have been a few previous works on gender classification using ear images, to the best of our knowledge, this study is the first work on age classification from ear images. In the study, we have utilized both geometric features and appearance-based features for ear representation. The utilized geometric features are based on eight anthropometric landmarks and consist of 14 distance m…
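A minimal sketch of the geometric-representation idea described in the abstract: distances computed between ear landmarks and normalized for scale. The landmark coordinates, the use of all pairwise distances, and the normalization step below are illustrative assumptions; the paper selects 14 specific distance measurements from eight anthropometric landmarks, which are not reproduced here.

```python
# Sketch only: geometric ear features from landmark coordinates.
# The landmark layout and distance selection are assumptions, not the paper's exact pipeline.
import itertools
import numpy as np

def geometric_features(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (8, 2) array of (x, y) ear landmark coordinates."""
    # Euclidean distance between every pair of landmarks.
    dists = np.asarray([
        np.linalg.norm(landmarks[i] - landmarks[j])
        for i, j in itertools.combinations(range(len(landmarks)), 2)
    ])
    # Normalize by the largest distance (roughly the ear height)
    # so the features are invariant to image scale.
    return dists / dists.max()

# Example with synthetic landmarks.
rng = np.random.default_rng(0)
feats = geometric_features(rng.uniform(0, 100, size=(8, 2)))
print(feats.shape)  # (28,) pairwise distances; the paper keeps 14 selected measurements
```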

Cited by 33 publications (28 citation statements). References 23 publications (67 reference statements).
“…According to the experiments on the FERET dataset, the best result is obtained using ResNet-152 features, with a mean absolute error (MAE) of 5.50. In [30], the authors present a study on age and gender classification from ear images. They employed both a geometric-based representation (distances between ear landmarks and area information) and an appearance-based representation (deep CNNs).…”
Section: Related Work
Confidence: 99%
“…They employed both a geometric-based representation (distances between ear landmarks and area information) and an appearance-based representation (deep CNNs). The authors conducted their experiments on an internal dataset and found that the appearance-based representation is more useful [30].…”
Section: Related Work
Confidence: 99%
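The appearance-based representation referred to above uses deep CNN features. Below is a minimal sketch, assuming PyTorch/torchvision (a recent version with the `weights` API), of extracting such features from an ear image with a pretrained ResNet-152 and attaching simple prediction heads. This illustrates the general approach only; the training setup, heads, and datasets of the cited works are not reproduced here.

```python
# Sketch only: appearance-based (deep CNN) ear features via a pretrained ResNet-152.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained backbone; replace the final classification layer with identity
# so the network outputs a 2048-dimensional feature vector.
backbone = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def ear_features(path: str) -> torch.Tensor:
    """Return a (2048,) feature vector for a single ear image file."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(img).squeeze(0)

# Simple heads on top of the features: age as regression (evaluated with MAE)
# and gender as binary classification. These heads are illustrative assumptions.
age_head = torch.nn.Linear(2048, 1)
gender_head = torch.nn.Linear(2048, 2)
```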