2020
DOI: 10.1109/access.2020.3024116

Deep Convolutional Neural Networks for Unconstrained Ear Recognition

Abstract: This paper employs state-of-the-art Deep Convolutional Neural Networks (CNNs), namely AlexNet, VGGNet, Inception, ResNet and ResNeXt in a first experimental study of ear recognition on the unconstrained EarVN1.0 dataset. As the dataset size is still insufficient to train deep CNNs from scratch, we utilize transfer learning and propose different domain adaptation strategies. The experiments show that our networks, which are fine-tuned using custom-sized inputs determined specifically for each CNN architecture, …
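The abstract describes transfer learning: ImageNet-pretrained CNNs are fine-tuned on EarVN1.0 using a custom input size chosen per architecture. Below is a minimal sketch of that kind of setup in PyTorch; the ResNet-50 backbone, the 240x180 input size, the 164-class head, and the optimizer settings are illustrative assumptions standing in for the paper's exact per-architecture configurations.

```python
# Hedged sketch: fine-tune an ImageNet-pretrained CNN for ear identification
# via transfer learning. Sizes and hyperparameters are illustrative only.
import torch
import torch.nn as nn
from torchvision import models, transforms

num_classes = 164          # assumed number of ear identities (subjects) in the dataset
input_size = (240, 180)    # assumed custom H x W; the paper tunes this per architecture

# Load a pretrained backbone and replace its classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Global average pooling makes the backbone tolerant of non-224x224 inputs,
# so a custom input size only changes the preprocessing pipeline.
preprocess = transforms.Compose([
    transforms.Resize(input_size),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Fine-tune the whole network with a small learning rate.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```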

Cited by 51 publications (37 citation statements)
References 59 publications
“…Consequently, we addressed the problem by placing the images into a fixed-size canvas determined specifically for each CNN architecture, where the aspect ratio of the original image was preserved. This proved to be a less distorting and more effective procedure for achieving better results, as reported in [12]. Moreover, we utilized the layer-wise adaptive large batch optimization technique called LAMB [13], which has demonstrated better performance and convergence speed for training deep networks.…”
Section: Introduction (mentioning)
confidence: 73%
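The fixed-canvas preprocessing quoted above resizes each image to fit the target resolution while keeping its aspect ratio, then pads the remainder. A minimal sketch of that idea is given below using Pillow; the 224x224 canvas and black padding are assumptions for illustration, not the exact settings of the cited work.

```python
# Hedged sketch: place an image on a fixed-size canvas without distorting
# its aspect ratio (resize to fit, then pad and center). Sizes are illustrative.
from PIL import Image

def to_canvas(img: Image.Image, canvas_size=(224, 224), fill=(0, 0, 0)) -> Image.Image:
    cw, ch = canvas_size
    w, h = img.size
    # Scale so the whole image fits inside the canvas.
    scale = min(cw / w, ch / h)
    new_w, new_h = int(round(w * scale)), int(round(h * scale))
    resized = img.resize((new_w, new_h), Image.BILINEAR)
    # Paste centered onto a blank canvas; the surrounding area stays as padding.
    canvas = Image.new("RGB", canvas_size, fill)
    canvas.paste(resized, ((cw - new_w) // 2, (ch - new_h) // 2))
    return canvas

# Example: letterbox an ear crop before feeding it to a CNN.
# padded = to_canvas(Image.open("ear.jpg"), canvas_size=(224, 224))
```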
“… [75] UERC, ResNet, KNN, 74.6; 6. [6] EarVN1.0, ResNeXt101, Softmax, 93.45; 7. [76] IITD-II and AMI, CNN, Softmax, IITD-II 97.36 and AMI 96.99; 8.…”
Section: Taxonomic Review on Ear Biometric (mentioning)
confidence: 99%
“…Their major disadvantage is that they employed existing methods for ear detection and recognition, and the work has limited novelty. In a recent study [6], the authors explored the use of deep learning models such as VGG, ResNeXt, and Inception. They employed various learning strategies such as feature extraction, fine-tuning, and ensemble learning.…”
Section: Taxonomic Review on Ear Biometric (mentioning)
confidence: 99%
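The ensemble strategy mentioned in the statement above can be realized, for example, as score-level fusion of several fine-tuned networks. The sketch below averages softmax outputs across models; the member models and the averaging rule are assumptions for illustration, not the specific ensemble described in the cited work.

```python
# Hedged sketch: score-level ensemble of several fine-tuned CNNs by
# averaging their softmax outputs. The member models are assumptions.
import torch
import torch.nn.functional as F

def ensemble_predict(models, batch):
    """Average class probabilities across models and return predicted identity IDs."""
    probs = None
    for m in models:
        m.eval()
        with torch.no_grad():
            p = F.softmax(m(batch), dim=1)
        probs = p if probs is None else probs + p
    probs /= len(models)
    return probs.argmax(dim=1)

# Example (assuming fine-tuned members):
# ids = ensemble_predict([resnext_model, vgg_model, inception_model], images)
```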
“…We opted for a different preprocessing procedure and experimentally investigated an approach to preserve the aspect ratios of the CT images. This procedure has proved to be very effective and results in improved overall performance (Alshazly et al., 2020). Extensive experiments and analysis of the diagnostic accuracy using standard evaluation metrics were conducted against five standard CNN models.…”
Section: Introduction (mentioning)
confidence: 99%