2023
DOI: 10.18196/jrc.v4i1.16808
Smart Attendance System based on improved Facial Recognition

Abstract: Nowadays, the fourth industrial revolution has achieved significant advances in high technology, in which artificial intelligence has developed vigorously. In practice, facial recognition is one of the most essential tasks in the field of computer vision, with potential applications ranging from security and attendance systems to intelligent services. In this paper, we propose an efficient deep learning approach to facial recognition. The paper utilizes the architecture of an improved FaceNet model based on MobileN…

Cited by 20 publications (12 citation statements)
References 43 publications
“…The conclusion discusses the importance of staying current with technology trends in education and highlights the advantages of using facial recognition for attendance tracking, citing it as the most affordable and flexible option. Dang (2023) discusses the development of an efficient deep learning approach for facial recognition, particularly focusing on the architecture of an improved FaceNet model based on the MobileNetV2 backbone with SSD subsection. The proposed model [6] utilizes depth-wise separable convolution to reduce model size and computational volume while maintaining high accuracy and processing speed.…”
Section: Flow Chart
confidence: 99%
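The depth-wise separable convolution credited above with reducing model size factors a standard convolution into a per-channel spatial filter followed by a 1x1 point-wise channel mix. A minimal sketch of the parameter savings, using hypothetical layer sizes (128 input channels, 256 output channels, 3x3 kernel) that are not taken from the cited model:

```python
# Illustrative parameter counts: standard vs. depth-wise separable convolution.
# Layer sizes are hypothetical, chosen only to show the savings ratio.

def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    # A standard k x k convolution mixes all input channels for every filter.
    return k * k * c_in * c_out

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    # Depth-wise step: one k x k filter per input channel,
    # followed by a 1 x 1 point-wise convolution that mixes channels.
    return k * k * c_in + c_in * c_out

c_in, c_out, k = 128, 256, 3
std = standard_conv_params(c_in, c_out, k)
sep = depthwise_separable_params(c_in, c_out, k)
print(std, sep, round(std / sep, 1))  # → 294912 33920 8.7
```

For these sizes the separable form needs roughly 8.7x fewer weights (and proportionally fewer multiply-adds), which is the trade MobileNet-family backbones make to keep accuracy while shrinking compute.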
“…In Fig. 3, in addition to depth-wise separable convolutions, linear bottlenecks and inverted residual blocks (shortcut links between the bottlenecks) are proposed for use in MobileNetV2 [34]. Since the input and output of a block in a conventional residual architecture typically have more channels than the intermediate layers, MobileNetV2's residual block inverts this design.…”
Section: B. MobileNetV2
confidence: 99%
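The inverted residual block described above can be sketched as expand (1x1, ReLU6) → depth-wise 3x3 (ReLU6) → linear 1x1 projection, with the shortcut joining the two narrow ends. A minimal NumPy sketch with random weights and hypothetical sizes (16 bottleneck channels, expansion factor 6) — a shape-level illustration, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def pointwise(x, w):
    # A 1x1 convolution is a matrix multiply over the channel axis:
    # (H, W, C_in) @ (C_in, C_out) -> (H, W, C_out).
    return x @ w

def depthwise3x3(x, w):
    # w: (3, 3, C); each channel is filtered independently (stride 1, 'same').
    p = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += p[i:i + x.shape[0], j:j + x.shape[1], :] * w[i, j]
    return out

def relu6(x):
    return np.clip(x, 0.0, 6.0)

def inverted_residual(x, t=6):
    # Expand -> depth-wise 3x3 -> linear bottleneck, with a shortcut
    # between the narrow (low-channel) ends of the block.
    c = x.shape[-1]
    w_exp = rng.normal(size=(c, c * t)) * 0.05
    w_dw = rng.normal(size=(3, 3, c * t)) * 0.05
    w_proj = rng.normal(size=(c * t, c)) * 0.05
    h = relu6(pointwise(x, w_exp))      # wide intermediate representation
    h = relu6(depthwise3x3(h, w_dw))
    h = pointwise(h, w_proj)            # linear bottleneck: no activation
    return x + h                        # shortcut links the bottlenecks

x = rng.normal(size=(8, 8, 16))
y = inverted_residual(x)
print(y.shape)  # → (8, 8, 16)
```

Note the inversion the quote describes: the intermediate layers are 6x wider than the block's input and output, the opposite of a conventional residual block.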
“…Therefore, the authors devise a segmentation model using FCN as the decoder and MobileNetV2 as the encoder to solve the semantic segmentation problem [34,35] and achieve efficient scene comprehension for autonomous driving. This combination allows the required tasks to be performed on embedded computers alone while still achieving real-time performance.…”
Section: Introduction
confidence: 99%
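The encoder-decoder pairing described above can be illustrated with shape arithmetic alone: the encoder reduces spatial resolution (32x here, a common choice), and an FCN-style decoder maps the encoder features to per-class scores and upsamples them back to the input size. The 1280-channel feature width and 19-class output below are illustrative assumptions, not details from the cited work:

```python
import numpy as np

def upsample_nearest(x, factor):
    # Nearest-neighbour upsampling of an (H, W, C) score map.
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

h = w = 224                                       # assumed input resolution
encoder_out = np.zeros((h // 32, w // 32, 1280))  # encoder features at 1/32 scale
n_classes = 19                                    # e.g. a street-scene label set
classifier = np.zeros((1280, n_classes))
scores = encoder_out @ classifier                 # 1x1 classifier as a matmul
seg = upsample_nearest(scores, 32)                # back to full resolution
print(seg.shape)  # → (224, 224, 19)
```

Real FCN decoders typically use learned (transposed-convolution) upsampling and skip connections; the sketch only shows why the combination is cheap enough for embedded hardware — all heavy computation happens at 1/32 of the input resolution.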
“…This is also a good model, but the SSD model has difficulty detecting small objects and needs a large amount of training data. In our recent paper [16], a combination of SSD-MobileNet-v2 [17,18] and a particle filter is proposed. The algorithm is applied to target humans with different colors and under full occlusion.…”
Section: Introduction
confidence: 99%
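A particle filter of the kind paired with SSD-MobileNet-v2 above maintains a weighted sample set over the target state and repeats predict → weight-by-detection → resample, which is what lets tracking continue through missed or occluded detections. A minimal 1-D sketch with illustrative motion and measurement models (not the cited paper's):

```python
import numpy as np

# Minimal 1-D particle filter for target tracking. The constant-velocity
# motion model and Gaussian measurement model are illustrative stand-ins.

rng = np.random.default_rng(42)
n_particles = 500
true_pos = 0.0
particles = rng.normal(0.0, 1.0, n_particles)

for step in range(20):
    true_pos += 1.0                                        # target moves
    particles += 1.0 + rng.normal(0.0, 0.2, n_particles)   # predict step
    z = true_pos + rng.normal(0.0, 0.5)                    # noisy detection
    weights = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)  # likelihood weights
    weights /= weights.sum()
    idx = rng.choice(n_particles, n_particles, p=weights)  # resample
    particles = particles[idx]

estimate = particles.mean()
print(round(estimate, 2), round(true_pos, 2))
```

In the tracking setting, `z` would come from the detector when it fires; on occluded frames the weighting step is simply skipped and the prediction step carries the particle cloud forward.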