2020
DOI: 10.12720/jait.11.3.143-148
Global Facial Recognition Using Gabor Wavelet, Support Vector Machines and 3D Face Models

Cited by 16 publications (6 citation statements). References 12 publications.
“…Then, as shown in Table 3, FaceNet plays a role in the feature extraction stage, where additional features are extracted and normalized. The input to this extraction method is three-channel (RGB) images, and the output is 128-dimensional vectors [31]. With its 22 layers, FaceNet can accurately and efficiently extract features from facial images, and its outputs can be trained into 128-dimensional embeddings [32].…”
Section: Experiments and Results
confidence: 99%
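The pipeline described above (RGB image in, L2-normalized 128-dimensional embedding out) can be sketched as follows. The `embed` function is a hypothetical stand-in for the real 22-layer FaceNet network; only the input/output shapes and the normalization step reflect the description in the text.

```python
import numpy as np

# Hypothetical stand-in embedder: any callable mapping an RGB image to a
# raw 128-dimensional feature vector. (The real FaceNet is a 22-layer CNN,
# not a fixed random linear map; this only mimics its interface.)
def embed(image_rgb: np.ndarray) -> np.ndarray:
    rng = np.random.default_rng(0)                  # fixed "weights" for the sketch
    w = rng.standard_normal((image_rgb.size, 128))
    return image_rgb.reshape(-1) @ w                # raw 128-d feature vector

def normalized_embedding(image_rgb: np.ndarray) -> np.ndarray:
    # FaceNet-style post-processing: L2-normalize so embeddings lie on the
    # unit hypersphere and can be compared by Euclidean distance.
    v = embed(image_rgb)
    return v / np.linalg.norm(v)

face = np.full((32, 32, 3), 0.5, dtype=np.float32)  # three-channel (RGB) input
e = normalized_embedding(face)
print(e.shape)  # (128,)
```

The normalization step is what makes distances between embeddings directly comparable across images.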
“…Using a Gabor wavelet filter and an SVM, the authors in [27] successfully built a 3D facial recognition system. The highest accuracy the proposed system achieved was 97.3%.…”
Section: Related Work
confidence: 99%
“…It has two major advantages: the ability to produce nonlinear decision boundaries with linear classifiers, and the ability of the kernel trick to apply classifiers to data with no obvious fixed-dimensional vector-space representation. In SVM modelling, a hyperplane is constructed as the decision boundary, creating a margin between the positive and negative classes [30,31].…”
Section: Support Vector Machine
confidence: 99%
“…The decision boundary of the classifier separates the regions classified as positive and negative. A classifier with a linear decision boundary is referred to as a linear classifier; one whose boundary depends nonlinearly on the data is known as a non-linear classifier [31,32]. The parameters of an SVM are the type of kernel function, the degree of the kernel function (d for a polynomial kernel; γ for the radial-basis kernel), and the regularisation constant C. To determine efficient parameters, we used the approach proposed in [34].…”
Section: Support Vector Machine
confidence: 99%
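The parameters listed above (kernel type, polynomial degree d, RBF γ, regularisation constant C) are commonly tuned by cross-validated grid search; the sketch below uses that as a stand-in, since the specific procedure of [34] is not described here. The parameter grids and the synthetic data are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for the face-feature vectors.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Cross-validated grid search over the SVM parameters named in the text:
# kernel type, degree d (polynomial), gamma (RBF), and regularisation C.
grid = GridSearchCV(
    SVC(),
    param_grid=[
        {"kernel": ["rbf"], "C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
        {"kernel": ["poly"], "C": [0.1, 1, 10], "degree": [2, 3]},
    ],
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_)
```

`grid.best_params_` then holds the kernel and constants that maximized cross-validated accuracy.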