2022
DOI: 10.1007/s10639-022-11370-4
Facial emotion recognition of deaf and hard-of-hearing students for engagement detection using deep learning

Cited by 18 publications (7 citation statements) | References 29 publications
“…When compared to other research, the proposed models are competitive, often outperforming or matching other state-of-the-art results. The models of Dada et al. 53 , Lasri et al. 55 , and Baygin et al. 56 stand out with slightly higher metrics on JAFFE. However, it should be noted that the proposed models are trained and tuned using pre-processing and hyperparameters specifically tailored for the Emognition dataset, while the existing models were developed and trained for the JAFFE and KDEF datasets.…”
Section: Results (mentioning)
confidence: 82%
“…Additionally, it can integrate some structure, such as sequences that encode the order of mentioned concepts (part-of-speech tags, as in [211]) or visual objects [152]. The guiding inputs can encapsulate the user's interest in object relationships, as exemplified through the use of verbs and semantic roles to depict activities within the image and the involvement of objects in these activities [236]–[239]. Alternatively, scene graphs generated or selected by users can also be used [237], [238].…”
Section: Controllable Captioning (mentioning)
confidence: 99%
“…The prevailing success of deep learning has made person re-identification no exception. The earlier approaches based on deep learning, such as [3], [16]–[18], naively applied CNN backbones to extract global features. Because such global features are prone to ignoring local information from small regions [19], more and more works focus on learning local features.…”
Section: A. Local Feature Learning for Person Re-Identification (mentioning)
confidence: 99%