2021 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW) 2021
DOI: 10.1109/icce-tw52618.2021.9603134
Green Coffee Beans Classification Using Attention-Based Features and Knowledge Transfer

Cited by 8 publications (4 citation statements)
References 3 publications
“…Nevertheless, the ResNet18 in this experiment was not the optimal model, resulting in the lightweight model's low accuracy rate. Compared with Yang et al. [19], LDCNN improved precision by 2.12%, recall by 0.36%, and F1-score by 1.74%. Finally, LDCNN was deployed on a Raspberry Pi 4B to run the green coffee bean quality detection system (see Table 6).…”
Section: Comparison of Model Efficiency and Embedded System (contrasting)
confidence: 84%
“…The accuracy rate of the lightweight model reached up to 91% with 256,779 parameters. As illustrated in Table 5, Yang et al. [19] put forward DSC, SAM, SpinalNet, and KD methods to train the model, with which the F1-score reached 96.54%. Compared with LDCNN, the previous model [18] had higher accuracy and fewer parameters, since the latter took ResNet18 as the teacher model for training.…”
Section: Comparison of Model Efficiency and Embedded System (mentioning)
confidence: 99%
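The knowledge-transfer setup these statements describe — a ResNet18 teacher guiding a small student model — is typically trained against temperature-softened teacher outputs. A minimal NumPy sketch of that distillation loss follows; the function names and the temperature value are illustrative assumptions, not details taken from the cited papers:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between softened teacher and student outputs,
    # scaled by T^2 as in the standard knowledge-distillation loss.
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's soft predictions
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T ** 2)
```

In practice this term is mixed with the ordinary cross-entropy on the hard labels, and the student (here, the lightweight model) learns from the teacher's full output distribution rather than only the argmax class.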
“…[3] The face is the most expressive and communicative part of a human being. [4][5] Face alignment is performed, and the face is cropped from the aligned image so that the entire face area serves as input. During training of the macro-expression detection model, an attention mechanism is added so that the model focuses only on the facial regions that help the emotion classification result, which enhances the model's generalization ability.…”
Section: Literature Survey (mentioning)
confidence: 99%
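A spatial attention module of the kind this statement mentions reweights a feature map so that informative regions dominate. The sketch below is a simplified illustration only (it sums the channel-pooled maps and gates with a sigmoid, where a full module such as SAM/CBAM would apply a learned convolution); the function name and shapes are assumptions:

```python
import numpy as np

def spatial_attention(feat):
    # feat: (H, W, C) feature map. Pool across channels, form a
    # (H, W) mask in (0, 1), and reweight every channel with it.
    avg = feat.mean(axis=-1)                    # channel-average pooling
    mx = feat.max(axis=-1)                      # channel-max pooling
    mask = 1.0 / (1.0 + np.exp(-(avg + mx)))    # sigmoid gate per location
    return feat * mask[..., None]               # broadcast mask over channels
```

Locations with a low mask value contribute less to later layers, which is how the model learns to concentrate on the regions most useful for classification.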