2019 International Research Conference on Smart Computing and Systems Engineering (SCSE)
DOI: 10.23919/scse.2019.8842658
Face and Upper-Body Emotion Recognition Using Service Robot’s Eyes in a Domestic Environment

Cited by 8 publications (2 citation statements)
References 2 publications
“…The facial emotion recognition system has a wide range of applications in the fields of smart home and medical care. Functionally, it needs to accurately recognize the face or emotion and reflect the correct judgment in its output [8, 23–34], but none of the above articles build a system by integrating LabVIEW and Python. Different recognition applications and computing methods are mentioned in [35–42]; for example, [42] proposes a CNN architecture to segregate different plant images from the collected sequences.…”
Section: Introduction (mentioning)
confidence: 99%
“…Face: Kim et al. [140], Le et al. [155], Lee and Kang [156], Li et al. [166], Liu et al. [169], Lopez-Rincon [171], Maeda and Geshi [173], Nunes [194], Panya and Patel [199], Shi et al. [225], Vithanawasam and Madhusanka [273], Wu et al. [280], Zhang and Xiao [296], Zhang et al. [169, 297, 298]
Body: Inthiam, Mowshowitz, and Hayashi [122], Nunes [194], Vithanawasam and Madhusanka [273], Wang et al. [275]
Speech: Alonso-Martin et al. [4], Anjum [8], Breazeal [29], Breazeal and Aryananda [30], Chastagnol [45], Chen et al. [48], Devillers et al. [62], Erol et al. [78], Huang et al. [118], Hyun et al. [120], Kim et al. [135], Kwon et al. [151], Le and Lee [154], Li et al. [166], Park et al. [200], Park et al. [203], Park and Sim [201], Rázuri et al. [212], Song et al. [228], Tahon et al. [245], Zhu and Ahmad [302]
Brain feedback: Schaaff and Schultz [220], Su et al. [240], Tsuchiya et al. [266], Val-Calvo et al. [268]
Thermal imaging (e.g., based on facial cutaneous temperature): Abd et al. [1]
Biofeedback: Kurono et al. [149], Rani and Sarkar [210], Sugaya [241], Yang et al. [288]
Multimodal information: Bien et al. [26], Castillo et al. [41], Cid et al. [52], Keshari and Palaniswamy [134], Wu and Zheng [281], Yu and Tapus [292]
Online audio-visual emotion recognition: Kansizoglou et al. [132]…”
(mentioning)
confidence: 99%