“…Kim et al. [140], Le et al. [155], Lee and Kang [156], Li et al. [166], Liu et al. [169], Lopez-Rincon [171], Maeda and Geshi [173], Nunes [194], Panya and Patel [199], Shi et al. [225], Vithanawasam and Madhusanka [273], Wu et al. [280], Zhang and Xiao [296], Zhang et al. [169, 297, 298]
Body: Inthiam, Mowshowitz, and Hayashi [122], Nunes [194], Vithanawasam and Madhusanka [273], Wang et al. [275]
Speech: Alonso-Martin et al. [4], Anjum [8], Breazeal [29], Breazeal and Aryananda [30], Chastagnol [45], Chen et al. [48], Devillers et al. [62], Erol et al. [78], Huang et al. [118], Hyun et al. [120], Kim et al. [135], Kwon et al. [151], Le and Lee [154], Li et al. [166], Park et al. [200], Park et al. [203], Park and Sim [201], Rázuri et al. [212], Song et al. [228], Tahon et al. [245], Zhu and Ahmad [302]
Brain feedback: Schaaff and Schultz [220], Su et al. [240], Tsuchiya et al. [266], Val-Calvo et al. [268]
Thermal imaging (e.g., based on facial cutaneous temperature): Abd et al. [1]
Biofeedback: Kurono et al. [149], Rani and Sarkar [210], Sugaya [241], Yang et al. [288]
Multimodal information: Bien et al. [26], Castillo et al. [41], Cid et al. [52], Keshari and Palaniswamy [134], Wu and Zheng [281], Yu and Tapus [292]
Online audio-visual emotion recognition: Kansizoglou et al. [132] …”