2011
DOI: 10.1007/s11042-011-0744-y
Multimodal object oriented user interfaces in mobile affective interaction

Cited by 8 publications (4 citation statements)
References 31 publications
“…Similarly, speech analysis can be used to identify emotions based on the tone, pitch, and other features of a person's voice [70]. Multi-modal affect recognition can combine many sources of evidence from multiple modalities, such as the keyboard, the microphone, the camera and other sources [71][72][73]. Emotion generation is the process of using artificial intelligence (AI) algorithms to create computer feedback that conveys empathy to the users.…”
Section: Artificial Intelligence and User Modelling In Service Of Use... (mentioning; confidence: 99%)
“…As we mentioned above, the concept of multimodality has been around for quite a long time now and not just in theory. Multimodal systems have been deployed and tested in fields such as education [1,13,20,27,29,32,68], healthcare [6,7], marketing [5,30,38,42,58] or gaming [22,59].…”
Section: Multimodal Emotion Recognition Systems (mentioning; confidence: 99%)
“…Alepis and Virvou [1] developed a proposal which is the closest to our proposal we have been able to find, although it is designed specifically for mobile phones and handheld devices.…”
Section: Multimodal Emotion Recognition Systems (mentioning; confidence: 99%)
“…It is not surprising then that conventional uni-modal human-machine interactions lag in performance, robustness and naturalness when compared with human-human interactions. Recently, there has been increasing research interest in jointly processing information in multiple modalities and mimicking human-human multimodal interactions [2,4,5,9,13,14,16,18,19,21,22]. For example, human speech production and perception are bimodal in nature: visual cues have a broad influence on perceived auditory stimuli [17].…”
(mentioning; confidence: 99%)