Abstract-This paper describes the realization of a natural speech dialogue for the robot head MEXI, with a focus on its emotion recognition. Specific to MEXI is that it can recognize emotions from natural speech and also produce natural speech output with emotional prosody. For recognizing emotions from the prosody of natural speech we use a fuzzy rule-based approach. Since MEXI often communicates with well-known persons but also with unknown humans, for instance at exhibitions, we realized both a speaker-dependent and a speaker-independent mode in the prosody-based emotion recognition. A key point of our approach is that it automatically selects the most significant features from a set of twenty analyzed features, based on a training database of speech samples. This is important according to our results, since the set of significant features differs considerably between the distinguished emotions. With our approach we reached average recognition rates of 84% in speaker-dependent mode and 60% in speaker-independent mode.
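The abstract states that the most significant prosodic features are selected automatically, per emotion, from a training database. As a minimal illustrative sketch of such a selection step: the feature names, the toy data, and the Fisher-ratio criterion below are assumptions for illustration only, not the method actually used in the MEXI system.

```python
# Sketch: per-emotion selection of the most discriminative prosodic
# features from labeled training samples. Feature names, sample values,
# and the one-vs-rest Fisher-ratio criterion are illustrative assumptions.
from statistics import mean, pstdev

FEATURES = ["pitch_mean", "pitch_range", "energy_mean", "speech_rate"]

def fisher_score(pos, neg):
    """Separation of one emotion class vs. the rest for a single feature:
    squared mean difference divided by the summed within-class variances."""
    spread = pstdev(pos) ** 2 + pstdev(neg) ** 2
    if spread == 0:
        return 0.0
    return (mean(pos) - mean(neg)) ** 2 / spread

def select_features(samples, emotion, k=2):
    """Return the k features that best separate `emotion` from the rest."""
    scores = {}
    for i, name in enumerate(FEATURES):
        pos = [s["features"][i] for s in samples if s["label"] == emotion]
        neg = [s["features"][i] for s in samples if s["label"] != emotion]
        scores[name] = fisher_score(pos, neg)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy training database: (pitch_mean, pitch_range, energy_mean, speech_rate)
samples = [
    {"label": "anger",   "features": [220.0, 80.0, 0.9, 5.5]},
    {"label": "anger",   "features": [230.0, 85.0, 0.8, 5.8]},
    {"label": "neutral", "features": [150.0, 30.0, 0.4, 4.0]},
    {"label": "neutral", "features": [145.0, 28.0, 0.5, 4.2]},
]
print(select_features(samples, "anger"))
```

This kind of per-emotion ranking also illustrates the abstract's observation that the significant feature set can differ between emotions: running the selection for each emotion label in turn can yield a different top-k list for each.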
In this paper, we compare users' interaction with the humanoid robot ASIMO and the dog-shaped robot AIBO. We conducted a user study in which the participants had to teach object names and simple commands and give feedback to either AIBO or ASIMO. We did not find significant differences in the users' evaluation of the two robots or in the way commands were given to them. However, the way of giving positive and negative feedback differed significantly: for the pet robot AIBO, users tended to give reward much as they would to a real dog, touching it and commenting on its performance with utterances like "well done" or "that was right". For the humanoid ASIMO, users did not use touch as a reward and instead used personal expressions like "thank you" to give positive feedback to the robot.
This paper describes an experimental study in which we analyze how users give multimodal positive and negative feedback by speech, gesture and touch when teaching simple game-based tasks to a pet robot. The tasks are designed to let the robot explore freely and thereby provoke human reward behavior. By choosing game-based tasks, we ensure that the training can be carried out without stressing or boring the user. In this way, we can observe natural, situated reward behavior.