2013
DOI: 10.3390/s130810519

I Feel You: The Design and Evaluation of a Domotic Affect-Sensitive Spoken Conversational Agent

Abstract: We describe the work on infusion of emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired task-independent emotion model (NEMO). In order to demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need for the Dialog Manager to be modified, …
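
The architectural point in the abstract is that NEMO and the host SCA communicate externally, so the Dialog Manager never has to be modified. The sketch below is only a hypothetical illustration of that kind of decoupling; the queue-based channel, message fields, and toy appraisal are assumptions, not the paper's actual interface.

```python
import json
import queue

# Channels standing in for the external link between host SCA and emotion module.
sca_to_emotion = queue.Queue()
emotion_to_sca = queue.Queue()

def host_sca_turn(user_utterance: str) -> None:
    """Host side: publish a dialogue event without touching the Dialog Manager."""
    event = {"type": "user_turn", "text": user_utterance, "task_success": True}
    sca_to_emotion.put(json.dumps(event))

def emotion_module_step() -> dict:
    """Emotion side: appraise the latest event and report an affective state back."""
    event = json.loads(sca_to_emotion.get())
    # Toy appraisal: a successful task raises valence, a failure lowers it.
    state = {"valence": 0.6 if event["task_success"] else -0.4, "arousal": 0.3}
    emotion_to_sca.put(json.dumps(state))
    return state

host_sca_turn("turn the HiFi volume down")
print(emotion_module_step())  # e.g. {'valence': 0.6, 'arousal': 0.3}
```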

Cited by 12 publications (5 citation statements). References 29 publications (26 reference statements).
“…As such, rather than exploring and comparing the predictive accuracies of various machine learning algorithms, we focus on support vector machines (SVMs) as the primary tool for our feature-focused investigation. SVMs were chosen for their wide use and prior success in modeling human behavior (Rienks et al., 2006; Kapoor et al., 2007; Jayagopi et al., 2009).…”
Section: Design of Prediction Model (Phase 2). Citation type: mentioning.
confidence: 99%
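
The citing work's choice above is SVMs as the single predictive tool for feature-focused behavior modeling. A minimal scikit-learn sketch of that setup is below; the features, labels, and hyperparameters are synthetic placeholders, not data or settings from the cited studies.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Each row is one interaction, e.g. [speaking_time, interruptions, gaze_ratio];
# these behavioral features are invented placeholders.
X = rng.normal(size=(40, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy binary behavior label

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print(clf.predict(X[:5]))
```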
“…In similar work, Kaliouby and Robinson (2004) used a dynamic Bayesian network model to infer a person's mental state of agreement, disagreement, concentration, interest, or confusion by observing only facial expressions and head movements. Other research has tried to model cognitive states, like frustration (Kapoor et al., 2007), and social relations like influence and dominance among groups of people (Jayagopi et al., 2009; Pan et al., 2011). However, this article describes the first work toward computationally predicting the trusting behavior of an individual toward a social partner.…”
Section: Introduction. Citation type: mentioning.
confidence: 99%
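
The Kaliouby and Robinson work cited above infers mental states with a dynamic Bayesian network over facial expressions and head gestures. As a much simpler stand-in, the sketch below runs a discrete Bayesian predict-update filter over the five states named in the quote; every probability in it is invented for illustration.

```python
import numpy as np

states = ["agreement", "disagreement", "concentration", "interest", "confusion"]
cues = ["nod", "shake", "head_still", "brow_furrow"]

# Sticky state transitions: each row sums to 1 (0.8 on the diagonal).
T = np.full((5, 5), 0.05) + np.eye(5) * 0.75

# Observation likelihoods P(cue | state), one row per state (rows sum to 1).
O = np.array([
    [0.60, 0.05, 0.25, 0.10],  # agreement: mostly nods
    [0.05, 0.60, 0.25, 0.10],  # disagreement: mostly head shakes
    [0.05, 0.05, 0.50, 0.40],  # concentration
    [0.30, 0.05, 0.55, 0.10],  # interest
    [0.05, 0.10, 0.35, 0.50],  # confusion
])

belief = np.full(5, 0.2)                   # uniform prior over states
for cue in ["nod", "nod", "brow_furrow"]:  # toy observation sequence
    belief = T.T @ belief                  # predict step
    belief *= O[:, cues.index(cue)]        # update step
    belief /= belief.sum()

print(dict(zip(states, belief.round(3))))
```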
“…Prosody (intensity, vocal pitch, rhythm, rate of utterance) in speech plays a major role in expressing emotions and can be intentionally modified to communicate different feelings [43]. In this paper, we investigated how expressive synthetic voices were perceived across different ethnic groups, using two different features (duration and intensity).…”
Section: Discussion. Citation type: mentioning.
confidence: 99%
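
The two prosodic features this statement singles out, duration and intensity, are straightforward to compute from a waveform. The sketch below does so for a synthetic sine tone standing in for speech; the 25 ms frame and 10 ms hop are conventional choices, not values taken from the paper.

```python
import numpy as np

sr = 16000                                  # sample rate in Hz
t = np.arange(int(sr * 1.2)) / sr
signal = 0.3 * np.sin(2 * np.pi * 220 * t)  # 1.2 s placeholder "utterance"

duration_s = len(signal) / sr               # duration feature

frame, hop = 400, 160                       # 25 ms frames, 10 ms hop
frames = np.lib.stride_tricks.sliding_window_view(signal, frame)[::hop]
rms = np.sqrt((frames ** 2).mean(axis=1))   # frame-level intensity
intensity_db = 20 * np.log10(rms + 1e-10)

print(f"duration: {duration_s:.2f} s, mean intensity: {intensity_db.mean():.1f} dB")
```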
“…One of the goals for computational emotion modeling is to enrich the architecture of intelligent systems with emotion mechanisms similar to those of humans, and thus endow them with the capacity to "have" emotions. In the context of AAL, some studies exist in this direction; e.g., in [16] the authors describe a need-inspired emotion model applied in a HiFi agent, whose emotions are generated by evaluating the situation and comparing it to the agent's different needs.…”
Section: General Emotional Processes of Affective Systems. Citation type: mentioning.
confidence: 99%
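
The statement above summarizes the need-inspired idea: emotions arise from evaluating the current situation against the agent's needs. The toy appraisal below captures that idea only in spirit; the particular needs, weights, thresholds, and emotion labels are invented and are not NEMO's actual mechanism.

```python
def appraise(situation: dict, needs: dict) -> str:
    """Score how well the situation satisfies weighted needs, map to an emotion."""
    satisfaction = sum(w * situation.get(need, 0.0) for need, w in needs.items())
    satisfaction /= sum(needs.values())
    if satisfaction > 0.6:
        return "joy"
    if satisfaction > 0.3:
        return "contentment"
    return "distress"

# Hypothetical needs of a HiFi-control agent and a situation where the last
# command succeeded and the user seems engaged.
needs = {"task_success": 2.0, "user_engagement": 1.0}
print(appraise({"task_success": 0.9, "user_engagement": 0.7}, needs))  # joy
```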