2015 International Conference on Affective Computing and Intelligent Interaction (ACII)
DOI: 10.1109/acii.2015.7344610
Building autonomous sensitive artificial listeners (Extended abstract)

Cited by 5 publications (3 citation statements). References 27 publications.
“…Using this model, the ECA was perceived as more natural; it also created more rapport with its interlocutor during the interaction. Schröder et al (2015) developed a sensitive artificial listener that was able to produce backchannels. They developed a model that predicted when an ECA should display a backchannel and with which intention.…”
Section: State of the Art
confidence: 99%
“…Additionally, he points out that back-channels can carry different functions depending on their realization. Listener behavior, including vocal back-channels and head nods, has been explored in dyadic human-agent interaction (Gustafson et al, 2005;Maatman et al, 2005;Gratch et al, 2006;Sidner et al, 2006;Douglas-Cowie et al, 2008;Huang et al, 2011;Schroder et al, 2015) and to our knowledge, at least in one case, also for multi-party human-agent interaction (Wang et al, 2013). Douglas-Cowie et al (2008) used an artificial listening agent in a wizard-of-oz setting.…”
Section: Audio-visual Feedback
confidence: 99%
“…They found this to be a successful method to collect emotionally colored multi-modal interaction data. Schroder et al (2015) could show that their artificial emotionally expressive listening system had a positive effect on user engagement in comparison to a non-expressive baseline system. They highlighted as one of their main contributions the uniqueness of their system in that it creates a loop between multimodal human-human analysis, interpretation and affective generation of non-verbal behavior in a human-agent conversational setting.…”
Section: Audio-visual Feedback
confidence: 99%