2018
DOI: 10.1007/978-3-319-92108-2_21

An Open-Source Dialog System with Real-Time Engagement Tracking for Job Interview Training Applications

Cited by 21 publications (27 citation statements)
References 12 publications
“…location, trajectory, distance to the robot) [18,26,33], eye-gaze behaviors (e.g. looking at the agent, mutual gaze) [18,22,34-36], facial information (e.g. facial movement, expression, head pose) [34,36], conversational behaviors (e.g.…”
Section: B) Engagement Recognition (mentioning)
Confidence: 99%
“…looking at the agent, mutual gaze) [18,22,34-36], facial information (e.g. facial movement, expression, head pose) [34,36], conversational behaviors (e.g. voice activity, adjacency pair, backchannel, turn length) [18,35,37], laughing [38], and posture [39].…”
Section: B) Engagement Recognition (mentioning)
Confidence: 99%
“…For example, by recognizing user engagement, the systems can control turn-taking behaviors [12,13] and dialogue policies [14,15,16], and improve the quality of the user experience throughout the dialogue. As input features for engagement recognition, we can exploit non-verbal multimodal behaviors such as eye gaze [17,18,19,20,12,21,15], backchannels (e.g., "yeah") [19,21], laughing [22], head nodding [21], facial movement and direction [17,15], spatial location and distance [23,24,12], and conversational interaction features such as adjacency pairs [19]. In addition, the direct use of low-level signals such as acoustic and image features has been explored [10,25,26,27].…”
Section: Introduction (mentioning)
Confidence: 99%
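The feature inventory in the statement above maps naturally onto a flat per-time-window vector. The following is a minimal Python sketch of how such a multimodal feature vector might be represented; it is not taken from the cited paper or the referenced works, and every field name is a hypothetical stand-in for the cues listed (gaze, backchannels, laughter, nodding, head direction, distance).

```python
from dataclasses import dataclass, astuple

@dataclass
class EngagementFeatures:
    """Hypothetical per-window feature vector for engagement recognition.

    Field names are illustrative only; they mirror the non-verbal cues
    enumerated in the citation statement above, not any published schema.
    """
    gaze_on_agent_ratio: float  # fraction of frames gazing at the agent
    mutual_gaze_ratio: float    # fraction of frames with mutual gaze
    backchannel_count: int      # e.g. "yeah", "uh-huh" events in the window
    laugh_count: int            # detected laughter events
    nod_count: int              # detected head nods
    head_yaw_std: float         # variability of head direction (radians)
    distance_m: float           # user-to-agent/robot distance in meters

    def as_vector(self) -> list:
        """Flatten to a numeric list usable as classifier input."""
        return [float(v) for v in astuple(self)]
```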
“…In addition, the direct use of low-level signals such as acoustic and image features has been explored [10,25,26,27]. Although such recognition models were initially based on heuristic rules [9,28,23], recent approaches rely on machine learning techniques [10,12,21,29,26,15,27].…”
Section: Introduction (mentioning)
Confidence: 99%
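To illustrate the shift this last statement describes, here is a hedged sketch contrasting a hand-written heuristic rule with a learned classifier over the same feature layout as the dataclass above. The thresholds, training rows, and the choice of scikit-learn's LogisticRegression are all assumptions for illustration; none of it reflects the actual rules or models in the cited works.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: rows follow EngagementFeatures.as_vector() above; labels are
# engaged (1) / disengaged (0). All values are fabricated for illustration.
X = np.array([
    [0.9, 0.6, 3, 1, 2, 0.1, 1.0],   # engaged
    [0.8, 0.5, 2, 0, 1, 0.2, 1.2],   # engaged
    [0.2, 0.0, 0, 0, 0, 0.6, 2.5],   # disengaged
    [0.3, 0.1, 1, 0, 0, 0.5, 2.0],   # disengaged
])
y = np.array([1, 1, 0, 0])

def heuristic_engaged(x) -> bool:
    """Rule-based baseline in the spirit of the early heuristic systems:
    engaged if the user mostly looks at the agent and stays close.
    Thresholds are invented, not taken from any cited paper."""
    gaze_on_agent_ratio, _, _, _, _, _, distance_m = x
    return gaze_on_agent_ratio > 0.5 and distance_m < 1.5

# Learned alternative: a simple linear classifier over the same features.
clf = LogisticRegression().fit(X, y)

probe = np.array([[0.7, 0.4, 1, 0, 1, 0.3, 1.1]])
print("heuristic:", heuristic_engaged(probe[0]))
print("learned  :", clf.predict(probe)[0], clf.predict_proba(probe)[0, 1])
```

The point of the contrast is the one the statement makes: the heuristic encodes fixed expert thresholds, while the learned model infers feature weights from labeled interaction data and also yields a graded engagement probability rather than a binary rule output.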