Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction 2012
DOI: 10.1145/2401836.2401847

Visual interaction and conversational activity

Abstract: In addition to the contents of their speech, people who are engaged in a conversation express themselves in many nonverbal ways. This means that people interact and are attended to even when they are not speaking. In this pilot study, we created an experimental setup for a three-party interactive situation where one of the participants remained silent throughout the session, and the gaze of one of the active subjects was tracked. The eye-tracked subject was unaware of the setup. The pilot study used only two te…

Cited by 10 publications (5 citation statements)
References 11 publications
“…For example, Zhou et al. presented a non-task-oriented engagement-aware dialog system which was trained by having 2 expert annotators rate how engaging different strategies were [10]. Multiple research studies have examined the annotation and prediction of user engagement in videos of multi-party dialog, and have typically relied on gold-standard annotations rated by a few annotators (see for instance [11,12,13]). Such analysis and prediction of engagement and other learner states are also critical to the design and development of intelligent tutors and computer-assisted language learning (CALL) systems in the education domain [14,15].…”
Section: Introduction (mentioning)
confidence: 99%
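Since the studies quoted above rest on engagement labels from only a few raters, agreement between raters is the usual reliability check for such gold-standard annotations. Below is a minimal illustrative sketch of Cohen's kappa for two annotators; the function and the per-segment labels are invented for illustration and are not data from any of the cited studies.

```python
# Illustrative sketch: inter-annotator agreement (Cohen's kappa) for
# discrete engagement labels from two raters. All labels below are
# invented; they do not come from any cited study.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's label marginals.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical per-segment engagement labels from two annotators.
a = ["high", "high", "low", "mid", "high", "low", "mid", "mid"]
b = ["high", "mid", "low", "mid", "high", "low", "low", "mid"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # ~0.63 for these labels
```

Kappa near 0 means chance-level agreement; values toward 1 indicate that the gold-standard labels are reproducible across raters.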
“…Levitski et al. [8] carried out an analysis on a three-party conversation corpus. In their set-up they tracked the eye gaze of one participant.…”
Section: Introduction (mentioning)
confidence: 99%
“…First of all, our analysis is based on an eight-party conversation rather than a 4- or 5-party conversation as in [13,5,1] and [8]. Third, differently from [3] and [8], our individual engagement annotations are based on the rankings of the participants themselves, and we are using a predefined annotation scheme rather than relying solely on the third-party annotators' intuitions. However, similarly to [3], we are not using a fixed window length for the group involvement annotations.…”
Section: Introduction (mentioning)
confidence: 99%
“…Gaze-tracking is important in order to manage smooth turn-taking [7,12] and to get feedback about the partner's interest in the topic. As humans direct their gaze towards objects of interest, it is useful if the robot can infer where the partner's attention is focussed, and if they are still interested in what it is presenting.…”
Section: Discussion and Future Work (mentioning)
confidence: 99%
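To make the inference step in the quoted passage concrete, here is a minimal sketch, under assumed inputs, of mapping a tracked gaze direction to the nearest known target by angular distance. The target names, coordinates, and the angular threshold are hypothetical; a real system would take the gaze origin and direction from the eye tracker and the target positions from a calibrated scene model.

```python
# Illustrative sketch: infer which known object the partner attends to
# by picking the target with the smallest angular offset from the gaze
# ray. All positions and the threshold are hypothetical.
import math

def angle_between(u, v):
    """Angle in radians between two 3-D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def attended_target(gaze_origin, gaze_dir, targets, max_angle=0.2):
    """Name of the target closest in angle to the gaze direction,
    or None if nothing lies within max_angle radians (partner is
    looking elsewhere)."""
    best_name, best_angle = None, max_angle
    for name, pos in targets.items():
        to_target = [p - o for p, o in zip(pos, gaze_origin)]
        if not any(to_target):
            continue  # gaze origin coincides with the target
        ang = angle_between(gaze_dir, to_target)
        if ang < best_angle:
            best_name, best_angle = name, ang
    return best_name

# Hypothetical scene: does the partner look at the robot's face,
# at the object being presented, or away?
targets = {"robot_face": (0.0, 1.2, 1.0),
           "presented_object": (0.4, 0.8, 0.9)}
print(attended_target((0.0, 0.0, 1.0), (0.38, 0.78, -0.08), targets))
# -> "presented_object"
```

Returning None when no target falls within the threshold lets a system treat "looking away" as a distinct signal, e.g. as waning interest in what is being presented.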