9th IEEE International Workshop on Advanced Motion Control, 2006.
DOI: 10.1109/amc.2006.1631755
A spatial model of engagement for a social robot

Cited by 154 publications (116 citation statements)
References 12 publications
“…Bayesian inference algorithms and Hidden Markov Models have also been applied successfully to modelling and predicting spatial user information (Govea, 2007; Hanajima et al., 2005). Michalowski et al. (2006) review models that describe social engagement based on the spatial relationships between a robot and a person, with emphasis on the person's movement. Although the robot is not perceived as a human when encountering people, the hypothesis is that the robot's behavioural reactions to motion should resemble human-human scenarios.…”
Section: Related Work
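The HMM-based prediction mentioned in the statement above can be illustrated with a minimal Bayesian filtering step. The states, observations, and probability tables below are invented for illustration and are not taken from Govea (2007) or Hanajima et al. (2005):

```python
# Hypothetical discrete spatial states for a person near the robot.
STATES = ["far", "approaching", "near", "leaving"]

# Illustrative (made-up) transition probabilities A[i][j] = P(next=j | current=i).
A = [
    [0.70, 0.20, 0.05, 0.05],
    [0.10, 0.60, 0.25, 0.05],
    [0.05, 0.10, 0.70, 0.15],
    [0.30, 0.05, 0.05, 0.60],
]

# Illustrative emission probabilities B[i][k] = P(obs=k | state=i), with
# observations being coarse range readings: 0="long", 1="mid", 2="short".
B = [
    [0.80, 0.15, 0.05],
    [0.30, 0.50, 0.20],
    [0.05, 0.25, 0.70],
    [0.30, 0.50, 0.20],
]

def forward_step(belief, obs):
    """One HMM filtering step: predict with A, correct with B, normalise."""
    predicted = [sum(belief[i] * A[i][j] for i in range(4)) for j in range(4)]
    updated = [predicted[j] * B[j][obs] for j in range(4)]
    z = sum(updated)
    return [u / z for u in updated]

belief = [0.25] * 4  # uniform prior over spatial states
for obs in (0, 1, 1, 2):  # range readings: long, mid, mid, short
    belief = forward_step(belief, obs)
print(STATES[max(range(4), key=lambda j: belief[j])])  # most likely state
```

The same predict-and-correct structure underlies both plain Bayesian filtering and the HMM forward algorithm; only the state and observation definitions change between applications.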
“…Several sensors have been used, including 2D and 3D vision (Dornaika & Raducanu, 2008; Munoz-Salinas et al., 2005), thermal tracking (Cielniak et al., 2005) and range scans (Fod et al., 2002; Kirby et al., 2007; Rodgers et al., 2006; Xavier et al., 2005). Laser scans are typically used for person detection, whereas combining them with cameras also yields pose estimates (Feil-Seifer & Mataric, 2005; Michalowski et al., 2006). Using face detection requires the person to always face the robot and to be close enough that a sufficiently high-resolution image of the face can be obtained (Kleinehagenbrock et al., 2002), limiting its use in environments where people move and turn frequently.…”
Section: Related Work
“…In [6] the problem is relaxed by only inquiring whether the person is looking at the system or not. In [13], detecting a frontal face at a suitable spatial location is enough to adjust the classification to a higher level of engagement. In other works it is not clear how the task is solved.…”
Section: Related Work
“…How to use this information to plan appropriate interaction initiation remains a challenging issue, although some work has been proposed in the literature. Michalowski et al. [6], for example, proposed an approach based on social space to categorise stages of user engagement with a robot, such as present, attending, engaged and interacting. Peters proposed a perceptually-based theory-of-mind model for interaction initiation applied to virtual agents [7] and evaluated user perception of attention behaviours for interaction initiation in virtual environments [8].…”
Section: Introduction
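The engagement stages named in the statement above (present, attending, engaged, interacting) could be sketched as a rule-based classifier over spatial cues. The distance thresholds and rules below are illustrative assumptions, not the zones actually defined by Michalowski et al. [6]:

```python
def engagement_level(distance_m, facing_robot, touching_ui=False):
    """Classify a person's engagement with a robot from spatial cues.

    The thresholds here are invented for illustration; the cited model
    defines engagement zones spatially around the robot in a similar spirit.
    """
    if touching_ui or distance_m < 0.5:
        return "interacting"   # direct manipulation / intimate range
    if distance_m < 1.5 and facing_robot:
        return "engaged"       # close to the robot and oriented toward it
    if distance_m < 3.0 and facing_robot:
        return "attending"     # has noticed the robot from social distance
    if distance_m < 6.0:
        return "present"       # merely sharing the space with the robot
    return "absent"

print(engagement_level(2.0, True))  # attending
```

A robot could poll such a classifier on each tracking update and trigger its interaction-initiation behaviour when the level rises, which is the planning question the statement identifies as open.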