2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication
DOI: 10.1109/roman.2012.6343875
Learning the communication of intent prior to physical collaboration

Abstract: When performing physical collaboration tasks, like packing a picnic basket together, humans communicate strongly and often subtly via multiple channels like gaze, speech, gestures, movement and posture. Understanding and participating in this communication enables us to predict a physical action rather than react to it, producing seamless collaboration. In this paper, we automatically learn key discriminative features that predict the intent to handover an object using machine learning techniques. We …

Cited by 46 publications (40 citation statements)
References 17 publications (16 reference statements)
“…In our second study (Strabala, Lee, Dragan, Forlizzi, & Srinivasa, 2012), we observed 27 human pairs performing a task that required handovers. The participants were placed in a kitchen environment and tasked with putting away a bag of groceries and packing a picnic basket.…”
Section: Learning the Communication of Intent
confidence: 99%
“…in Fig. 2, the first split is on > 0.25 vs. ≤ 0.25). A high value (closer to 1.0) indicates a higher match between the interaction and the feature (details in Strabala et al., 2012). The figure also shows, for each branch, the percentage of the total data for signals (green) and non-signals (red).…”
Section: Learning the Communication of Intent
confidence: 99%
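The decision-tree splits described in the citation statement above can be sketched with a minimal example. This is an illustrative reconstruction only, assuming scikit-learn; the feature names, threshold, and training data below are hypothetical stand-ins, not values from Strabala et al. (2012):

```python
# Illustrative sketch: a decision tree over interaction-feature match scores
# in [0, 1], where a split like "> 0.25 vs. <= 0.25" separates handover
# signals from non-signals. Features and data are hypothetical examples.
from sklearn.tree import DecisionTreeClassifier

# Each row: [gaze_at_partner, arm_extension] match scores in [0, 1]
# (hypothetical feature names for illustration)
X = [
    [0.9, 0.8],  # strong handover signal
    [0.7, 0.6],
    [0.2, 0.9],
    [0.3, 0.2],  # non-signal
    [0.1, 0.1],
    [0.8, 0.1],
]
y = [1, 1, 1, 0, 0, 0]  # 1 = handover signal, 0 = non-signal

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

# A new interaction with high match scores falls down the "signal" branch
print(tree.predict([[0.85, 0.7]]))
```

A learned tree of this kind makes the discriminative features inspectable: each internal node names a feature and a threshold, and the leaf proportions correspond to the signal/non-signal percentages the quoted passage describes for each branch.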
“…Their later work [41] used various pre-handover cues, such as proxemics, pose and gaze, to predict handover intent. In our work, we focused on the impact of robot gaze behaviors during the handover, after the handover intent has been clearly understood by both the giver and the receiver.…”
Section: Robot Gaze and Handover in HRI
confidence: 99%