2013
DOI: 10.1088/1748-3182/8/3/035002
Contextual action recognition and target localization with an active allocation of attention on a humanoid robot

Abstract: Exploratory gaze movements are fundamental for gathering the most relevant information regarding the partner during social interactions. Inspired by the cognitive mechanisms underlying human social behaviour, we have designed and implemented a system for dynamic attention allocation which is able to actively control gaze movements during a visual action recognition task, exploiting its own action execution predictions. Our humanoid robot is able, during the observation of a partner's reaching movement…
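The paper's own implementation is not reproduced here; purely as an illustrative assumption, prediction-driven attention allocation of the kind the abstract describes is often modeled as choosing the gaze target that maximally disambiguates competing action hypotheses, i.e. the one with the highest expected information gain. A minimal sketch (all names and the toy numbers are hypothetical, not from the paper):

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def expected_info_gain(prior, likelihoods):
    """Expected entropy reduction over action hypotheses from fixating
    one target. prior[h] = P(hypothesis h); likelihoods[o][h] = P(obs o | h)."""
    h_prior = entropy(prior)
    gain = 0.0
    for lik in likelihoods:                     # each possible observation
        p_obs = sum(l * p for l, p in zip(lik, prior))
        if p_obs == 0:
            continue
        posterior = [l * p / p_obs for l, p in zip(lik, prior)]
        gain += p_obs * (h_prior - entropy(posterior))
    return gain

def select_gaze_target(prior, targets):
    """Pick the candidate gaze target with the highest expected gain."""
    return max(targets, key=lambda t: expected_info_gain(prior, targets[t]))

# Toy example: two competing reach hypotheses, two candidate fixation points.
prior = [0.5, 0.5]
targets = {
    # observations here strongly discriminate the hypotheses
    "cup_location": [[0.9, 0.1], [0.1, 0.9]],
    # observations here are uninformative
    "table_centre": [[0.5, 0.5], [0.5, 0.5]],
}
print(select_gaze_target(prior, targets))  # -> cup_location
```

The greedy argmax over expected information gain is one common reading of "active allocation of attention"; the actual system in the paper couples this to its motor-prediction machinery rather than to fixed likelihood tables.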

Cited by 25 publications (16 citation statements)
References 32 publications (50 reference statements)
“…In the social domain, support for this view comes from a variety of sources, including studies of motor activation during action observation, or interference effects between observed and performed actions (Aglioti et al., 2008, Cross et al., 2006, Kilner et al., 2003, Umiltà et al., 2001), see Kilner and Lemon (2013) for a review. This sort of evidence has motivated a variety of theoretical and computational models of motor involvement in action observation (Demiris and Khadhouri, 2005, Friston et al., 2011, Ognibene and Demiris, 2013, Ognibene et al., 2013, Wolpert et al., 2003), see Giese and Rizzolatti (2015) for a review. Our model significantly advances the state of the art by assigning the motor system a role in hypothesis testing during action observation, too.…”
Section: Discussion
confidence: 99%
“…Future research may consider acquiring the gaze sequences from the perspective of the worker. This approach may be beneficial in developing an autonomous robotic assistant (Ognibene and Demiris, 2013 ; Ognibene et al, 2013 ) that can leverage its onboard camera to obtain the different items human users gaze toward. Future work may also compare the performance of human observers and the types of errors they make to those of our machine learning model.…”
Section: Discussion
confidence: 99%
“…To this end, we collected data of dyadic interactions in which a “customer” and a “worker” engaged in a sandwich-making task and analyzed how the customers' gaze patterns indicated their intentions, which we characterized as the ingredients they chose. Conceptually, this interaction can be characterized as involving three processes: (1) the customer looks at possible ingredients to make a decision about which ingredient to request (Hayhoe and Ballard, 2014 ); (2) the customer signals their decision via behavioral cues (Pezzulo et al, 2013 ); and (3) the worker observes the customer's gaze behaviors to predict their intentions (Doshi and Trivedi, 2009 ; Ognibene and Demiris, 2013 ; Ognibene et al, 2013 ). Our goal is to quantify how much information the customer's gaze provides about their intentions in the first two processes.…”
Section: Introduction
confidence: 99%
“…Associating intentions and mental states to agents' behavior may encourage the observer to actively search for cues, such as unnoticed affordances, to acquire a better understanding of a given situation and enable more precise predictions [17]. Active perception may be necessary to eliminate the passive nature of robots' exploration and understanding of the environment and agents, which is in contrast with the ecological behavior seen in humans and limits the quality of human-robot interactions.…”
Section: A ToM for (Active) Perception
confidence: 99%