2012
DOI: 10.1016/j.neuroimage.2011.08.038
Squeezing lemons in the bathroom: Contextual information modulates action recognition

Cited by 57 publications (51 citation statements)
References 48 publications
“…No modulations were observed at 80 ms. Importantly, both effects were found in a time window (>240 ms) where, we can assume, contextual information had already been fully processed (Biederman et al., 1974; Thorpe et al., 1996; Bar et al., 2006; Kveraga et al., 2007).…”
Section: Discussion
confidence: 96%
“…Notably, the last time window of the video clips depicted the closing phase of the movement, when the hand aperture was minimal. It has been previously shown that mirror-like motor facilitation of the FDI muscle decreases during observation of the end posture of an action, where the hand has maximal finger closure (Urgesi et al., 2006, 2010). These findings might reflect this phenomenon, showing that the time course of motor activation triggered by action observation parallels the dynamics of movement execution (Gangitano et al., 2001, 2004; Montagna et al., 2005).…”
Section: Grip Analysis
confidence: 89%
“…This was defined as matching the percept of an action to a corresponding action in memory (e.g., Jeannerod 2006). In humans, so-called mirror neurons contribute to recognising actions and to identifying their goal (Iacoboni et al. 2005; Johnson-Frey et al. 2003; Kilner, Friston, and Frith 2007; Wurm and Schubotz 2012; Van Overwalle and Baetens 2009). The bartending robot has to rely on computer vision for recognising non-verbal actions.…”
Section: Intention Recognition
confidence: 99%