2013
DOI: 10.1037/a0027533
Incidental and context-responsive activation of structure- and function-based action features during object identification.

Abstract: Previous studies suggest that action representations are activated during object processing, even when task-irrelevant. In addition, there is evidence that lexical-semantic context may affect such activation during object processing. Finally, prior work from our laboratory and others indicates that function-based (“use”) and structure-based (“move”) action subtypes may differ in their activation characteristics. Most studies assessing such effects, however, have required manual object-relevant motor responses,…

Cited by 56 publications (86 citation statements) | References 77 publications (127 reference statements)
“…Unreachable objects may not evoke motor affordances at all, as previous EEG findings from Wamain et al. (2016) suggest, or unreachable objects may only evoke functional affordances, since the involvement of functional gestures in object visual representations may be more stable than that of structural gestures (Buxbaum & Kalénine, 2010; Lee et al., 2013). The present behavioral results cannot disentangle between the two alternatives.…”
Section: Discussion (contrasting)
confidence: 63%
“…Perceived objects reactivate many different action representations in a flexible way (Borghi & Riggio, 2015; Natraj, Pella, Borghi, & Wheaton, 2015; Thill, Caligiore, Borghi, Ziemke, & Baldassarre, 2013). In particular, perceived objects may evoke both structural and functional affordances (Bub, Masson, & Cree, 2008; Lee, Middleton, Mirman, Kalénine, & Buxbaum, 2013). Moreover, the relative importance of structural and functional gesture activation depends on visual context and action goals.…”
Section: Introduction (mentioning)
confidence: 97%
“…Recently, eye-tracking studies have demonstrated that visual attention is diverted to distractor objects sharing action features with targets even in the absence of an overt action task. For example, when searching for a named object, participants look longer at distractors sharing a hand posture (e.g., pinch, palm, clench, or poke) with the target than at action-unrelated distractors (Lee et al., 2013). Moreover, participants with deficits in action recognition and skilled object use show a reduction and delay in this competition pattern (Myung et al., 2010; Lee et al., submitted).…”
Section: Discussion (mentioning)
confidence: 99%
“…One explanation of these results is that the privileged status of thematic relations for manipulable artifacts reflects the action knowledge that we have about these objects. Indeed, a growing number of studies suggest that action knowledge is a component of the semantic representations of manipulable artifacts (Helbig et al., 2006, 2010; Myung et al., 2006, 2010; Campanella and Shallice, 2011; Lee et al., 2013). On most accounts of semantic memory, these action features of objects are represented separately from other kinds of semantic features, like color, shape, and typical location (e.g., Allport, 1985; Warrington and McCarthy, 1987; McRae et al., 1997; Barsalou, 1999).…”
Section: Introduction (mentioning)
confidence: 99%
“…Incidental retrieval of action information during object processing has been found even in the absence of action preparation, imagery, or execution (Green and Hummel, 2006; Harris et al., 2012; Helbig et al., 2006; Roberts and Humphreys, 2011; Wamain et al., 2014). For example, when searching for an object in an array, participants fixate more on related distractors sharing manipulation actions with the targets than on unrelated distractors (Lee et al., 2013; Myung et al., 2006).…”
Section: Introduction (mentioning)
confidence: 99%