2017
DOI: 10.1523/jneurosci.2496-16.2017
Neural Representations of Observed Actions Generalize across Static and Dynamic Visual Input

Abstract: People interact with entities in the environment in distinct and categorizable ways. We can recognize these action categories across variations in actors, objects, and settings; moreover, we can recognize them from both dynamic and static visual input. However, the neural systems that support action recognition across these perceptual differences are unclear. Here, we used multivoxel pattern analysis of fMRI data to identify brain regions that support visual action categorization in a format-independent…
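The cross-format decoding logic the abstract describes, training a classifier on response patterns from one visual format and testing it on the other, can be sketched on synthetic data. Everything below (array sizes, noise levels, variable names) is an illustrative assumption, not the paper's actual pipeline:

```python
# Sketch of cross-format MVPA: fit a classifier on "video" trial patterns,
# test on "image" trial patterns. Above-chance transfer would indicate a
# format-independent action representation. All data here are synthetic.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_actions, n_trials, n_voxels = 4, 20, 50

# A shared per-action pattern, plus trial-by-trial noise in each format.
action_patterns = rng.normal(size=(n_actions, n_voxels))

def simulate(noise_scale):
    X, y = [], []
    for a in range(n_actions):
        for _ in range(n_trials):
            X.append(action_patterns[a] + rng.normal(scale=noise_scale, size=n_voxels))
            y.append(a)
    return np.array(X), np.array(y)

X_video, y_video = simulate(noise_scale=1.0)   # dynamic-input trials
X_image, y_image = simulate(noise_scale=1.0)   # static-input trials

# Cross-format decoding: train on videos, evaluate on static images.
clf = LinearSVC(max_iter=10000).fit(X_video, y_video)
transfer_acc = clf.score(X_image, y_image)
print(transfer_acc)  # chance level here would be 0.25
```

In a real analysis the transfer would of course run on beta estimates from preprocessed fMRI data, typically cross-validated in both directions (train video / test image and the reverse).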

Cited by 69 publications (75 citation statements)
References 105 publications
“…However, common verbs are not always commonly-seen actions (e.g., "can"), and many commonly-seen actions are not often talked about (e.g., "chopping vegetables"). Further, using verbs (particularly without considering the type of verb) can unintentionally constrain one's hypothesis space by implying that actions that look very different but are described by the same verb ("pushing a button" vs. "pushing a person") share a neural representation (Hafri et al, 2017). To understand action observation at a more perceptual, rather than conceptual level, we instead sampled our action stimuli based on common human experiences, using the American Time Use Survey as a guide (see Methods; cf.…”
mentioning
confidence: 99%
“…Therefore, it is critical to consider more widespread action responses across broad swathes of occipitotemporal and parietal cortices (e.g. Hafri et al, 2017;Wurm & Caramazza, 2017). Given this, we used a novel method to identify voxels that reliably differentiate among different actions (Tarhan & Konkle, under review), rather than constraining our analyses to specific regions of interest emphasized in the literature.…”
mentioning
confidence: 99%
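The statement above mentions selecting voxels that reliably differentiate among actions rather than using predefined regions of interest. One common way to operationalize this is split-half reliability: keep voxels whose profile of responses across actions correlates between independent halves of the data. The sketch below uses synthetic data; the sizes, noise level, and 0.3 threshold are illustrative assumptions, not details from the cited method:

```python
# Sketch of reliability-based voxel selection: compute, per voxel, the
# correlation of its action-response profile across two data halves, and
# keep voxels above a reliability threshold. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(1)
n_actions, n_voxels = 10, 200

# True tuning for the first half of the voxels; the rest carry no signal.
tuning = rng.normal(size=(n_actions, n_voxels))
tuning[:, n_voxels // 2:] = 0.0

half1 = tuning + rng.normal(scale=0.5, size=(n_actions, n_voxels))
half2 = tuning + rng.normal(scale=0.5, size=(n_actions, n_voxels))

def zscore(a):
    # Standardize each voxel's profile across actions.
    return (a - a.mean(axis=0)) / a.std(axis=0)

# Per-voxel split-half reliability (Pearson r across action profiles).
reliability = (zscore(half1) * zscore(half2)).mean(axis=0)
reliable_voxels = np.flatnonzero(reliability > 0.3)
print(len(reliable_voxels))
```

Tuned voxels should show high split-half correlations while pure-noise voxels scatter around zero, so thresholding recovers mostly the signal-carrying set.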
“…static/dynamic; verbal/visual) spans the lateral and ventral occipitotemporal cortex (e.g. Hafri et al, 2017;O'Toole et al, 2014;Wurm & Lingnau, 2015; for review see Lingnau & Downing, 2015).…”
Section: Discussion
mentioning
confidence: 99%
“…This conception has recently been challenged by demonstrating that posterior temporal cortex encodes action representations (e.g. of opening and closing) that generalize across a range of perceptual features, such as the body parts (Vannuscorps et al, 2018) and movements used to carry out an action (Wurm and Lingnau, 2015;Vannuscorps et al, 2018), the type of object involved in an action (Wurm and Lingnau, 2015;Vannuscorps et al, 2018), and whether an action is recognized from photographs or videos (Hafri et al, 2017). These findings suggest that temporal cortex encodes action representations that abstract away from various details of a perceived action.…”
Section: Introduction
mentioning
confidence: 99%