2008
DOI: 10.1162/jocn.2008.20082
Class Information Predicts Activation by Object Fragments in Human Object Areas

Abstract: Object-related areas in the ventral visual system in humans are known from imaging studies to be preferentially activated by object images compared with noise or texture patterns. It is unknown, however, which features of the object images are extracted and represented in these areas. Here we tested the extent to which the representation of visual classes used object fragments selected by maximizing the information delivered about the class. We tested functional magnetic resonance imaging blood oxygenation lev…
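The fragment-selection approach the abstract refers to (following Ullman et al., 2002) scores candidate image fragments by the mutual information between a fragment's presence in an image and the image's class membership; the most informative fragments are kept as class features. As a rough illustration only, and not the authors' implementation, the core score for a binary fragment detector can be computed from co-occurrence counts like this:

```python
import math
from collections import Counter

def mutual_information(detections, labels):
    """I(C; F) in bits, where F is a binary fragment-detection variable
    (1 if the fragment matched the image, 0 otherwise) and C is the
    binary class label of each image."""
    n = len(labels)
    joint = Counter(zip(detections, labels))  # counts of (f, c) pairs
    pf = Counter(detections)                  # marginal counts of F
    pc = Counter(labels)                      # marginal counts of C
    mi = 0.0
    for (f, c), nfc in joint.items():
        p_fc = nfc / n
        # p(f,c) * log2( p(f,c) / (p(f) * p(c)) )
        mi += p_fc * math.log2(nfc * n / (pf[f] * pc[c]))
    return mi

# Toy data: a fragment that fires mostly on class-1 images.
detections = [1, 1, 1, 0, 0, 0, 1, 0]
labels     = [1, 1, 1, 1, 0, 0, 0, 0]
print(round(mutual_information(detections, labels), 3))  # → 0.189
```

In the fragment framework, this score would be evaluated for many candidate fragments (with detection defined by, e.g., thresholded normalized cross-correlation), and fragments of intermediate size and resolution typically maximize it; the detection rule and threshold here are assumptions for the sketch.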

Cited by 41 publications (43 citation statements)

References 44 publications
“…the basic representational elements are entire faces or at least large and complex object fragments. However, certainly at various stages of these representations, local image elements, such as informative “fragments”, also play an important role (Lerner et al, 2008). Similarly, recent evidence from single unit recordings in monkeys points to important contributions of certain face feature parameters, such as iris size, to the neuronal activation (Freiwald et al, 2009).…”
Section: Discussion
confidence: 99%
“…The optimal solution would enable representation of both object category (largest component of variance) and object identity. Such a solution might be implemented by feature selectivity at the columnar level (Tanaka, 1996) which is tuned to those object features that are most informative for discriminating categories as well as exemplars (Sigala and Logothetis, 2002;Ullman et al, 2002;Lerner et al, 2008), while untangling category and exemplar distinctions from accidental properties in multivariate space (DiCarlo and Cox, 2007).…”
Section: In What Sense Is the Representation Categorical? And In What…
confidence: 99%
“…Although some studies have implied that the involvement of OFA in face representation is limited to processing of facial features (Liu et al, 2009), or their spatial configuration (Rotshtein et al, 2007; Rhodes et al, 2009), others have suggested the involvement of OFA in higher levels of facial processing (Chen et al, 2007) including its necessity for facial recognition, as patients with a lesion overlap in the right OFA exhibit face recognition deficits (Rossion et al, 2003; Steeves et al, 2006, 2009). Thus, the face-related activation in the OFA region may arise from the processing of low-level face or shape features that face images contain (Lerner et al, 2008; Dakin and Watt, 2009; Liu et al, 2009), or alternatively, this region may contain neuronal representations that encode face identity (independently of low-level features).…”
Section: Introduction
confidence: 99%
“…The LO region (Malach et al, 1995; Grill-Spector et al, 1998a; Lerner et al, 2001) served as a control site since it is non-selectively activated by objects and faces (Avidan et al, 2002b; Fang et al, 2007; Gilaie-Dotan et al, 2008, 2010; Lerner et al, 2008). Furthermore, it is in the vicinity of OFA yet functionally separable from OFA, as has been previously shown in TMS studies (Pitcher et al, 2009).…”
Section: Introduction
confidence: 99%