2022
DOI: 10.1037/xhp0001031
Thematic object pairs produce stronger and faster grouping than taxonomic pairs.

Abstract: Studies of visual object processing have long appreciated that semantic meaning is automatically extracted. However, “semantics” has largely been defined as a unitary concept that describes all meaning-based information. In contrast, the concept literature divides semantics into taxonomic and thematic types. Taxonomic relationships reflect categorization by similarities (e.g., dog—wolf); thematic groups are based on complementary relationships (e.g., swimsuit—goggles). Critically, thematic relationships are le…

Cited by 4 publications (5 citation statements)
References 68 publications (127 reference statements)
“…Similarly, images with heterogeneous object semantics elicit more contraction in memory, compared to scenes that contain the same number of objects sharing a semantic label (Greene & Trivedi, 2022). This effect is most likely due to related objects being automatically attended together, leading to more distributed attention over the scene image (Wei et al., 2018; Mack & Eckstein, 2011; Nah et al., 2021; Nah & Geng, 2022). These results suggest that the objects participants attend to during perception are an important factor determining the trend and degree of transformation.…”
Section: Introduction
Confidence: 84%
“…This stands in contrast to the considerable evidence showing that preparatory attention is flexible, highly adaptive, and sensitive to changing contexts. Many behavioral studies have shown that scene structure, statistically co-occurring object pairs, and large, stable, predictive “anchor objects” are all used to guide attention and locate smaller objects (Battistoni et al, 2017; Boettcher et al, 2018; Castelhano & Krzys, 2020; Castelhano et al, 2009; Collegio et al, 2019; de Lange et al, 2018; Gayet & Peelen, 2022; Hall & Geng, 2024; Helbing et al, 2022; Josephs et al, 2016; Mack & Eckstein, 2011; Malcolm & Shomstein, 2015; Nah & Geng, 2022; Peelen et al, 2024; Vo et al, 2019; Yu et al, 2023; Zhou & Geng, 2024). For example, in a previous behavioral study, we showed that when the target is hard to find, scene information is used as a proxy in the target template to guide attention toward the likely target location more efficiently (Zhou & Geng, 2024).…”
Section: Discussion
Confidence: 99%
“…Many scenes in daily life are meaningfully defined by the specific constellations of objects they contain (e.g., a knife and fork either side of a plate, or a sofa facing a television). Our encounters with such multi-object arrangements are so ubiquitous that the visual system is remarkably sensitive to statistical regularities within these groups, with both the likelihood of objects’ co-occurrence and their configural arrangement influencing the way we search for, attend to, and remember familiar objects (Biederman, 1972; Biederman et al, 1973; Kaiser et al, 2015; Võ et al, 2019; Nah and Geng, 2022).…”
Section: Introduction
Confidence: 99%