2023
DOI: 10.1017/9781009003414

Attending to Moving Objects

Abstract: Our minds are severely limited in how much information they can extensively process, in spite of being massively parallel at the visual end. When people attempt to track moving objects, only a limited number can be tracked, which varies with display parameters. Associated experiments indicate that spatial selection and updating has higher capacity than selection and updating of features such as color and shape, and is mediated by processes specific to each cerebral hemisphere, such that each hemifield has its …

Cited by 11 publications (8 citation statements)
References 244 publications (316 reference statements)

Citation statements

“…However, our findings do align well with two recent lines of work in attention and working memory. First, people's ability to extrapolate motion in perceptual tracking was recently suggested to have a capacity limit of only one object (Holcombe, 2023), perhaps due to challenges of physical simulation (Lau and Brady, 2020). Second, updating active representations was argued to depend on sequentially loading objects, one at a time, into the 'focus of attention' (Oberauer, 2002).…”
Section: Discussion
Citation type: mentioning (confidence: 99%)

“…There are ongoing, important debates regarding the exact limitations people have in tracking the objects they see (e.g. Feria, 2013; Franconeri et al., 2010; Lovett et al., 2019), including whether mental capacity is rigid or dynamic (for a recent review, see Holcombe, 2023). Our focus here, however, is not on vision, but on the imagination.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)

“…Holcombe (2023) has suggested that human performance in multiple object tracking may be a result of two systems working together: (i) velocity-using unitary cognition, that is, a low-capacity process referred to as System 2 in the broader literature, along with (ii) a high-capacity, low-level process based on a nearest-neighbour heuristic. The use of velocity information for position estimation may then be the result of the involvement of System 2.…”
Section: Computational Modeling
Citation type: mentioning (confidence: 99%)
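
To make the contrast concrete, here is a minimal Python sketch of the two processes this statement describes. It is our illustration, not code from the cited paper or from Holcombe (2023); the function names and the constant-velocity assumption are hypothetical.

    import numpy as np

    def nearest_neighbour_update(prev_positions, detections):
        # High-capacity heuristic: re-link each tracked target to whichever
        # current detection lies closest to its previous position; no velocity used.
        return np.array([detections[np.argmin(np.linalg.norm(detections - p, axis=1))]
                         for p in prev_positions])

    def unitary_velocity_update(pos_prev, pos_curr, detections):
        # Low-capacity "unitary" process: predict the single attended object's next
        # position under constant velocity, then link to the nearest detection.
        predicted = pos_curr + (pos_curr - pos_prev)
        return detections[np.argmin(np.linalg.norm(detections - predicted, axis=1))]

On this reading, every target gets the cheap nearest-neighbour update, while only one object at a time can benefit from the velocity-based prediction.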

“…In order to use velocity information to estimate the location of an object, we need information about its location at a minimum of two points in time. Following Holcombe (2022, 2023), we restrict our model to do this only for a single object at any time step. So, whenever a different object needs to be processed by the unitary system, the unitary process's existing information will be discarded.…”
Section: Computational Modeling
Citation type: mentioning (confidence: 99%)
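
That single-slot restriction can be sketched as follows; again this is our own illustration with hypothetical class and method names, assuming positions are NumPy vectors. Switching the slot to a different object discards the stored samples, so velocity information has to be re-accumulated from scratch.

    class UnitarySlot:
        # Holds at most the last two positions of a single attended object.
        def __init__(self):
            self.obj_id = None
            self.history = []

        def attend(self, obj_id, position):
            if obj_id != self.obj_id:
                # A different object enters the slot: discard what was stored.
                self.obj_id = obj_id
                self.history = []
            self.history = (self.history + [position])[-2:]

        def velocity(self):
            # Defined only once two samples of the same object are available.
            if len(self.history) < 2:
                return None
            return self.history[1] - self.history[0]

Immediately after a switch the slot cannot return a velocity, which is one way to capture why only one object per time step can be extrapolated.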

“…This requirement makes the MIT task more difficult than the MOT task, and the MIT task more closely resembles dynamic real-life environments in which all the to-be-tracked objects have unique visual identities. In general, a participant's performance on these attentional tracking tasks decreases as the number of targets increases, although the observers' tracking ability depends on numerous factors (see Cavanagh & Alvarez, 2005; Holcombe, 2023; Meyerhoff et al., 2017; Scholl, 2009). Generally, a participant's tracking capacity for the MIT task is smaller than for the MOT task (Horowitz et al., 2007).…”
Section: Introduction
Citation type: mentioning (confidence: 99%)