2021
DOI: 10.1073/pnas.2025646118
How top-down and bottom-up attention modulate risky choice

Abstract: We examine how bottom-up (or stimulus-driven) and top-down (or goal-driven) processes govern the distribution of attention in risky choice. In three experiments, participants chose between a certain payoff and the chance of receiving a payoff drawn randomly from an array of eight numbers. We tested the hypothesis that initial attention is driven by perceptual properties of the stimulus (e.g., font size of the numbers), but subsequent choice is goal-driven (e.g., win the best outcome). Two experiments in which …

Cited by 28 publications (32 citation statements); references 32 publications.
“…One potential avenue is to contrast fixations to salient features early and late in learning. Studies have shown that overt attention initially orients to salient features, but that these effects can be overcome by increasing endogenous covert attention to task-relevant dimensions (Theeuwes, 2010; Vanunu et al., 2021). With AARM’s specification for unconstrained total attention (see the Attention Is Not a Zero-Sum Game section), it would be possible to specify a different baseline attention value for each dimension.…”
Section: Discussion
Confidence: 99%
“…However, we did not measure direction and level of attention. Future studies could provide greater insight into these covert attentional processes using eye-tracking and other process-tracing techniques [34, 99, 100]. Advances in the joint modeling of behavioral and neural data [e.g., 10, 101] may also provide novel avenues for elucidating the cognitive processes involved.…”
Section: Discussion
Confidence: 99%
“…Because the quantity of information in many decisions far outstrips an individual's information processing capacity, selective attention is required to maintain representations of information one piece at a time, essentially highlighting different frames at different times during choice (Kiyonaga & Egner, 2013; Moore & Zirnsak, 2017; Myers, Stokes, & Nobre, 2017; Smith & Krajbich, 2019). While this can theoretically result in a process of sequential frame selection using rational goal-driven attention, attention is also frequently exogenously constrained by the environment: What is attended is as often as not stimulus-driven as opposed to goal-directed (Corbetta & Shulman, 2002; Vanunu, Hotaling, Le Pelley, & Newell, 2021). Importantly, these attentional processes may interact in dynamic ways over time: the decision context primes particular frames of evaluation (Diederich & Trueblood, 2018; Maier, Raja Beharelle, Polanía, Ruff, & Hare, 2020), prior frames differentially enhance and constrain the accessibility of subsequent framings (Johnson, Häubl, & Keinan, 2007; Nook, Satpute, & Ochsner, 2021), and executed decisions frame and bias post-choice evaluation (Chaxel, Russo, & Kerimi, 2013; Navajas, Bahrami, & Latham, 2016).…”
Section: Introduction
Confidence: 99%