2010
DOI: 10.1016/j.neuropsychologia.2009.09.029

Thinking of God moves attention

Cited by 90 publications (114 citation statements)
References 13 publications
“…or downward (e.g., "cellar") spatial associations (e.g., Bergen et al., 2007; Chasteen, Burdzy, & Pratt, 2010; Dudschig et al., 2012, 2013; Estes et al., 2008; Goodhew et al., 2014; Gozli et al., 2013; Quadflieg et al., 2011; Richardson et al., 2003; Verges & Duffy, 2009; Zhang et al., 2013), maps clearly onto the perceptual matching account described in the introduction: The visual target was described as appearing in either the "matching" location or a "mismatching" location. Thus, our theoretical conceptualization treated the congruence between the spatial association of the cue and the physical location of the target as a categorical factor.…”

Section: A Categorical Model of Spatial Coding
Citation type: mentioning (confidence: 83%)
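The design described in this excerpt reduces to a simple two-level coding rule. A minimal sketch, assuming a hypothetical word-association table and trial list (none of the items or names below come from the cited studies), of how cue-target congruence can be coded as a categorical factor:

```python
# Hypothetical sketch: coding cue-target congruence as a categorical factor.
# The association table and trials are illustrative, not the cited stimuli.

SPATIAL_ASSOCIATION = {"sky": "up", "sun": "up", "cellar": "down", "root": "down"}

def congruence(cue_word: str, target_location: str) -> str:
    """Label a trial 'congruent' when the cue word's spatial association
    matches the target's physical location, else 'incongruent'."""
    match = SPATIAL_ASSOCIATION[cue_word] == target_location
    return "congruent" if match else "incongruent"

trials = [("sky", "up"), ("sky", "down"), ("cellar", "down"), ("cellar", "up")]
for cue, location in trials:
    print(f"{cue:>7} word, {location:>4} target -> {congruence(cue, location)}")
```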
“…In contrast, a few recent studies have used a detection task instead, whereby participants simply pressed a button as soon as they detected the presence of a visual target (regardless of its identity). Interestingly, those studies found that spatial cue words facilitated target detection at their associated location (Chasteen et al., 2010; Dudschig et al., 2012; Gozli et al., 2013). The matching account may also explain this facilitated target detection, in that the detection task does not require an object code because the target's identity is not relevant to responding.…”

Section: A Graded Model of Spatial Coding and Interference
Citation type: mentioning (confidence: 99%)
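To make the task distinction in this excerpt concrete, here is a minimal sketch contrasting the two response rules; the function names and key mappings are assumptions for illustration, not procedures from the cited studies:

```python
from typing import Optional

def detection_response(target_present: bool) -> Optional[str]:
    # Detection task: press as soon as any target appears; its identity
    # never enters the response rule, so no object code is needed.
    return "press" if target_present else None

def identification_response(target_identity: str) -> str:
    # Identification task: the response depends on what the target is,
    # so an identity-to-key mapping (an object code) is required.
    return {"X": "left_key", "O": "right_key"}[target_identity]

print(detection_response(True))       # press
print(identification_response("O"))   # right_key
```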
“…Importantly, this occurred even though these symbols were entirely irrelevant to the detection task, and observers were explicitly told that the symbols did not predict the location of the upcoming target (see also Ristic & Kingstone, 2006; Ristic, Landry, & Kingstone, 2012). Similar findings have been observed for temporal words (e.g., tomorrow, yesterday; Weger & Pratt, 2008), words relating to concrete concepts (e.g., head, foot; Estes, Verges, & Barsalou, 2008), words relating to abstract concepts (e.g., god, devil; Chasteen, Burdzy, & Pratt, 2010), pictures relating to abstract concepts (e.g., liberal, conservative; Mills, Smith, Hibbing, & Dodd, 2015), numbers (Fischer, Castel, Dodd, & Pratt, 2003), and letters (Dodd, Van der Stigchel, Leghari, Fung, & Kingstone, 2008). Taken together, these findings indicate that a broad range of visual symbols can produce unintentional shifts of attention (but see Fattorini, Pinto, Rotondaro, & Doricchi, 2015).…”

Citation type: mentioning (confidence: 56%)
“…Of these, 364 were collated from items used in previous studies (Ansorge et al., 2013; Chasteen et al., 2010; Dudschig et al., 2013; Estes et al., 2008; Goodhew, McGaw, & Kidd, 2014; Gozli, Chasteen, & Pratt, 2013; Gozli, Chow, et al., 2013; Meier & Robinson, 2004; Šetić & Domijan, 2007), and were classified as having upward, downward, or no clear vertical spatial associations (neutral). A further 134 items that we developed were included, selected as likely to have upward, downward, or no clear vertical spatial associations.…”

Section: Item Selection
Citation type: mentioning (confidence: 99%)
“…Furthermore, humans appear to draw on concrete spatial layouts in order to describe and represent concepts (e.g., Boroditsky, Fuhrman, & McCormick, 2011). For example, English speakers describe someone who is sad as down, describe improvement as things looking up, and look forward to the future or back to the past. A growing body of studies documents the entwined relationship between concepts and space, in particular how activating word meaning can systematically shift visual attention in space (e.g., Ansorge, Khalid, & König, 2013; Chasteen, Burdzy, & Pratt, 2010; Dudschig, De la Vega, & Kaup, 2015; Dudschig, Souman, Lachmair, de la Vega, & Kaup, 2013; Estes, Verges, & Barsalou, 2008; Fischer, Castel, Dodd, & Pratt, 2003; Gozli, Chow, Chasteen, & Pratt, 2013; Louwerse & Jeuniaux, 2010; Meier & Robinson, 2004; Santiago, Lupiáñez, Pérez, & Funes, 2007; Šetić & Domijan, 2007; Weger & Pratt, 2008; Zwaan & Yaxley, 2003). For example, after reading a word associated with up (such as "sun" or "joy"), participants are faster to respond to subsequent visual targets above the center of the screen and slower to respond to targets below the center, whereas the reverse is true after reading a word associated with down (such as "basement" or "bleak").…”

Citation type: mentioning (confidence: 99%)
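The RT pattern described in this excerpt is typically quantified as the mean difference between incongruent and congruent cue-target pairings. A worked sketch with clearly hypothetical numbers (the RT values below are fabricated for illustration, not data from any cited study):

```python
from statistics import mean

# Hypothetical detection RTs in ms, keyed by (cue association, target location).
rts = {
    ("up", "up"):     [305, 298, 312],   # congruent
    ("up", "down"):   [331, 342, 325],   # incongruent
    ("down", "down"): [301, 310, 296],   # congruent
    ("down", "up"):   [338, 329, 344],   # incongruent
}

congruent   = [rt for (cue, loc), xs in rts.items() if cue == loc for rt in xs]
incongruent = [rt for (cue, loc), xs in rts.items() if cue != loc for rt in xs]

# The cueing effect: faster detection at the cue word's associated location.
print(f"cueing effect = {mean(incongruent) - mean(congruent):.1f} ms")  # ~31 ms
```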