2018
DOI: 10.1162/jocn_a_01257
Decoding Digits and Dice with Magnetoencephalography: Evidence for a Shared Representation of Magnitude

Abstract: Numerical format describes the way magnitude is conveyed, for example, as a digit ("3") or Roman numeral ("III"). In the field of numerical cognition, there is an ongoing debate about whether magnitude representation is independent of numerical format. Here, we examine the time course of magnitude processing when using different symbolic formats. We presented participants with a series of digits and dice patterns corresponding to the magnitudes of 1 to 6 while performing a 1-back task on magnitude. Magnetoencepha…

Cited by 29 publications (30 citation statements)
References 50 publications
“…Using representational similarity analysis (RSA) (Kriegeskorte and Kievit, 2013), we replicated the previous finding (Spitzer et al., 2017; Teichmann et al., 2018) that patterns of neural activity across the scalp from ~100 ms onwards were increasingly dissimilar for numbers with more divergent magnitude; that is, codes for ‘3’ and ‘5’ were more dissimilar than those for ‘3’ and ‘4’ (Figure 1B, green line). This occurred irrespective of task framing (report higher vs. lower average) and category (orange vs. blue numbers), suggesting that neural signals encoded an abstract representation of magnitude and not solely a decision-related quantity such as choice certainty (Spitzer et al., 2017).…”
Section: Results (supporting)
confidence: 81%
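The RSA logic in the statement above — rank-correlating a neural dissimilarity matrix with a numerical-distance model — can be sketched on simulated data. The sensor count, noise level, and pattern construction below are illustrative assumptions, not the studies' MEG/EEG recordings:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Simulated sensor patterns for magnitudes 1-6 (64 hypothetical "sensors").
# Each pattern moves along a shared axis in proportion to magnitude, plus
# noise, so nearby magnitudes yield more similar patterns.
magnitudes = np.arange(1, 7)
axis = rng.normal(size=64)
patterns = np.array([m * axis + rng.normal(scale=1.0, size=64)
                     for m in magnitudes])

# Neural RDM: pairwise Euclidean distance between the six patterns
# (15 unique pairs in condensed form).
neural_rdm = pdist(patterns, metric="euclidean")

# Model RDM: absolute numerical distance |i - j| for each pair.
model_rdm = pdist(magnitudes[:, None].astype(float), metric="cityblock")

# RSA: rank-correlate the two RDMs. A positive rho means patterns for
# more divergent magnitudes are more dissimilar, as in the quoted finding.
rho, _ = spearmanr(neural_rdm, model_rdm)
```

With the simulated magnitude axis dominating the noise, `rho` comes out clearly positive, mirroring the reported distance effect.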
“…In scalp M/EEG signals, neural patterns evoked by Arabic digits vary continuously with numerical distance, such that multivariate signals for ‘3’ are more similar to those for ‘4’ than to those for ‘5’ (Spitzer et al., 2017; Teichmann et al., 2018).…”
Section: Introduction (mentioning)
confidence: 99%
“…Some low-level visual information may have been lost in the scrambling process, such as shape and curvature information, which is a strong cue for animacy (Levin, Takarae, Miner, & Keil, 2001; Schmidt, Hegele, & Fleming, 2017; Zachariou, Giacco, Ungerleider, & Yue, 2018). In MEG and EEG decoding studies, classification can be strongly driven by differences in object shape (Proklova et al., 2019), and silhouette similarity is often a strong predictor of the similarities between the earliest neural responses (Carlson et al., 2013; Grootswagers et al., 2019; Teichmann, Grootswagers, Carlson, & Rich, 2018; Wardle, Kriegeskorte, Grootswagers, Khaligh-Razavi, & Carlson, 2016). It is also important to note that while the texform images are not recognisable at the individual level, they can still be categorised (e.g., for animacy) above chance (Long et al., 2017).…”
Section: Discussion (mentioning)
confidence: 99%
“…We also used three low-level image feature control models (Figure 2, third row), which were created by correlating the vectorized experimental images. The models consisted of an image silhouette similarity model, which is based on the binary alpha layer of the stimuli and is a good predictor of differences in brain responses (Carlson et al., 2011; Teichmann, Grootswagers, Carlson, & Rich, 2018; Wardle, Kriegeskorte, Grootswagers, Khaligh-Razavi, & Carlson, 2016), a model based on the CIELAB colour values of the stimuli, and a model based on the difference in luminance of the stimuli.…”
Section: Representational Similarity Analysis (mentioning)
confidence: 99%
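A silhouette control model of the kind described in the statement above can be sketched as follows. The toy RGBA stimuli and the 0.5 alpha threshold are assumptions for illustration, not the study's images or parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Four hypothetical RGBA stimuli (height x width x 4 channels), used as
# stand-ins for the experimental images, which are not reproduced here.
stims = rng.random((4, 32, 32, 4))

# Silhouette model: binarise each image's alpha layer (object vs.
# background) and vectorise it, one row per image.
silhouettes = (stims[..., 3] > 0.5).astype(float).reshape(len(stims), -1)

# Dissimilarity for each image pair: 1 - Pearson correlation of the
# vectorised silhouettes, assembled into a symmetric model RDM.
n = len(silhouettes)
silhouette_rdm = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        r = np.corrcoef(silhouettes[i], silhouettes[j])[0, 1]
        silhouette_rdm[i, j] = silhouette_rdm[j, i] = 1.0 - r
```

The resulting matrix can then be compared against neural RDMs in the same way as any other candidate model in an RSA.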