2019
DOI: 10.1101/771196
Preprint

Mental operations in rhythm: motor-to-sensory transformation mediates imagined singing

Abstract: What enables us to think verbally? We hypothesized that the interaction between motor and sensory systems induces speech representation without external stimulation or overt articulation, and that this motor-to-sensory transformation forms the neural basis that enables us to think verbally. Analogous to the frequency tracking of neural responses to auditory stimuli, we asked participants to imagine singing lyrics of famous songs rhythmically while their neural electromagnetic signals were recorded using magnetoencephalography…

Cited by 2 publications (2 citation statements)
References 89 publications (97 reference statements)
“…Previous studies have suggested that metamodal engagement is a result of top-down processes such as mental imagery rather than bottom-up processes (Lacey et al, 2009). However, given that in our study, subjects in both algorithm groups were equally proficient at recognizing VT stimuli as words, mental-imagery accounts (Borst and Gelder, 2016; Li et al, 2020; Oh et al, 2013; Tian et al, 2018) in this case would predict that both groups should engage auditory perceptual representations in the mid-STG. Yet, we found no evidence that the token-based VT stimuli engaged this area after training in the same way as auditory speech (see also (Siuda-Krzywicka et al, 2016; Striem-Amit et al, 2012)).…”
Section: Discussion
confidence: 65%
“…This finding aligns well with the notion that the beta band plays an important role in endogenous processes, notably in relation with top-down control, in particular in the context of language (Arnal and Giraud, 2012; Bowers et al, 2019; Fontolan et al, 2014; Pefkou et al, 2017). Although repeating a heard or written word engages automatic, almost reflex, neural routines, imagined speech is a more voluntary action requiring enhanced endogenous control from action planning frontal regions (Buschman et al, 2012; Li et al, 2020; Morillon et al, 2019). These results must however be taken with caution as spurious CFC can result from non-linearity, non-stationarity, and power changes across conditions in the signal (Aru et al, 2015; Hyafil, 2015).…”
Section: Discussion
confidence: 99%