Research on speech processing often focuses on a phenomenon termed "entrainment", whereby the cortex shadows rhythmic acoustic information with oscillatory activity. Entrainment has been observed for a range of rhythms present in speech; in addition, synchronicity with abstract information (e.g., syntactic structures) has been reported. Entrainment accounts face two challenges: first, speech is not strictly rhythmic; second, synchronicity has been described with representations that lack a clear acoustic counterpart. We propose that apparent entrainment does not always result from acoustic information. Rather, internal rhythms may serve functions in the generation of abstract representations and predictions. While acoustics may often provide punctate opportunities for entrainment, internal rhythms may also live a life of their own, inferring and predicting information and leading to intrinsic synchronicity not to be counted as entrainment. This possibility may open up new research avenues in the psycho- and neurolinguistic study of language processing and language development.
Dynamic treatment regimes are fast becoming an important part of medicine, with the corresponding change in emphasis from treatment of the disease to treatment of the individual patient. Because of the limited number of trials evaluating personally tailored treatment sequences, inferring optimal treatment regimes from observational data has gained importance. Q-learning is a popular method for estimating the optimal treatment regime, originally in randomized trials but more recently also in observational data. Previous applications of Q-learning have largely been restricted to continuous utility end-points with linear relationships. This paper is the first both to extend the framework to discrete utilities and to move the modelling of covariates from linear models to more flexible generalized additive models (GAMs). Simulated-data results show that GAM-adapted Q-learning typically outperforms Q-learning with linear models, as well as other frequently used propensity-score-based methods, in terms of coverage and bias/MSE. This represents a promising step towards a more fully general Q-learning approach to estimating optimal dynamic treatment regimes.
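The backward-induction logic of Q-learning that this line of work builds on can be sketched as follows. This is a minimal illustration only: it uses ordinary least squares for the stage-wise Q-models (where the paper's contribution is to substitute GAMs), and all data, variable names, and the two-stage setup are simulated and hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Illustrative simulated two-stage observational data
x1 = rng.normal(size=n)                     # baseline covariate
a1 = rng.integers(0, 2, size=n)             # stage-1 treatment (0/1)
x2 = x1 + a1 * x1 + rng.normal(size=n)      # intermediate covariate
a2 = rng.integers(0, 2, size=n)             # stage-2 treatment (0/1)
y = x2 + a2 * x2 + rng.normal(size=n)       # final utility

def design(x, a):
    # Working Q-model: intercept, covariate, treatment, interaction
    return np.column_stack([np.ones_like(x), x, a, a * x])

def ols(X, target):
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return beta

# Stage 2: fit Q2(x2, a2) to the observed utility
beta2 = ols(design(x2, a2), y)
q2_0 = design(x2, np.zeros(n)) @ beta2      # predicted utility if a2 = 0
q2_1 = design(x2, np.ones(n)) @ beta2       # predicted utility if a2 = 1
rule2 = (q2_1 > q2_0).astype(int)           # estimated optimal stage-2 rule
pseudo = np.maximum(q2_0, q2_1)             # pseudo-outcome: value under that rule

# Stage 1: fit Q1(x1, a1) to the stage-2 pseudo-outcome
beta1 = ols(design(x1, a1), pseudo)
rule1 = (design(x1, np.ones(n)) @ beta1
         > design(x1, np.zeros(n)) @ beta1).astype(int)
```

The key design choice is the pseudo-outcome: the stage-1 model is fit to the predicted utility *under the optimal stage-2 rule*, not the observed utility, so earlier decisions account for optimal later behaviour. Replacing `design`/`ols` with a GAM fit relaxes the linearity assumption without changing this recursion.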
Ample evidence shows that the human brain carefully tracks acoustic temporal regularities in the input, perhaps by entraining cortical neural oscillations to the rate of the stimulation. To what extent the entrained oscillatory activity influences processing of upcoming auditory events remains debated. Here, we revisit a critical finding from Hickok et al. (2015) that demonstrated a clear impact of auditory entrainment on subsequent auditory detection. Participants were asked to detect tones embedded in stationary noise, following a noise that was amplitude modulated at 3 Hz. Tonal targets occurred at various phases relative to the preceding noise modulation. The original study (N = 5) showed that the detectability of the tones (presented at near‐threshold intensity) fluctuated cyclically at the same rate as the preceding noise modulation. We conducted an exact replication of the original paradigm (N = 23) and a conceptual replication using a shorter experimental procedure (N = 24). Neither experiment revealed significant entrainment effects at the group level. A restricted analysis on the subset of participants (36%) who did show the entrainment effect revealed no consistent phase alignment between detection facilitation and the preceding rhythmic modulation. Interestingly, both experiments showed group‐wide presence of a non‐cyclic behavioural pattern, wherein participants' detection of the tonal targets was lower at early and late time points of the target period. The two experiments highlight both the sensitivity of the task to elicit oscillatory entrainment and the striking individual variability in performance.
If meaning could be read from acoustics, or from the refraction rate of pyramidal cells innervated by the cochlea, everyone would be an omniglot. Speech does not contain sufficient acoustic cues to identify linguistic units such as morphemes, words, and phrases without prior knowledge. Our target article (Meyer, L., Sun, Y., & Martin, A. E. (2019). Synchronous, but not entrained: Exogenous and endogenous cortical rhythms of speech and language processing. Language,
How native and non‐native languages are represented in the brain is one of the most important questions in neurolinguistics. Much research has found that the similarity in neural activity between native and non‐native languages is influenced by factors such as age of acquisition, language proficiency, and language exposure in the non‐native language. Nevertheless, it is still unclear how the similarity between native and non‐native languages in orthographic transparency, a key factor that affects the cognitive and neural mechanisms of phonological access, modulates the cross‐language similarity in neural activation, and which brain regions show the modulatory effects of language distance in orthographic transparency. To address these questions, the present study used representational similarity analysis (RSA) to precisely estimate the neural pattern similarity between the native language and two non‐native languages in Uyghur‐Chinese‐English trilinguals, whose third language (i.e., English) was more similar to the native language (i.e., Uyghur) in orthography than to their second language (i.e., Chinese). Behavioral results revealed that subjects responded faster to words in the non‐native language with orthography more similar to their native language in the word naming task. More importantly, RSA revealed greater neural pattern similarity between Uyghur and English than between Uyghur and Chinese in selected brain areas for phonological processing, especially in the left hemisphere. Further analysis confirmed that those brain regions represented phonological information. These results provide direct neuroimaging evidence for the modulatory effect of language distance in orthographic transparency on cross‐language pattern similarity between native and non‐native languages during word reading.
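The core computation behind RSA-style cross-language comparison can be sketched as follows. This is a minimal illustration with synthetic data: the language labels, pattern shapes, and the choice of correlation-distance matrices compared by Spearman correlation are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_words, n_voxels = 20, 50

# Hypothetical activation patterns (words x voxels) for two languages;
# the second shares representational structure with the first by construction
patterns_uy = rng.normal(size=(n_words, n_voxels))                     # e.g., Uyghur
patterns_en = patterns_uy + 0.5 * rng.normal(size=(n_words, n_voxels)) # e.g., English

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between word patterns."""
    return 1.0 - np.corrcoef(patterns)

def rdm_similarity(rdm_a, rdm_b):
    """Spearman correlation of the two RDMs' upper triangles."""
    iu = np.triu_indices_from(rdm_a, k=1)
    # Rank-transform (double argsort), then take Pearson r of the ranks
    ranks_a = np.argsort(np.argsort(rdm_a[iu])).astype(float)
    ranks_b = np.argsort(np.argsort(rdm_b[iu])).astype(float)
    return np.corrcoef(ranks_a, ranks_b)[0, 1]

sim = rdm_similarity(rdm(patterns_uy), rdm(patterns_en))
```

Comparing RDMs rather than raw patterns is what makes the analysis cross-language: each language's pattern geometry is summarized within itself first, so the two languages need not share a voxel-wise alignment to be compared.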