Despite the numerous examples of anticipatory cognitive processes at micro and macro levels in many animal species, the idea that anticipation of specific words plays an integral role in real-time language processing has been contentious. Here we exploited a phonological regularity of English indefinite articles ('an' precedes nouns beginning with vowel sounds, whereas 'a' precedes nouns beginning with consonant sounds) in combination with event-related brain potential recordings from the human scalp to show that readers' brains can pre-activate individual words in a graded fashion, to a degree that can be estimated from the offline probability that each word is given as a continuation of the sentence fragment. These findings are evidence that readers use the words in a sentence (as cues to their world knowledge) to estimate relative likelihoods for upcoming words.
Event-related potentials (ERPs) and magnetic fields (ERFs) are typically analyzed via ANOVAs on mean activity in a priori time windows. Advances in computing power and statistics have produced an alternative: mass univariate analyses, which consist of thousands of statistical tests combined with powerful corrections for multiple comparisons. Such analyses are most useful when one has little a priori knowledge of effect locations or latencies, and for delineating effect boundaries. Mass univariate analyses complement and, at times, obviate traditional analyses. Here we review this approach as applied to ERP/ERF data, along with four methods for multiple comparison correction: strong control of the family-wise error rate (FWER) via permutation tests, weak control of the FWER via cluster-based permutation tests, false discovery rate control, and control of the generalized FWER. We end with recommendations for their use and introduce free MATLAB software for their implementation.
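The strong FWER control via permutation tests mentioned above is commonly implemented with a max-statistic approach: the largest absolute test statistic across all channels and time points is recorded on each permutation, and observed statistics are compared against that null distribution. The following is a minimal illustrative sketch in Python (not the MATLAB toolbox the abstract introduces); all names, the paired-design assumption, and the sign-flipping scheme are our own choices for the example:

```python
import numpy as np

def permutation_max_t(data_a, data_b, n_perm=2000, seed=0):
    """Mass univariate paired t-tests with max-statistic permutation
    correction (strong FWER control).

    data_a, data_b : arrays of shape (subjects, channels, times),
    one value per subject/channel/time point per condition.
    Returns the observed t map and FWER-corrected p-values.
    """
    rng = np.random.default_rng(seed)
    diff = data_a - data_b            # paired differences per subject
    n = diff.shape[0]

    def t_map(d):
        # one-sample t statistic at every (channel, time) point
        return d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(n))

    t_obs = t_map(diff)

    # Null distribution of the maximum |t| over the whole data matrix:
    # under exchangeability, each subject's difference wave can have
    # its sign flipped at random.
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=n)[:, None, None]
        max_null[i] = np.abs(t_map(diff * flips)).max()

    # Corrected p-value: fraction of permutations whose max statistic
    # meets or exceeds the observed |t| (with the usual +1 smoothing).
    exceed = (np.abs(t_obs)[None] <= max_null[:, None, None]).sum(axis=0)
    p_corr = (exceed + 1) / (n_perm + 1)
    return t_obs, p_corr
```

Because every point is compared against the same max-statistic null distribution, any point declared significant survives correction over all channels and time points simultaneously, which is what gives the procedure strong FWER control.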
Recent research has demonstrated that knowledge of real-world events plays an important role in guiding online language comprehension. The present study addresses the scope of event knowledge activation during the course of comprehension, specifically investigating whether activation is limited to those knowledge elements that align with the local linguistic context. The present study addresses this issue by analyzing event-related brain potentials (ERPs) recorded as participants read brief scenarios describing typical real-world events. Experiment 1 demonstrates that a contextually anomalous word elicits a reduced N400 if it is generally related to the described event, even when controlling for the degree of association of this word with individual words in the preceding context and with the expected continuation. Experiment 2 shows that this effect disappears when the discourse context is removed. These findings demonstrate that during the course of incremental comprehension, comprehenders activate general knowledge about the described event, even at points at which this knowledge would constitute an anomalous continuation of the linguistic stream. Generalized event knowledge activation contributes to mental representations of described events, is immediately available to influence language processing, and likely drives linguistic expectancy generation.
Mass univariate analysis is a relatively new approach to the study of ERPs/ERFs. It consists of a large number of statistical tests coupled with one of several powerful corrections for multiple comparisons. These corrections differ in their power and permissiveness. Moreover, some methods are not guaranteed to work or may be overly sensitive to uninteresting deviations from the null hypothesis. Here we report the results of simulations assessing the accuracy, permissiveness, and power of six popular multiple comparison corrections (permutation-based control of the family-wise error rate [FWER], weak control of the FWER via cluster-based permutation tests, permutation-based control of the generalized FWER, and three false discovery rate control procedures) using realistic ERP data. In addition, we examine the sensitivity of permutation tests to differences in population variance. These results should help researchers apply and interpret these procedures.
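Of the false discovery rate control procedures of the kind evaluated above, the classic Benjamini-Hochberg step-up procedure is the most widely used: p-values are sorted, each is compared to a linearly increasing threshold q*i/m, and all tests up to the largest p-value under its threshold are rejected. A minimal sketch (our own illustrative implementation, not the code used in the simulations):

```python
import numpy as np

def fdr_bh(p_values, q=0.05):
    """Benjamini-Hochberg step-up FDR procedure.

    p_values : array-like of uncorrected p-values (any shape).
    q        : desired false discovery rate.
    Returns a boolean mask (same shape) of rejected tests.
    """
    p = np.asarray(p_values, dtype=float).ravel()
    m = p.size
    order = np.argsort(p)
    # Step-up thresholds: q * i / m for the i-th smallest p-value
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        # Largest i with p_(i) <= q*i/m; reject everything up to it
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject.reshape(np.shape(p_values))
```

Note the step-up logic: a sorted p-value above its own threshold can still be rejected if some larger p-value falls under its threshold, which is what distinguishes Benjamini-Hochberg from a simple per-test cutoff.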