The auditory processing of physical stimulus features can be measured by the mismatch negativity. Past studies have shown that higher-order stimulus features also elicit a mismatch negativity. In some studies, a second component, termed the late mismatch negativity, has been observed, yet the functional significance of this component remains unclear. We presented two-tone pattern stimuli governed by an abstract rule to healthy adults. As expected, violations of the tone pattern elicited a significant mismatch negativity peaking at 146 ms, but a significant late mismatch negativity at around 340 ms was also observed. These findings show that the violation of an abstract rule elicits both an early and a late mismatch negativity. The late mismatch negativity might be triggered on the basis of auditory rule-extraction processes and reflect a transfer of rules to long-term memory.
This report has two main purposes. First, we combine well-known analytical approaches to conduct a comprehensive assessment of agreement and correlation of rating pairs and to disentangle these often-confused concepts, providing a best-practice example on concrete data and a tutorial for future reference. Second, we explore whether a screening questionnaire developed for use with parents can be reliably employed with daycare teachers when assessing early expressive vocabulary. A total of 53 vocabulary rating pairs (34 parent–teacher and 19 mother–father pairs) collected for two-year-old children (12 bilingual) are evaluated. First, inter-rater reliability both within and across subgroups is assessed using the intra-class correlation coefficient (ICC). Next, based on this analysis of reliability and on the test–retest reliability of the employed tool, inter-rater agreement is analyzed, and the magnitude and direction of rating differences are considered. Finally, Pearson correlation coefficients of standardized vocabulary scores are calculated and compared across subgroups. The results underline the necessity of distinguishing between reliability measures, agreement, and correlation. They also demonstrate the impact of the employed reliability measure on agreement evaluations. This study provides evidence that parent–teacher ratings of children's early vocabulary can achieve agreement and correlation comparable to those of mother–father ratings on the assessed vocabulary scale. Bilingualism of the evaluated child decreased the likelihood of raters' agreement. We conclude that future reports of agreement, correlation, and reliability of ratings will benefit from better definition of terms and stricter methodological approaches. The methodological tutorial provided here holds the potential to increase comparability across empirical reports and can help improve research practices and knowledge transfer to educational and therapeutic settings.
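The analysis pipeline described above (ICC for inter-rater reliability, then Pearson correlation of the paired scores) can be sketched in a few lines. This is a minimal illustration, not the authors' actual analysis code: the rating values below are invented for demonstration, and ICC(2,1) (two-way random effects, absolute agreement, single rater) is assumed as the ICC variant, since the abstract does not specify which form was used.

```python
import numpy as np
from scipy.stats import pearsonr

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: array of shape (n_subjects, k_raters).
    Computed from the standard two-way ANOVA mean squares.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # one mean per rated child
    col_means = ratings.mean(axis=0)   # one mean per rater

    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical vocabulary scores: column 0 = rater A, column 1 = rater B
pairs = np.array([[25, 28], [40, 35], [55, 60], [30, 33], [70, 66]])

icc = icc2_1(pairs)                       # reliability of the rating pairs
r, p = pearsonr(pairs[:, 0], pairs[:, 1]) # correlation of the two raters
```

High ICC and high Pearson r can diverge in practice: a rater who systematically scores every child 10 points higher yields a perfect correlation but lowered absolute agreement, which is exactly why the abstract insists on keeping the two concepts apart.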
This article investigates the cross-linguistic comparability of the newly developed lexical assessment tool Cross-linguistic Lexical Tasks (LITMUS-CLT). LITMUS-CLT is a part of the Language Impairment Testing in Multilingual Settings (LITMUS) battery (Armon-Lotem, de Jong & Meir, 2015). Here we analyse results on receptive and expressive word knowledge tasks for nouns and verbs across 17 languages from eight different language families: Baltic (Lithuanian), Bantu (isiXhosa), Finnic (Finnish), Germanic (Afrikaans, British English, South African English, German, Luxembourgish, Norwegian, Swedish), Romance (Catalan, Italian), Semitic (Hebrew), Slavic (Polish, Serbian, Slovak) and Turkic (Turkish). The participants were 639 monolingual children aged 3;0-6;11 living in 15 different countries. Differences in vocabulary size were small between 16 of the languages, but isiXhosa-speaking children knew significantly fewer words than speakers of the other languages. There was a robust effect of word class: accuracy was higher for nouns than verbs. Furthermore, comprehension was more advanced than production. Results are discussed in the context of cross-linguistic comparisons of lexical development in monolingual and bilingual populations.