Abstract: Purpose: To search for reliability and validity evidence for the Montreal Communication Evaluation Brief Battery (MEC B) for adults with right brain damage. Methods: Three hundred twenty-four healthy adults and 26 adults with right brain damage, aged 19-75 years, with two or more years of education were evaluated with the MEC B. The battery contains nine tasks that aim to evaluate communicative abilities such as discourse, prosody, and lexical-semantic and pragmatic processing. Two sources of reliability evidence were used:…
“…It is indeed possible that in clinical samples with pragmatic difficulties the correlation between the two tests would be based on a wider range of scores, hence tighter and stronger in magnitude. Finally, it should be noted that the items included in the APACS Brief Remote test are not a subset of the items of the APACS test, but rather a new set of items, administered in another modality: the obtained correlation value aligns with coefficients reported in the literature between another full-length battery of pragmatics, namely the MEC, and its shorter version based on a novel set of items, namely the MEC B (across MEC scales, .05 < ρ < .69; Casarin et al., 2020). However, we cannot rule out that the APACS Brief Remote and the APACS test have a different granularity in assessing pragmatics, in terms of fine description of the pragmatic profile.…”
Section: Discussion (supporting)
confidence: 77%
“…Several tools for rapid cognitive screening (i.e., within a 15-minute duration) are available to use when extensive neuropsychological evaluation is not necessary, and they are considered an optimal choice to monitor the course of an illness or treatment outcome (Mondini et al., 2022). Yet there are no tools for rapid pragmatic assessment (the MEC Brief, available only for Brazilian Portuguese, still has a considerable duration of 25-40 minutes; Casarin et al., 2020), and the available ones can hardly be incorporated into routine assessment of patients with suspected pragmatic language disorder. A further issue is represented by the poor availability of tests with alternate forms.…”
Healthcare services require rapid assessment tools, as well as the possibility of using them flexibly in different contexts, such as those experienced during the COVID-19 pandemic, which favor remote interaction over traditional care. These needs become especially challenging when assessing language and communication skills, for which few tools exist. This work aimed to develop and evaluate the psychometric properties of a novel test for the rapid tele-assessment of pragmatic skills in Italian-speaking individuals, including an alternate form to allow for monitoring and follow-up. Inspired by Gricean pragmatics and modelled after the already validated Assessment of Pragmatic Abilities and Cognitive Substrates (APACS) in-person test, the new APACS Brief Remote test includes 18 original items assessing discourse and non-literal language understanding in expressive and receptive modalities. The test lasts approximately 10 minutes and is suited for videoconference administration. Results from a sample of 141 healthy participants indicate that both reliability (internal consistency, test-retest, and inter-rater) and validity (measured via APACS and verbal and cognitive tests) of the APACS Brief Remote are adequate. The alternate form of the test can be considered equivalent. Among demographic variables, the analysis especially highlighted the role of age. Perceived experience with the videoconference administration was positive, supporting the feasibility of the APACS Brief Remote across ages and educational levels. The APACS Brief Remote represents a useful tool to promote evidence-based tele-assessment practices in the domain of pragmatics, for instance for online follow-up assessment, in a vast range of clinical conditions that might cause communicative difficulties.
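The internal-consistency evidence mentioned above is conventionally summarized with Cronbach's alpha. As a hedged illustration (the function and sample data below are hypothetical and not taken from the study), a minimal computation over a participants-by-items score matrix can be sketched as:

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of rows (one row of item scores per participant).

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(scores[0])                       # number of items
    items = list(zip(*scores))               # transpose: one tuple per item
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_var / total_var)
```

With perfectly correlated items (e.g., two identical columns), the function returns 1.0; in practice values around .70 or higher are usually read as adequate internal consistency, which matches the "adequate reliability" claim reported for the APACS Brief Remote.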
“…Extensive aphasia assessment batteries may be too tiring for patients with complex clinical conditions (Casarin et al, 2020). After an acute stroke, for instance, many patients are unable to undergo prolonged evaluations (Marshall & Wright, 2007).…”
Background
Evaluating patients in the acute phase of brain damage allows for the early detection of cognitive and linguistic impairments and the implementation of more effective interventions. However, few cross-cultural instruments are available for the bedside assessment of language abilities. The aim of this study was to develop a brief assessment instrument and evaluate its content validity.
Methods
Stimuli for the new assessment instrument were selected from the M1-Alpha and MTL-BR batteries (Stage 1). Sixty-five images were redesigned and analyzed by non-expert judges (Stage 2). This was followed by the analysis of expert judges (Stage 3), where nine speech pathologists with doctoral training and experience in aphasiology and/or linguistics evaluated the images, words, nonwords, and phrases for inclusion in the instrument. Two pilot studies (Stage 4) were then conducted in order to identify any remaining errors in the instrument and scoring instructions.
Results
Sixty of the 65 figures examined by the judges achieved inter-rater agreement rates of at least 80%. Modifications were suggested to 22 images, which were therefore reanalyzed by the judges, who reached high levels of inter-rater agreement (AC1 = 0.98 [CI = 0.96–1]). New types of stimuli, such as nonwords and irregular words, were also added to the Brief Battery and favorably evaluated by the expert judges. Optional tasks were also developed for specific diagnostic situations. After the correction of errors detected in Stage 4, the final version of the instrument was obtained.
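The agreement coefficient reported above, Gwet's AC1, corrects observed agreement for chance using the category marginals. As a sketch of the two-rater case (the function name and toy ratings are illustrative, not the study's data), it can be computed as:

```python
from collections import Counter

def gwet_ac1(rater1, rater2):
    """Gwet's AC1 chance-corrected agreement for two raters.

    AC1 = (pa - pe) / (1 - pe), where pa is observed agreement and
    pe = sum_k pi_k * (1 - pi_k) / (q - 1), with pi_k the average
    marginal proportion of category k. Requires q >= 2 categories.
    """
    n = len(rater1)
    cats = sorted(set(rater1) | set(rater2))
    q = len(cats)
    pa = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    pi = {k: (c1[k] + c2[k]) / (2 * n) for k in cats}
    pe = sum(p * (1 - p) for p in pi.values()) / (q - 1)
    return (pa - pe) / (1 - pe)
```

Unlike Cohen's kappa, AC1 is less sensitive to skewed category prevalences, which is one reason it is often preferred for expert-judge content-validity ratings like those described here.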
Conclusion
This study confirmed the content validity of the Brief MTL-BR Battery. The method used in this investigation was effective and can be used in future studies to develop brief instruments based on preexisting assessment batteries.