These results provide group-level evidence for the efficacy of SFA, as well as preliminary estimates of the naming performance benefit conferred by varying dosages of SFA. The results also provide promising, previously unreported evidence of potential person-level predictors of SFA treatment response.
Purpose Aphasia is a language disorder, caused by acquired brain injury, that generally involves difficulty naming objects. Naming ability is assessed by measuring picture naming, and models of naming performance have mostly focused on accuracy, excluding valuable response time (RT) information. Previous approaches have therefore ignored the issue of processing efficiency, defined here in terms of the optimal RT cutoff: the shortest deadline at which individual people with aphasia produce their best possible naming accuracy. The goals of this study were therefore to (a) develop a novel model of aphasia picture naming that accurately accounts for RT distributions across response types, (b) use this model to estimate the optimal RT cutoff for individual people with aphasia, and (c) explore the relationships among optimal RT cutoff, accuracy, naming ability, and aphasia severity. Method A total of 4,021 naming trials across 10 people with aphasia were scored for accuracy and onset RT. Data were fit using a novel ex-Gaussian multinomial RT model, which was then used to characterize individual optimal RT cutoffs. Results Overall, the model fit the empirical data well and provided reliable individual estimates of the optimal RT cutoff in picture naming. Optimal cutoffs ranged between approximately 5 and 10 s, which has important implications for assessment and treatment. There was no direct relationship among aphasia severity, naming RT, and optimal RT cutoff. Conclusion The multinomial ex-Gaussian modeling approach appears to be a promising and straightforward way to estimate optimal RT cutoffs in picture naming in aphasia. Limitations and future directions are discussed.
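The optimal RT cutoff described above is, in essence, a high quantile of an ex-Gaussian RT distribution. Below is a minimal sketch on synthetic data; the parameter values and the 99% coverage criterion are illustrative assumptions, not the study's multinomial model.

```python
import random

def ex_gaussian_sample(mu, sigma, tau, rng):
    """Draw one ex-Gaussian RT: a Gaussian stage plus an exponential tail."""
    return rng.gauss(mu, sigma) + rng.expovariate(1.0 / tau)

def optimal_cutoff(rts, coverage=0.99):
    """Shortest deadline that still captures `coverage` of the RTs."""
    ordered = sorted(rts)
    idx = min(len(ordered) - 1, int(coverage * len(ordered)))
    return ordered[idx]

rng = random.Random(42)
# Synthetic correct-response RTs in seconds (illustrative parameters only).
rts = [ex_gaussian_sample(2.0, 0.5, 1.5, rng) for _ in range(4000)]
cutoff = optimal_cutoff(rts)
print(round(cutoff, 2))  # a deadline in seconds, well beyond the mean RT
```

Because the exponential tail is heavy, the resulting deadline sits far above the mean RT, which is why accuracy-only scoring under short time limits can understate a person's best possible performance.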
Purpose Aphasia intervention research aims to improve communication and quality of life outcomes for people with aphasia. However, few studies have evaluated the translation and implementation of evidence-based aphasia interventions in clinical practice. Treatment dosage may be difficult to translate to clinical settings, and a mismatch between dosage in research and in clinical practice threatens to attenuate intervention effectiveness. The purpose of this study was to quantify a potential research–practice dosage gap in outpatient aphasia rehabilitation. Method This study used a two-part approach. First, we estimated clinical treatment dosage in an episode of care (i.e., treatment provided from outpatient assessment to discharge) using utilization data from a regional provider in the United States. Second, we undertook a scoping review of aphasia interventions published from 2009 to 2019 to estimate the typical dosage used in the current aphasia literature. Results Outpatient clinical episodes of care included a median of 10 treatment sessions and a mean of 14.8 sessions (interquartile range: 5–20 sessions). Sessions occurred 1–2 times a week over 4–14 weeks. The median total treatment time was 7.5 hr (interquartile range: 3.75–15 hr). In contrast, published interventions administered a greater treatment dosage: a median of 20 hr of treatment (interquartile range: 12–30 hr) over 15 sessions (interquartile range: 10–24 sessions), approximately 3 times per week. Conclusions Results demonstrate a meaningful research–practice dosage gap, particularly in total treatment hours and weekly treatment intensity. This gap highlights the potential for attenuation of effectiveness from research to outpatient settings. Future translational research should consider clinical dosage constraints and take steps to facilitate intervention implementation, particularly with regard to dosage.
Conversely, health care advocacy and continued development of alternative delivery methods are necessary for the successful implementation of treatments with dosage that is incompatible with current clinical contexts. Pragmatic, implementation-focused trials are recommended to evaluate and optimize treatment effectiveness in outpatient clinical settings. Supplemental Material https://doi.org/10.23641/asha.15161568
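The dosage summaries above are simple order statistics. A minimal sketch with made-up session counts follows; the values are invented (chosen only so the mean and median echo the reported 14.8 and 10), not the study's data, and quantile conventions differ across software, so IQR endpoints can vary.

```python
import statistics

# Hypothetical session counts for ten episodes of care (illustrative only).
sessions = [3, 5, 6, 8, 10, 10, 12, 18, 20, 56]

median = statistics.median(sessions)
# quantiles(n=4) returns the three quartile cut points (exclusive method).
q1, q2, q3 = statistics.quantiles(sessions, n=4)
print(median, (q1, q3))  # median with interquartile range endpoints
```

Note how the single long episode (56 sessions) pulls the mean well above the median, which is why skewed utilization data are usually summarized with medians and interquartile ranges rather than means.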
Purpose: The external validity of aphasia treatment research relies on diverse and representative participants. The purposes of this study were (a) to examine whether reporting of participant age, sex, and race/ethnicity has improved since Ellis (2009) and (b) to evaluate whether these demographic variables were consistent with population-level estimates of stroke survivor demographics in the United States. Method: A scoping review examined U.S.-based aphasia treatment studies published between 2009 and 2019 and characterized the percentage of studies reporting age, sex, and race/ethnicity. Summary statistics for these variables were calculated and compared statistically with a population-based study of stroke survivors. Results: Overall, 97.1% of studies reported age, 93.5% reported sex, and 28.1% reported race and/or ethnicity. Within reporting studies, mean participant age was 58.04 years; 61.6% of participants were men and 38.4% were women; 86.5% of participants were White, 11.0% were Black, 2.0% were Hispanic/Latino, and 0.5% fell into other racial categories. All three variables differed statistically from the population-based estimates of Kissela et al. (2012). Discussion: Despite being highlighted as an issue by Ellis (2009), less than 30% of recent aphasia treatment studies reported race or ethnicity, and participants do not appear to be demographically representative of stroke survivors living in the United States. These issues may negatively impact the ecological validity of aphasia treatment research. Aphasia researchers should more consistently report participant race and ethnicity and follow current guidelines for increasing the demographic representation of women and minorities.
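The demographic comparison above amounts to testing sample proportions against population-level values. A minimal sketch of a one-proportion z-test follows; the sample size and the 48% population figure are invented for illustration, not values from the study or from Kissela et al. (2012).

```python
import math

def one_proportion_z(successes, n, p0):
    """Two-sided z-test of a sample proportion against a population value."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)          # standard error under H0
    z = (p_hat - p0) / se
    # Two-sided p value from the standard normal CDF via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative numbers only: 616 men among 1,000 participants, tested
# against a hypothetical population estimate of 48% male stroke survivors.
z, p = one_proportion_z(616, 1000, 0.48)
print(round(z, 2), p < 0.05)
```

With aggregated counts this large, even modest percentage-point gaps between sample and population yield very large z statistics, which is consistent with all three variables differing significantly in the abstract above.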
This mini review is aimed at a clinician-scientist seeking to understand the role of oscillations in neural processing and their functional relevance in speech and music perception. We present an overview of neural oscillations, methods used to study them, and their functional relevance with respect to music processing, aging, hearing loss, and disorders affecting speech and language. We first review the oscillatory frequency bands and their associations with speech and music processing. Next, we describe commonly used metrics for quantifying neural oscillations, briefly touching upon the still-debated mechanisms underpinning oscillatory alignment. Following this, we highlight key findings from research on neural oscillations in speech and music perception, as well as contributions of this work to our understanding of disordered perception in clinical populations. Finally, we conclude with a look toward the future of oscillatory research in speech and music perception, including promising methods and potential avenues for future work. We note that the intention of this mini review is not to systematically review all literature on cortical tracking of speech and music. Rather, we seek to provide the clinician-scientist with foundational information that can be used to evaluate and design research studies targeting the functional role of oscillations in speech and music processing in typical and clinical populations.
Purpose The purpose of this study was to develop and pilot a novel treatment framework called BEARS (Balancing Effort, Accuracy, and Response Speed). People with aphasia (PWA) have been shown to maladaptively balance speed and accuracy during language tasks. BEARS is designed to train PWA to balance speed–accuracy trade-offs and improve system calibration (i.e., to adaptively match system use with its current capability), which was hypothesized to improve treatment outcomes by maximizing retrieval practice and minimizing error learning. In this study, BEARS was applied in the context of a semantically oriented anomia treatment based on semantic feature verification (SFV). Method Nine PWA received 25 hr of treatment in a multiple-baseline single-case series design. BEARS + SFV combined computer-based SFV with clinician-provided BEARS metacognitive training. Naming probe accuracy, efficiency, and proportion of “pass” responses on inaccurate trials were analyzed using Bayesian generalized linear mixed-effects models. Generalization to discourse and correlations between practice efficiency and treatment outcomes were also assessed. Results Participants improved on naming probe accuracy and efficiency of treated and untreated items, although untreated item gains could not be distinguished from the effects of repeated exposure. There were no improvements on discourse performance, but participants demonstrated improved system calibration based on their performance on inaccurate treatment trials, with an increasing proportion of “pass” responses compared to paraphasia or timeout nonresponses. In addition, levels of practice efficiency during treatment were positively correlated with treatment outcomes, suggesting that improved practice efficiency promoted greater treatment generalization and improved naming efficiency. 
Conclusions BEARS is a promising, theoretically motivated treatment framework for addressing the interplay between effort, accuracy, and processing speed in aphasia. This study establishes the feasibility of BEARS + SFV and provides preliminary evidence for its efficacy. This study highlights the importance of considering processing efficiency in anomia treatment, in addition to performance accuracy. Supplemental Material https://doi.org/10.23641/asha.14935812
Purpose: Small-N studies are the dominant study design supporting evidence-based interventions in communication sciences and disorders, including treatments for aphasia and related disorders. However, there is little guidance for conducting reproducible analyses or selecting appropriate effect sizes in small-N studies, which has implications for scientific review, rigor, and replication. This tutorial aims to (a) demonstrate how to conduct reproducible analyses using effect sizes common to research in aphasia and related disorders and (b) provide a conceptual discussion to improve the reader's understanding of these effect sizes. Method: We provide a tutorial on reproducible analyses of small-N designs in the statistical programming language R, using published data from Wambaugh et al. (2017). In addition, we discuss the strengths, weaknesses, reporting requirements, and impact of experimental design decisions on effect sizes common to this body of research. Results: Reproducible code demonstrates the implementation and comparison of within-case standardized mean difference, proportion of maximal gain, tau-U, and frequentist and Bayesian mixed-effects models. Data, code, and an interactive web application are available as a resource for researchers, clinicians, and students. Conclusions: Pursuing reproducible research is key to promoting transparency in small-N treatment research. Researchers and clinicians must understand the properties of common effect size measures in order to select appropriate measures and act as informed consumers of small-N studies. Together, a commitment to reproducibility and a keen understanding of effect sizes can improve the scientific rigor and synthesis of the evidence supporting clinical services in aphasiology and in communication sciences and disorders more broadly. Supplemental Material and Open Science Form: https://doi.org/10.23641/asha.21699476
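Two of the effect sizes named above can be sketched in a few lines, shown here in Python rather than the tutorial's R for brevity. The probe scores below are invented, not from Wambaugh et al. (2017), and the scaling convention for the within-case standardized mean difference varies across studies.

```python
import statistics

baseline = [2, 3, 2, 4, 3]      # hypothetical baseline probe scores
treatment = [5, 6, 7, 7, 8, 9]  # hypothetical treatment-phase probe scores

# Within-case standardized mean difference: change in phase means scaled
# by the baseline standard deviation (one common convention).
d = (statistics.mean(treatment) - statistics.mean(baseline)) / statistics.stdev(baseline)

# Basic tau (phase nonoverlap): improving pairs minus deteriorating pairs
# over all baseline-treatment pairs; tau-U variants additionally adjust
# for baseline trend.
pairs = [(b, t) for b in baseline for t in treatment]
tau = (sum(t > b for b, t in pairs) - sum(t < b for b, t in pairs)) / len(pairs)

print(round(d, 2), tau)
```

The two measures answer different questions: the standardized mean difference scales the size of the gain against baseline variability, while tau only counts the direction of baseline-to-treatment comparisons, so complete nonoverlap caps it at 1.0 regardless of how large the gain is.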
Purpose This meta-analysis synthesizes published studies using “treatment of underlying forms” (TUF) for sentence-level deficits in people with aphasia (PWA). The study aims were to examine group-level evidence for TUF efficacy; to characterize the effects of treatment-related variables (sentence structural family and complexity; treatment dose) in relation to the Complexity Account of Treatment Efficacy (CATE) hypothesis; and to examine the effects of person-level variables (aphasia severity, sentence comprehension impairment, and time postonset of aphasia) on TUF response. Method Data from 13 single-subject, multiple-baseline TUF studies, including 46 PWA, were analyzed. Bayesian generalized linear mixed-effects interrupted time series models were used to assess the effect of treatment-related variables on probe accuracy during baseline and treatment. The moderating influence of person-level variables on TUF response was also investigated. Results The results provide group-level evidence for TUF efficacy, demonstrating increased probe accuracy during treatment compared with baseline phases. Greater amounts of TUF were associated with larger increases in accuracy, with greater gains for treated than for untreated sentences. The findings revealed generalization effects for sentences that were of the same family as, but less complex than, treated sentences. Aphasia severity may moderate TUF response, with people with milder aphasia demonstrating greater gains than people with more severe aphasia. Sentence comprehension performance did not moderate TUF response. Greater time postonset of aphasia was associated with smaller improvements for treated sentences but not for untreated sentences. Conclusions Our results provide generalizable group-level evidence of TUF efficacy. Treatment and generalization responses were consistent with the CATE hypothesis.
Model results also identified person-level moderators of TUF response (aphasia severity, time postonset of aphasia) and preliminary estimates of the effects of varying amounts of TUF for treated and untreated sentences. Taken together, these findings add to the TUF evidence base and may guide future selection of TUF treatment candidates. Supplemental Material https://doi.org/10.23641/asha.16828630