<b><i>Introduction:</i></b> Difficulty swallowing (dysphagia) occurs frequently in patients with neurological disorders and can lead to aspiration, choking, and malnutrition. Dysphagia is typically diagnosed using costly, invasive imaging procedures or subjective, qualitative bedside examinations. Wearable sensors are a promising alternative for noninvasive, objective measurement of physiological signals relevant to swallowing. An ongoing challenge with this approach is consolidating these complex signals into sensitive, clinically meaningful metrics of swallowing performance. To address this gap, we propose two novel digital monitoring tools that evaluate swallows using wearable sensor data and machine learning. <b><i>Methods:</i></b> Biometric swallowing and respiration signals from wearable, mechano-acoustic sensors were compared between patients with poststroke dysphagia and nondysphagic controls while swallowing foods and liquids of different consistencies, in accordance with the Mann Assessment of Swallowing Ability (MASA). Two machine learning approaches were developed to (1) classify the severity of impairment for each swallow, with model confidence ratings for transparent clinical decision support, and (2) compute a similarity measure of each swallow to nondysphagic performance. Task-specific models were trained using swallow kinematics and respiratory features from 505 swallows (321 from patients and 184 from controls). <b><i>Results:</i></b> These models provide sensitive metrics to gauge impairment on a per-swallow basis. Both approaches demonstrate intrasubject swallow variability and patient-specific changes that were not captured by the MASA alone. Sensor measures encoding respiratory-swallow coordination were important features relating to dysphagia presence and severity. Puree swallows exhibited greater differences from controls than saliva swallows or liquid sips (<i>p</i> < 0.037).
<b><i>Discussion:</i></b> Developing interpretable tools is critical to optimize the clinical utility of novel, sensor-based measurement techniques. The proof-of-concept models proposed here provide concrete, communicable evidence to track dysphagia recovery over time. With refined training schemes and real-world validation, these tools can be deployed to automatically measure and monitor swallowing in the clinic and community for patients across the impairment spectrum.
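The two approaches described in this abstract — per-swallow severity classification with a confidence rating, and a similarity score against nondysphagic performance — can be sketched roughly as follows. This is a minimal illustration with synthetic data and an assumed scikit-learn random-forest model; the feature set, model family, and similarity formula here are assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for swallow kinematic / respiratory features
# (e.g., swallow duration, apnea duration, respiratory-phase pattern).
# Cohort sizes mirror the abstract: 184 control and 321 patient swallows.
X_controls = rng.normal(0.0, 1.0, size=(184, 6))
X_patients = rng.normal(1.0, 1.2, size=(321, 6))
X = np.vstack([X_controls, X_patients])
y = np.array([0] * 184 + [1] * 321)  # 0 = nondysphagic, 1 = impaired

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# (1) Per-swallow impairment prediction with a model confidence rating.
new_swallow = rng.normal(1.0, 1.2, size=(1, 6))
proba = clf.predict_proba(new_swallow)[0]
confidence = proba.max()  # in [0.5, 1] for a two-class problem

# (2) Similarity to nondysphagic performance: here, a simple z-score
# distance to the control centroid, mapped into (0, 1].
centroid = X_controls.mean(axis=0)
spread = X_controls.std(axis=0)
z = np.abs((new_swallow[0] - centroid) / spread)
similarity = 1.0 / (1.0 + z.mean())  # 1 = indistinguishable from controls
```

A class-probability-based confidence rating is one common way to surface model uncertainty for clinical decision support; calibrated probabilities or conformal prediction would be natural refinements.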
<b><i>Purpose:</i></b> In today's digital world, text messaging is one of the most widely used ways that people stay connected. Although people with aphasia are reported to experience difficulties with texting, little information is available about how they actually do text. This study reports texting behaviors, such as the number and type of messages sent and the number of contacts individuals with aphasia text with. The relationships between texting behaviors and aphasia severity, including writing impairments, and social connectedness are explored. <b><i>Method:</i></b> Twenty participants were sampled from an ongoing randomized clinical trial investigating an electronic writing treatment for aphasia (Clinical Trials Identifier: NCT03773419). Participants provided consent for researchers to view and analyze texts sent and received over a 7-day period immediately prior to the assessment. Participants' text messages were recorded, transcribed verbatim, and coded. <b><i>Results:</i></b> Over the 7-day period, the number of contacts with whom participants texted ranged from 1 to 18. The mean number of text messages exchanged was 40.3 (<i>SD</i> = 48.24), with participants sending an average of 15.4 (<i>SD</i> = 23.45) texts and receiving an average of 24.9 (<i>SD</i> = 29.44) texts. Participants varied in the types of texts sent; some had a larger proportion of initiated texts, while others drafted more responses, either simple or elaborative in nature. There was no correlation between the total number of texting exchanges and the Western Aphasia Battery–Revised Aphasia Quotient (<i>r</i><sub>s</sub> = .13, <i>p</i> = .29) or the Western Aphasia Battery–Revised Writing subtest (<i>r</i><sub>s</sub> = .05, <i>p</i> = .42). There was also no correlation between the total number of texting exchanges and scores on measures of social connectedness. <b><i>Conclusions:</i></b> Texting behaviors of individuals with aphasia are widely variable. Demographics, severity of aphasia and writing, and social connectedness may not predict texting behaviors.
Therefore, it is clinically important to explore the unique texting abilities and preferences of each individual to meet their communication and social participation goals. Supplemental Material https://doi.org/10.23641/asha.14669664
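The null results above rest on Spearman's rank correlation (<i>r</i><sub>s</sub>), which tests for a monotonic association without assuming normality. A minimal sketch of that analysis, assuming SciPy and using made-up numbers (not study data):

```python
from scipy.stats import spearmanr

# Illustrative per-participant values only; these are not the study's data.
texts_exchanged = [3, 12, 40, 7, 95, 22, 5, 60, 18, 33]    # total texting exchanges
wab_r_aq = [71, 84, 62, 90, 55, 78, 88, 67, 73, 80]        # WAB-R Aphasia Quotient (0-100)

# Spearman's rho ranks both variables, then correlates the ranks.
rho, p_value = spearmanr(texts_exchanged, wab_r_aq)

# A small |rho| with p > .05 would mirror the study's finding of no
# significant association between texting volume and aphasia severity.
```

Spearman's statistic is the usual choice here because texting counts are heavily skewed (note the large standard deviations relative to the means in the abstract), which would violate Pearson's normality assumptions.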
<b><i>Purpose:</i></b> It is known that interpreter-mediated aphasia assessments may not provide the linguistic information that speech-language pathologists (SLPs) need to make accurate diagnoses and determine treatment goals. A lack of interpreter training in aphasia may lead to unintentional errors that affect the information the SLP gains from the assessment session. The purpose of our study was to understand the perceptions of SLPs and interpreters who collaborate in a medical setting and to develop a checklist to categorize and quantify the errors interpreters make. <b><i>Method:</i></b> In Phase 1 of the study, 38 hospital SLPs and 26 interpreters responded to survey questions about their experiences working with the other discipline. In Phase 2, eight Spanish-speaking interpreters and two Spanish-speaking participants with fluent aphasia took part in a standardized interpreter-mediated aphasia assessment. A bilingual SLP and a Spanish-speaking interpreter analyzed and coded the assessments for errors in the interpreters' behaviors. <b><i>Results:</i></b> Results from the survey demonstrated that both SLPs and interpreters would like the interpreters to have more education regarding the diagnosis of aphasia and an understanding of the SLP's goals during an aphasia assessment. A lack of time was considered the primary hindrance to educating interpreters during an evaluation session. The checklist included interpreter behaviors that could significantly impact the SLP's ability to diagnose aphasia: omission of speech/language information, meaning errors, and cueing. Positive behaviors noted were calling attention to patient errors and pointing out potentially confusing items. <b><i>Conclusions:</i></b> Education for both disciplines will enhance the accuracy of interpreter-mediated aphasia assessments. A checklist tool with specific examples of errors may be useful in educating not only experienced interpreters and SLPs but also students in both disciplines.