In this article, we present a technical framework aimed at facilitating musical biofeedback research in poststroke movement rehabilitation. The framework comprises wireless wearable inertial sensors and software built with inexpensive, open-source tools. The software enables layered, adjustable music synthesis and includes a generic movement-music mapping module. Using this, we designed digital musical interactions for balance, sit-to-stand, and gait training. Preliminary trials with subacute stroke patients indicated that the interactions were clinically feasible. We also conducted expert interviews with a focus group of clinicians, who deemed the interactions meaningful and relevant to clinical protocols, with comprehensible (albeit sometimes unpleasant or disturbing) feedback for several patient types. System benchmarking showed sufficiently short loop delays (∼90 ms), a healthy sensing range (>9 m), and modest computational load (11.1% peak CPU usage on a quad-core processor). Future studies will focus on using this framework with patients to both develop the interactions further and measure their effects on motor learning, performance retention, and psychological factors to help gauge their true clinical potential.
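To make the idea of a generic movement-music mapping module more concrete, the sketch below (Python, with invented feature names, ranges, and values) shows how a single IMU-derived feature might be normalized against a calibrated range and linearly rescaled onto a synthesis parameter. It illustrates the general mapping pattern only and is not the published implementation.

```python
# Hypothetical sketch of a generic movement-to-music mapping stage:
# a sensor feature (e.g., a trunk lean angle from a wearable IMU) is
# normalized to a calibrated range and linearly mapped onto a synthesis
# parameter. All names and ranges here are illustrative assumptions.

def normalize(value, lo, hi):
    """Clamp a raw feature to [lo, hi] and scale it to 0..1."""
    value = max(lo, min(hi, value))
    return (value - lo) / (hi - lo)

def map_to_parameter(norm, out_lo, out_hi):
    """Linearly rescale a normalized feature to a synth parameter range."""
    return out_lo + norm * (out_hi - out_lo)

# Example: map a trunk lean of -5..30 degrees to a filter cutoff of 200..2000 Hz.
lean_deg = 12.0                       # would come from the wearable sensor
cutoff_hz = map_to_parameter(normalize(lean_deg, -5.0, 30.0), 200.0, 2000.0)
print(round(cutoff_hz, 1))            # -> 1074.3, sent on to the synthesis layer
```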
Interactive sonification has increasingly shown potential as a means of biofeedback to aid motor learning in movement rehabilitation. However, this application domain faces challenges related to the design of meaningful, task-relevant mappings as well as the aesthetic qualities of the sonic feedback. A recent mapping design approach uses conceptual metaphors based on image schemata and embodied music cognition. In this work, we developed a framework to facilitate the design and real-time exploration of rehabilitation-tailored mappings rooted in a specific set of music-based conceptual metaphors. The outcome was a prototype system integrating wireless inertial measurement, flexible real-time mapping control, and physical modelling-based musical sonification. We focus on the technical details of the system and demonstrate mappings created with it for two exercises. These will be iteratively honed and evaluated in upcoming user-centered studies. We believe our framework can be a useful tool for musical sonification design in motor learning applications.
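As a hedged illustration of a metaphor-based mapping of the kind described, the snippet below assumes a "more elevation = higher pitch" image schema: a limb elevation angle roughly estimated from accelerometer data is mapped onto a two-octave pitch range that a physical-modelling synthesizer could receive. All function names, constants, and values are assumptions, not the authors' code.

```python
# Illustrative sketch (not the prototype system itself) of a metaphor-based
# mapping: estimated limb elevation drives a MIDI-style pitch value that a
# physical-modelling synth could consume (e.g., forwarded over OSC).

import math

def accel_to_elevation(ax, ay, az):
    """Rough elevation estimate (degrees) from a static accelerometer reading."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

def elevation_to_pitch(elevation_deg, pitch_lo=48, pitch_hi=72):
    """Map 0..90 degrees of elevation onto a two-octave pitch range."""
    frac = max(0.0, min(90.0, elevation_deg)) / 90.0
    return pitch_lo + frac * (pitch_hi - pitch_lo)

elevation = accel_to_elevation(0.5, 0.0, 0.87)   # invented sensor sample (g)
print(round(elevation_to_pitch(elevation), 1))   # higher elevation -> higher pitch
```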
Interactive sonification of biomechanical quantities is gaining relevance as a motor learning aid in movement rehabilitation, as well as a monitoring tool. However, existing gaps in sonification research (issues related to meaning, aesthetics, and clinical effects) have prevented its widespread recognition and adoption in such applications. The incorporation of embodied principles and musical structures in sonification design has gradually become popular, particularly in applications related to human movement. In this study, we propose a general sonification model for the sit-to-stand (STS) transfer, an important activity of daily living. The model contains a fixed component, independent of the use case, which represents the rising motion of the body as an ascending melody using the physical model of a flute. In addition, a flexible component concurrently sonifies STS features of clinical interest in a particular rehabilitative/monitoring situation. Here, we chose to represent shank angular jerk and movement stoppages (freezes) through perceptually salient pitch modulations and bell sounds, respectively. We outline the details of our technical implementation of the model. We evaluated the model by means of a listening test experiment with 25 healthy participants, who were asked to identify six normal and simulated impaired STS patterns from sonified versions containing various combinations of the constituent mappings of the model. Overall, we found that the participants were able to classify the patterns accurately (86.67 ± 14.69% correct responses with the full model, 71.56% overall), confidently (64.95 ± 16.52% self-reported rating), and in a timely manner (response time: 4.28 ± 1.52 s). The amount of sonified kinematic information significantly impacted classification accuracy. The six STS patterns were also classified with significantly different accuracy depending on their kinematic characteristics. Learning effects were seen in the form of increased accuracy and confidence with repeated exposure to the sound sequences. We found no significant accuracy differences based on the participants' level of music training. Overall, we see our model as a concrete conceptual and technical starting point for STS sonification design catering to rehabilitative and clinical monitoring applications.
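The sketch below illustrates, with invented names and thresholds, how the fixed and flexible layers of such a model could be organized: STS progression selects a step of an ascending scale, shank angular jerk scales a pitch-modulation depth, and a sustained spell of low angular velocity flags a freeze that would trigger a bell. It is a conceptual sketch under these assumptions, not the implementation evaluated in the study.

```python
# Hedged two-layer sketch of an STS sonification model (invented constants):
# fixed layer -> ascending melody tracks rising progress; flexible layer ->
# jerk-driven pitch modulation and freeze detection for a bell trigger.

ASCENDING_SCALE = [60, 62, 64, 65, 67, 69, 71, 72]   # C major, MIDI note numbers

def melody_note(progress):
    """Fixed layer: map STS progress in 0..1 to a step of the ascending scale."""
    idx = min(int(progress * len(ASCENDING_SCALE)), len(ASCENDING_SCALE) - 1)
    return ASCENDING_SCALE[idx]

def modulation_depth(jerk, jerk_max=50.0):
    """Flexible layer: normalized angular jerk -> pitch modulation depth (semitones)."""
    return min(abs(jerk) / jerk_max, 1.0) * 2.0

def freeze_detected(velocities, threshold=2.0, min_samples=20):
    """Flexible layer: flag a stoppage if velocity stays low for a sustained spell."""
    recent = velocities[-min_samples:]
    return len(velocities) >= min_samples and all(abs(v) < threshold for v in recent)

print(melody_note(0.6), modulation_depth(30.0), freeze_detected([0.5] * 25))
```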
This study investigates the relationship between auditory pulse clarity and sensorimotor synchronization performance, along with the influence of musical training. Twenty-nine participants walked in place to looped drum samples with varying degrees of pulse clarity, which were generated by adding artificial reverberation and quantified through fluctuation spectrum peakiness. Experimental results showed that reducing auditory pulse clarity degraded phase matching, producing significantly higher means and standard deviations of asynchrony across musical sophistication groups. Referent period matching ability was also degraded, with non-musicians affected more than musicians. Subjective ratings of required active concentration also increased with decreasing pulse clarity. These findings point to the importance of clear, distinct pulses for timing performance in synchronization tasks such as music and dance.
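For readers unfamiliar with asynchrony measures, the toy example below (with made-up onset times and period) shows how the mean and standard deviation of signed asynchronies between step onsets and the nearest metronomic beat can be computed; it illustrates the metric only and is not the analysis pipeline used in the study.

```python
# Illustrative asynchrony computation: each step onset is compared with its
# nearest beat, and the mean and standard deviation of the signed offsets
# summarize phase-matching performance. Onset times here are invented.

import statistics

def asynchronies(step_onsets, period, first_beat=0.0):
    """Signed offset (s) of each onset from its nearest beat."""
    offsets = []
    for t in step_onsets:
        n = round((t - first_beat) / period)
        offsets.append(t - (first_beat + n * period))
    return offsets

steps = [0.03, 0.52, 1.06, 1.49, 2.05]        # invented onset times (s)
asyncs = asynchronies(steps, period=0.5)
print(round(statistics.mean(asyncs), 3), round(statistics.stdev(asyncs), 3))
```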
Auditory feedback has previously been explored as a tool to enhance patient awareness of gait kinematics during rehabilitation. In this study, we devised and tested a novel set of concurrent feedback paradigms targeting swing phase kinematics in hemiparetic gait training. We adopted a user-centered design approach, in which kinematic data recorded from 15 hemiparetic patients were used to design three feedback algorithms (wading sounds, abstract, musical) based on filtered gyroscopic data from four inexpensive wireless inertial units. The algorithms were tested hands-on by a focus group of five physiotherapists, who recommended that the abstract and musical algorithms be discarded due to sound quality and informational ambiguity. After modifying the wading algorithm in line with their feedback, we conducted a feasibility test involving nine hemiparetic patients and seven physiotherapists, in which variants of the algorithm were applied to a conventional overground training session. Most patients found the feedback meaningful, enjoyable to use, natural-sounding, and tolerable for the typical training duration. Three patients exhibited immediate improvements in gait quality when the feedback was applied. However, minor gait asymmetries were found to be difficult to perceive in the feedback, and there was variability in receptiveness and motor change among the patients. We believe that our findings can advance current research in inertial sensor-based auditory feedback for motor learning enhancement during neurorehabilitation.
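As a rough, assumption-laden sketch of the kind of processing involved, the snippet below smooths a shank gyroscope signal with a simple one-pole low-pass filter and maps the resulting swing-phase angular velocity onto a normalized sound level, such as might drive the wading texture. The filter choice, constants, and sample values are illustrative only and do not reproduce the study's algorithms.

```python
# Hypothetical gait-feedback front end: smooth gyroscope angular velocity
# with a one-pole low-pass filter, then map it to a 0..1 feedback level.

def lowpass(samples, alpha=0.2):
    """One-pole low-pass filter (exponential moving average)."""
    out, state = [], samples[0]
    for x in samples:
        state = alpha * x + (1.0 - alpha) * state
        out.append(state)
    return out

def swing_to_level(angular_velocity, vel_max=300.0):
    """Map smoothed swing angular velocity (deg/s) to a 0..1 sound level."""
    return min(max(angular_velocity, 0.0) / vel_max, 1.0)

gyro = [5.0, 40.0, 120.0, 230.0, 180.0, 60.0]     # invented samples (deg/s)
print([round(swing_to_level(v), 2) for v in lowpass(gyro)])
```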
Auditory guidance conveying positional information through concurrent variations in properties of synthesized sound has previously been investigated. Auditory guidance may be more effective if multidimensional tasks are divided into unidimensional tasks in which the user sequentially tackles each dimension and sound property. User performance may also depend on the coordinate system used for providing guidance. We compared concurrent and sequential guidance presentations in Cartesian and polar coordinate systems in a computer-based 2-D target-finding experiment with 15 participants. Sequential guidance was superior in terms of completion time and number of interruptions, and imposed less cognitive burden than concurrent guidance. Participants were slower with the polar coordinate system than with the Cartesian system. These findings can contribute to the development of more efficacious guidance systems.
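To clarify the two coordinate framings being compared, the minimal example below (hypothetical cursor and target positions) expresses the same 2-D offset either as Cartesian components or as distance and bearing, each dimension of which could then drive one sound property; it is not the experimental software.

```python
# Minimal sketch of the two coordinate framings: the same target offset is
# expressed as (dx, dy) or as (distance, bearing). Positions are invented.

import math

def cartesian_offset(cursor, target):
    """Offset to the target along the x and y axes."""
    return target[0] - cursor[0], target[1] - cursor[1]

def polar_offset(cursor, target):
    """Offset to the target as (distance, bearing in degrees)."""
    dx, dy = cartesian_offset(cursor, target)
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

cursor, target = (0.2, 0.3), (0.7, 0.6)
dx, dy = cartesian_offset(cursor, target)
dist, bearing = polar_offset(cursor, target)
print(round(dx, 2), round(dy, 2))          # two Cartesian guidance dimensions
print(round(dist, 2), round(bearing, 2))   # two polar guidance dimensions
```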