Despite the widespread availability of ultrasound and a need for personalised muscle diagnosis (neck/back pain and injury, work-related disorders, myopathies, neuropathies), robust, online segmentation of muscles within complex groups remains unsolved by existing methods. For example, cervical dystonia (CD) is a prevalent neurological condition causing painful spasticity in one or more muscles of the cervical muscle system. Clinicians currently have no method for targeting or monitoring treatment of the deep muscles. Automated muscle segmentation would enable clinicians to study, target, and monitor the deep cervical muscles via ultrasound. We have developed a method for segmenting five bilateral cervical muscles and the spine via ultrasound alone, in real time. Magnetic resonance imaging (MRI) and ultrasound data were collected from 22 participants (age: 29.0 ± 6.6; male: 12). To acquire ultrasound muscle segment labels, a novel multimodal registration method was developed, involving MRI image annotation and shape registration to MRI-matched ultrasound images via approximation of the tissue deformation. We then applied polynomial regression to transform our annotations and textures into a mean space, before using shape statistics to generate a texture-to-shape dictionary. For segmentation, test images were compared to dictionary textures to give an initial segmentation, and a customized Active Shape Model was then used to refine the fit. Using ultrasound alone, on unseen participants, our technique currently segments a single image in [Formula: see text] to over 86% accuracy (Jaccard index). We propose that this approach is generally applicable to segmenting, extrapolating and visualising deep muscle structure, and to analysing statistical features online.
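The accuracy figure above is a Jaccard index (intersection over union) between predicted and manually labelled segment masks. As an illustration only, not the paper's implementation, a minimal numpy sketch of this metric for binary masks might look like:

```python
import numpy as np

def jaccard_index(pred, truth):
    """Jaccard index (intersection over union) between two binary masks.

    Illustrative sketch: assumes both masks share the same shape and that
    two empty masks count as perfect agreement.
    """
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union
```

In a multi-class setting such as the one described, the index would typically be computed per muscle class and averaged.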
Objectives: To automate online segmentation of cervical muscles from transverse ultrasound (US) images of the human neck during functional head movement. To extend ground-truth labelling methodology beyond dependence upon MRI imaging of static head positions, as required for application to participants with involuntary movement disorders. Method: We collected sustained sequences (> 3 minutes) of US images of human posterior cervical neck muscles at 25 fps from 28 healthy adults performing visually guided pitch and yaw head motions. We sampled 1,100 frames (approx. 40 per participant) spanning the experimental range of head motion. We manually labelled all 1,100 US images and trained deconvolutional neural networks (DCNNs) with a spatial SoftMax regression layer to classify every pixel in the full-resolution (525 × 491) US images as one of 14 classes (10 muscles, ligamentum nuchae, vertebra, skin, background). We investigated MaxOut and Exponential Linear Unit (ELU) transfer functions and compared with our previous benchmark (analytical shape modelling). Results: These DCNNs showed a higher Jaccard index (53.2%) and a lower Hausdorff distance (5.7 mm) than the previous benchmark (40.5%, 6.2 mm). SoftMax confidence corresponded with correct classification. MaxOut marginally outperformed ELU. Conclusion: The DCNN architecture accommodates challenging images and imperfect manual labels. The SoftMax layer gives the user feedback on likely correct classification. The MaxOut transfer function benefits from near-linear operation, compatibility with deconvolution operations, and the dropout regulariser. Significance: This methodology for labelling ground truth and training automated labelling networks is applicable to dynamic segmentation of moving muscles and to participants with involuntary movement disorders who cannot remain still.
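The MaxOut transfer function referred to above takes the maximum over several affine projections of its input, which keeps the unit piecewise-linear and pairs naturally with dropout. A minimal numpy sketch of a single dense MaxOut layer (an illustrative assumption about its form, not the authors' network code):

```python
import numpy as np

def maxout(x, W, b):
    """MaxOut activation: elementwise max over k affine projections.

    Illustrative sketch. Shapes assumed: x is (d_in,), W is (k, d_in, d_out),
    b is (k, d_out); output is (d_out,).
    """
    z = np.einsum('i,kij->kj', x, W) + b  # k affine maps, shape (k, d_out)
    return z.max(axis=0)                  # keep the strongest projection
```

Because the output is the max of linear functions, the unit is convex and piecewise-linear, which is the "near-linear operation" advantage noted in the conclusion.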
This paper presents an investigation into the feasibility of using deep learning methods to develop arbitrary full-spatial-resolution regression analysis of B-mode ultrasound images of human skeletal muscle. In this study, we focus on full spatial analysis of muscle fibre orientation, since there is an existing body of work with which to compare results. Previous attempts to estimate fibre orientation from ultrasound automatically are inadequate: they often require manual region selection and feature engineering, provide low-resolution estimates (one angle per muscle), and rarely attempt deep muscles. We build upon our previous work, in which automatic segmentation was used with plain convolutional neural network (CNN) and deep residual convolutional network (ResNet) architectures to predict a low-resolution map of fibre orientation in extracted muscle regions. Here, we use deconvolutions and max-unpooling (DCNN) to regularise and improve predicted fibre orientation maps for the entire image, including deep muscles, removing the need for automatic segmentation, and we compare our results with the CNN and ResNet, as well as a previously established feature-engineering method, on the same task. Dynamic ultrasound image sequences of the calf muscles were acquired (25 Hz) from 8 healthy volunteers (4 male, ages: 25-36, median 30). A combination of expert annotation and interpolation/extrapolation provided labels of regional fibre orientation for each image. Neural networks (CNN, ResNet, DCNN) were then trained both with and without dropout using leave-one-out cross-validation. Our results demonstrated robust estimation of full spatial fibre orientation within approximately 6° error, an improvement on previous methods.
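Because fibre orientation is an undirected quantity, an error such as the approximately 6° figure above is naturally computed modulo 180° (an orientation of 179° is only 2° from an orientation of 1°). A hedged numpy sketch of such an angular error metric, illustrative rather than the paper's evaluation code:

```python
import numpy as np

def orientation_error(pred_deg, true_deg):
    """Mean absolute angular error between orientation maps, in degrees,
    accounting for the 180-degree periodicity of undirected fibre lines.

    Illustrative sketch: inputs are arrays of angles in degrees.
    """
    d = np.abs(np.asarray(pred_deg, float) - np.asarray(true_deg, float)) % 180.0
    return np.minimum(d, 180.0 - d).mean()  # wrap-around-aware distance
```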
This paper concerns the fully automatic, direct, in vivo measurement of active and passive dynamic skeletal muscle states using ultrasound imaging. Despite the long-standing medical need (myopathies, neuropathies, pain, injury, ageing), current technology (electromyography, dynamometry, shear wave imaging) provides no general, non-invasive method for online estimation of skeletal intramuscular states. Ultrasound provides a technology in which static and dynamic muscle states can be observed non-invasively, yet current computational image-understanding approaches are inadequate. We propose a new approach in which deep learning methods are used to understand the content of ultrasound images of muscle in terms of its measured state. Ultrasound data synchronized with electromyography of the calf muscles, together with measures of joint torque/angle, were recorded from 19 healthy participants (6 female, ages: 30 ± 7.7). A segmentation algorithm previously developed by our group was applied to extract a region of interest of the medial gastrocnemius. A deep convolutional neural network was then trained to predict the measured states (joint angle/torque, electromyography) directly from the segmented images. Results revealed for the first time that active and passive muscle states can be measured directly from standard B-mode ultrasound images, accurately predicting, for a held-out test participant, changes in joint angle, electromyography, and torque with errors as small as 0.022°, 0.0001 V, and 0.256 Nm (root mean square error), respectively.
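The three error figures above are root mean square errors computed separately for each predicted state (angle, EMG, torque). A minimal numpy sketch of per-channel RMSE, for illustration only:

```python
import numpy as np

def rmse(pred, target):
    """Root mean square error per output channel.

    Illustrative sketch: rows are samples (e.g. image frames), columns are
    state channels, here assumed to be angle, EMG, and torque.
    """
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    return np.sqrt(np.mean((pred - target) ** 2, axis=0))
```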
Objective: To provide objective visualization and pattern analysis of neck muscle boundaries to inform and monitor treatment of cervical dystonia. Methods: We recorded transverse cervical ultrasound (US) images and whole-body motion analysis of sixty-one standing participants (35 cervical dystonia, 26 age-matched controls). We manually annotated 3,272 US images sampling posture and the functional range of pitch, yaw, and roll head movements. Using previously validated methods, we used 60-fold cross-validation to train, validate, and test a deep neural network (U-net) to classify pixels into 13 categories (five paired neck muscles, skin, ligamentum nuchae, vertebra). For all participants in their normal standing posture, we segmented US images and classified condition (dystonia/control), sex, and age (higher/lower) from segment boundaries. We performed an explanatory visualization analysis of dystonia muscle boundaries. Results: For all segments, agreement with manual labels was Dice coefficient 64 ± 21% and Hausdorff distance 5.7 ± 4 mm. For deep muscle layers, boundaries predicted central injection sites with average precision 94 ± 3%. Using leave-one-out cross-validation, a support-vector machine classified condition, sex, and age from predicted muscle boundaries at accuracies of 70.5%, 67.2%, and 52.4%, respectively.
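The two agreement measures reported above, Dice coefficient and Hausdorff distance, can be sketched for binary masks as follows. This is a numpy illustration under the assumption of isotropic pixel spacing (a known pixel size would scale the Hausdorff figure to millimetres); it is not the study's evaluation code:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def hausdorff_px(mask_a, mask_b):
    """Symmetric Hausdorff distance between two masks, in pixels.

    Brute-force sketch: fine for small masks, quadratic in pixel count.
    Multiply by the pixel size to obtain millimetres.
    """
    a = np.argwhere(np.asarray(mask_a).astype(bool))
    b = np.argwhere(np.asarray(mask_b).astype(bool))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```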
Objective: To test automated in vivo estimation of active and passive skeletal muscle states using ultrasonic imaging. Background: Current technology (electromyography, dynamometry, shear wave imaging) provides no general, non-invasive method for online estimation of skeletal intramuscular states. Ultrasound (US) allows non-invasive imaging of muscle, yet current computational approaches have achieved neither simultaneous extraction nor generalisation of independently varying active and passive states. We use deep learning to investigate the generalizable content of 2D US muscle images. Method: US data synchronized with electromyography of the calf muscles, together with measures of joint moment/angle, were recorded from 32 healthy participants (7 female, ages: 27.5, 19-65). We extracted a region of interest of the medial gastrocnemius and soleus using our previously developed segmentation algorithm. From the segmented images, a deep convolutional neural network was trained to predict three absolute, drift-free components of the neurobiomechanical state (activity, joint angle, joint moment) during experimentally designed, simultaneous, independent variation of passive (joint angle) and active (electromyography) inputs. Results: For all 32 held-out participants (16-fold cross-validation), the ankle joint angle, electromyography, and joint moment were estimated to accuracies of 55 ± 8%, 57 ± 11%, and 46 ± 9%, respectively. Significance: With 2D US imaging, deep neural networks can encode, in generalizable form, the activity-length-tension state relationship of muscle. Observation-only, low-power, 2D US imaging can provide a new category of technology for non-invasive estimation of neural output, length, and tension in skeletal muscle. This proof of principle has value for personalised muscle diagnosis in pain, injury, neurological conditions, neuropathies, myopathies, and ageing.
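The 16-fold cross-validation over 32 held-out participants implies that folds are formed at the participant level, so no participant's images appear in both training and test sets. A small sketch of such a grouping (illustrative; the exact fold-assignment scheme is an assumption, not taken from the paper):

```python
def participant_folds(participant_ids, n_folds):
    """Split unique participant IDs into n_folds disjoint held-out groups.

    Illustrative sketch: deterministic round-robin assignment, so every
    participant appears in exactly one test fold.
    """
    unique = sorted(set(participant_ids))
    return [unique[i::n_folds] for i in range(n_folds)]
```

Training then iterates over the folds, holding each group out in turn while fitting on the remainder.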
While the function of individual muscles is well known, the sensory and motor value of muscles within the whole-body sensorimotor network is complicated. Specifically, the relationship between neck muscle action and distal muscle synergies is unknown. This work demonstrates a causal relationship between regulation of the neck muscles and global motor control. Studying violinists performing unskilled and skilled manual tasks, we provided ultrasound feedback of the neck muscles with instruction to minimize neck muscle change during task performance, and observed the indirect effect on whole-body movement. Analysis of ultrasound, kinematic, electromyographic and electrodermal recordings showed that proactive inhibition targeted at the neck muscles had an indirect global effect: reducing the cost of movement, reducing complex involuntary, task-irrelevant movement patterns, and improving balance. This effect was distinct from the effect of gaze alignment, which increased physiological cost and reduced laboratory-referenced movement. Neck muscle inhibition imposes a proximal constraint on the global motor plan, forcing a change in highly automated sensorimotor control. The proximal location ensures global influence. The criterion, inhibition of unnecessary action, ensures reduced cost while facilitating task-relevant variation. This mechanism regulates global motor function and facilitates reinforcement learning to change ingrained, maladapted sensorimotor control associated with chronic pain, injury and performance limitation.