OBJECTIVE: To determine whether patients with Post-Polio Syndrome (PPS) show spinal cord gray matter (SCGM) atrophy and to assess associations between SCGM atrophy, muscle strength, and patient-reported functional decline. METHODS: Twenty patients diagnosed with PPS (March of Dimes criteria) and twenty age- and sex-matched healthy controls (HC) underwent 3T axial 2D-rAMIRA MR imaging at the intervertebral disc levels C2/C3-C6/C7, T9/T10 and the lumbar enlargement level (Tmax) (0.5×0.5 mm² in-plane resolution). SCGM areas were segmented manually by two independent raters. Muscle strength and self-reported fatigue, depression, and pain measures were assessed. RESULTS: PPS patients showed significantly and preferentially reduced SCGM areas at C2/C3 (p=0.048), C3/C4 (p=0.001), C4/C5 (p<0.001), C5/C6 (p=0.004) and Tmax (p=0.041) compared with HC. SCGM areas were significantly associated with muscle strength in the corresponding myotomes, even after adjustment for fatigue, pain, and depression. SCGM area at Tmax, together with age and sex, explained 68% of the variance in ankle dorsiflexion strength. No associations were found with age at or time since infection. Patients reporting a PPS-related decline in arm function showed significant cervical SCGM atrophy compared with stable patients, adjusted for initial disease severity. CONCLUSIONS: Patients with PPS show significant SCGM atrophy that correlates with muscle strength and is associated with PPS-related functional decline. Our findings suggest a secondary neurodegenerative process underlying SCGM atrophy in PPS that is not explained by aging or residuals of the initial infection alone. Confirmation by longitudinal studies is needed. The described imaging methodology is promising for developing novel imaging surrogates for SCGM diseases. ClinicalTrials.gov: NCT03561623.
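The variance-explained figure comes from a multivariable linear model. The sketch below, using entirely hypothetical data and coefficients (none of the values reflect the study's results), shows how such a model of ankle dorsiflexion strength on SCGM area at Tmax, age, and sex would be fitted and its R² reported:

```python
# Minimal sketch with hypothetical data: regress strength on SCGM area,
# age, and sex, then report R^2 (the fraction of variance explained,
# analogous to the abstract's 68% figure).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 40                                        # 20 PPS patients + 20 HC (illustrative)
scgm_area_tmax = rng.normal(12.0, 1.5, n)     # mm^2, hypothetical values
age = rng.uniform(50, 80, n)                  # years
sex = rng.integers(0, 2, n)                   # 0 = female, 1 = male

# Hypothetical linear relationship plus noise, for illustration only.
strength = 2.0 * scgm_area_tmax - 0.1 * age + 3.0 * sex + rng.normal(0, 2, n)

X = np.column_stack([scgm_area_tmax, age, sex])
model = LinearRegression().fit(X, strength)
print(f"R^2 (variance explained): {model.score(X, strength):.2f}")
```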
Purpose: Automated distinct bone segmentation has many applications in planning and navigation tasks. 3D U-Nets have previously been used to segment distinct bones in the upper body, but their performance is not yet optimal. Their most substantial source of error lies not in confusing one bone for another, but in confusing background with bone tissue. Methods: In this work, we propose binary-prediction-enhanced multi-class (BEM) inference, which takes into account an additional binary background/bone-tissue prediction, to improve multi-class distinct bone segmentation. We evaluate the method using different ways of obtaining the binary prediction, contrasting a two-stage approach with four networks that have two segmentation heads. We perform our experiments on two datasets: an in-house dataset comprising 16 upper-body CT scans with voxelwise labelling into 126 distinct classes, and a public dataset containing 50 synthetic CT scans with 41 different classes. Results: The most successful network with two segmentation heads achieves a class-median Dice coefficient of 0.85 on cross-validation with the upper-body CT dataset. These results outperform both our previously published 3D U-Net baseline with standard inference and previously reported results from other groups. On the synthetic dataset, we also obtain improved results when using BEM inference. Conclusion: Using a binary bone-tissue/background prediction as guidance during inference improves distinct bone segmentation from upper-body CT scans and from the synthetic dataset. The results are robust to multiple ways of obtaining the bone-tissue segmentation and hold for the two-stage approach as well as for networks with two segmentation heads.
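One plausible reading of BEM inference, sketched below under assumptions (this is not the authors' implementation, and the function name is illustrative): the binary bone-tissue/background prediction gates the multi-class output, so that voxels the binary prediction calls bone cannot be labelled background, and vice versa.

```python
# Sketch of binary-prediction-enhanced multi-class (BEM) inference,
# assuming class 0 is background and the binary head yields a bone mask.
import numpy as np

def bem_inference(multiclass_logits, binary_bone_mask):
    """multiclass_logits: (C, D, H, W) array, class 0 = background.
    binary_bone_mask:  (D, H, W) boolean bone-tissue prediction."""
    # Best bone class per voxel, ignoring the background channel.
    bone_labels = multiclass_logits[1:].argmax(axis=0) + 1
    # Background where the binary prediction says background,
    # best bone class where it says bone tissue.
    return np.where(binary_bone_mask, bone_labels, 0)

logits = np.random.rand(126, 8, 8, 8)          # 125 bones + background (toy volume)
bone_mask = np.random.rand(8, 8, 8) > 0.5
print(bem_inference(logits, bone_mask).shape)  # (8, 8, 8) label map
```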
We acknowledge funding from the Werner Siemens Foundation through the MIRACLE (Minimally Invasive Robot-Assisted Computer-guided LaserosteotomE) project. ABSTRACT Today's mechanical tools for bone cutting (osteotomy) lead to mechanical trauma that prolongs the healing process. Medical device manufacturers continuously strive to improve their tools to minimize such trauma. One example of such a new tool and procedure is minimally invasive surgery with a laser as the cutting element. This setup allows for tissue ablation using laser light instead of mechanical tools, which reduces the post-surgery healing time. During surgery, a reliable feedback system is crucial to avoid collateral damage to the surrounding tissues. Therefore, we propose a tissue classification method that analyzes the acoustic waves produced during laser ablation, and we show its applicability in an ex-vivo experiment. The ablation process with a microsecond pulsed Erbium-doped Yttrium Aluminium Garnet (Er:YAG) laser produces acoustic waves that we captured with an air-coupled transducer. We then used these captured waves to classify five porcine tissue types: hard bone, soft bone, muscle, fat, and skin. For automated tissue classification of the measured acoustic waves, we propose three Neural Network (NN) approaches: a Fully-connected Neural Network (FcNN), a one-dimensional Convolutional Neural Network (CNN), and a Recurrent Neural Network (RNN). The time- and frequency-dependent representations of the measured waves' pressure variation were used as separate inputs to train and validate the designed NNs. In a final step, we used Grad-CAM to obtain activation maps over the frequencies and concluded that the low frequencies are the most important ones for this classification task. In our experiments, we achieved an accuracy of 100% for the five tissue types with all the proposed NNs. We tested the different classifiers for their robustness and concluded that using frequency-dependent data together with an FcNN is the most robust approach.
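A minimal sketch of the most robust configuration named above, a fully-connected network fed with frequency-dependent data; the layer sizes, number of frequency bins, and signal length are assumptions, not the paper's exact configuration:

```python
# Sketch: classify the magnitude spectrum of a recorded acoustic wave
# into the five tissue types using a small fully-connected network.
import torch
import torch.nn as nn

TISSUES = ["hard bone", "soft bone", "muscle", "fat", "skin"]

class FcNN(nn.Module):
    def __init__(self, n_freq_bins: int = 512, n_classes: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_freq_bins, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

# Frequency-dependent input: magnitude spectrum of a 1024-sample
# pressure trace (hypothetical recording; real data would come from
# the air-coupled transducer).
signal = torch.randn(1, 1024)
spectrum = torch.fft.rfft(signal).abs()[:, :512]   # keep 512 frequency bins
logits = FcNN()(spectrum)
print(TISSUES[logits.argmax(dim=1).item()])
```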
Purpose: Automated distinct bone segmentation from CT scans is widely used in planning and navigation workflows. U-Net variants are known to provide excellent results in supervised semantic segmentation. However, in distinct bone segmentation from upper-body CTs a large field of view and a computationally taxing 3D architecture are required. This leads to low-resolution results lacking detail or localisation errors due to missing spatial context when using high-resolution inputs. Methods: We propose to solve this problem by using end-to-end trainable segmentation networks that combine several 3D U-Nets working at different resolutions. Our approach, which extends and generalizes HookNet and MRN, captures spatial information at a lower resolution and skips the encoded information to the target network, which operates on smaller high-resolution inputs. We evaluated our proposed architecture against single-resolution networks and performed an ablation study on information concatenation and the number of context networks. Results: Our proposed best network achieves a median DSC of 0.86 taken over all 125 segmented bone classes and reduces the confusion among similar-looking bones in different locations. These results outperform our previously published 3D U-Net baseline results on the task and distinct bone segmentation results reported by other groups. Conclusion: The presented multi-resolution 3D U-Nets address current shortcomings in bone segmentation from upper-body CT scans by allowing for capturing a larger field of view while avoiding the cubic growth of the input pixels and intermediate computations that quickly outgrow the computational capacities in 3D. The approach thus improves the accuracy and efficiency of distinct bone segmentation from upper-body CT.
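The core idea, a low-resolution context branch whose encoded features are passed into a high-resolution target branch, can be sketched as below. This is a heavily simplified, HookNet-style illustration under assumptions (single conv blocks instead of full U-Nets; patch sizes, spacings, and channel counts are invented), not the authors' architecture:

```python
# Sketch: a context branch sees a large, downsampled field of view; its
# features are centre-cropped to the region physically matching the
# target patch, upsampled, and concatenated into the target branch.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1), nn.ReLU())

def centre_crop(t, size):
    # Crop the spatial dims of a (N, C, D, H, W) tensor to a central cube.
    d, h, w = t.shape[2:]
    sd, sh, sw = (d - size) // 2, (h - size) // 2, (w - size) // 2
    return t[:, :, sd:sd + size, sh:sh + size, sw:sw + size]

class MultiResSegNet(nn.Module):
    def __init__(self, n_classes=126):  # 125 bones + background (assumption)
        super().__init__()
        self.context_enc = conv_block(1, 16)   # large FOV at half resolution
        self.target_enc = conv_block(1, 16)    # small FOV at full resolution
        self.head = nn.Conv3d(32, n_classes, 1)

    def forward(self, target_patch, context_patch):
        # target_patch: (N,1,32,32,32) high-res; context_patch: (N,1,64,64,64) half-res
        tgt = self.target_enc(target_patch)
        ctx = centre_crop(self.context_enc(context_patch), 16)  # region matching target
        ctx = nn.functional.interpolate(
            ctx, size=tgt.shape[2:], mode="trilinear", align_corners=False)
        fused = torch.cat([tgt, ctx], dim=1)   # skip context info into target branch
        return self.head(fused)

net = MultiResSegNet()
out = net(torch.randn(1, 1, 32, 32, 32), torch.randn(1, 1, 64, 64, 64))
print(out.shape)  # (1, 126, 32, 32, 32)
```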