The supraspinatus muscle volume increased immediately after surgery and continued to increase for at least 1 year postoperatively. The increase was most evident in patients with larger tears that healed successfully, and when measured toward the more medial portion of the supraspinatus muscle. The volume increases were associated with an increase in shoulder abduction power.
Rotator cuff tear (RCT) is one of the most common shoulder injuries. When diagnosing RCT, skilled orthopedists visually interpret magnetic resonance imaging (MRI) scan data. For automated and accurate diagnosis of RCT, we propose a fully 3D convolutional neural network (CNN) method based on deep learning. This 3D CNN automatically diagnoses the presence or absence of an RCT, classifies the tear size, and provides 3D visualization of the tear location. The 3D CNN was trained using the Voxception-ResNet (VRN) structure. Because this architecture uses 3D convolution filters, it is advantageous for extracting information from 3D data compared with 2D-based CNNs or traditional diagnostic methods. MRI data from 2,124 patients were used to train and test the VRN-based 3D CNN. The network was trained to classify RCT into five classes (None, Partial, Small, Medium, Large-to-Massive). A 3D class activation map (CAM) was visualized by volume rendering to show the localization and size of the RCT in 3D. A comparative experiment between the proposed method and clinical experts was performed using 200 randomly selected test cases that had been held out from the training set. The VRN-based 3D CNN outperformed both orthopedists specializing in the shoulder and general orthopedists in binary accuracy (92.5% vs. 76.4% and 68.2%), top-1 accuracy (69.0% vs. 45.8% and 30.5%), top-1±1 accuracy (87.5% vs. 79.8% and 71.0%), sensitivity (0.94 vs. 0.86 and 0.90), and specificity (0.90 vs. 0.58 and 0.29). The generated 3D CAM provided effective information regarding the 3D location and size of the tear. Given these results, the proposed method demonstrates the feasibility of artificial intelligence assistance in clinical RCT diagnosis.
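The 3D CAM described above follows the standard class-activation-map construction: the feature volumes from the last convolutional layer are combined using the classifier weights of the class of interest. Below is a minimal numpy sketch of that step, with toy shapes and hypothetical weight values for illustration; the paper's actual network is a Voxception-ResNet, which is not reproduced here.

```python
import numpy as np

def cam_3d(feature_maps, class_weights):
    """Compute a 3D class activation map (CAM).

    feature_maps: (K, D, H, W) array -- K feature volumes from the last
        convolutional layer (toy values here, not the paper's network).
    class_weights: (K,) array -- classifier weights for the class of
        interest (e.g. 'Large-to-Massive').
    Returns a (D, H, W) volume highlighting the regions that drove the
    prediction, normalized to [0, 1] for volume rendering.
    """
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (D, H, W)
    cam -= cam.min()
    span = cam.max()
    return cam / span if span > 0 else cam

# Toy example: 4 feature volumes of size 2x2x2 with hypothetical weights.
gen = np.random.default_rng(0)
feats = gen.random((4, 2, 2, 2))
weights = np.array([0.5, -0.2, 0.1, 0.7])
cam = cam_3d(feats, weights)
print(cam.shape)  # (2, 2, 2)
```

The normalized volume can then be passed to any volume renderer to overlay the predicted tear location on the MRI scan.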
Sophisticated segmentation of the craniomaxillofacial bones (the mandible and maxilla) in computed tomography (CT) is essential for diagnosis and treatment planning in craniomaxillofacial surgery. Conventional manual segmentation is time-consuming and challenging due to intrinsic properties of the craniomaxillofacial bones and head CT, such as variance in anatomical structures, low soft-tissue contrast, and artifacts caused by metal implants. Moreover, data-driven segmentation methods, including deep learning, require large, consistent datasets, and limited data availability creates a bottleneck for their clinical application. In this study, we propose a deep learning approach for automatic segmentation of the mandible and maxilla in CT images with enhanced compatibility across multi-center datasets. Four multi-center datasets acquired under various conditions were used to create a scenario in which the model was trained on one dataset and evaluated on the others. For the neural network, we added a hierarchical, parallel, multi-scale residual block to the U-Net (HPMR-U-Net). To evaluate performance, segmentation was conducted on an in-house dataset and on external multi-center datasets, in comparison with three other neural networks: U-Net, Res-U-Net, and mU-Net. The results suggest that the segmentation performance of HPMR-U-Net is comparable to that of the other models, with superior data compatibility.
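The key idea of the block named above, parallel branches with different receptive fields whose outputs are fused and added back to the input, can be sketched in a few lines of numpy. This is an illustrative stand-in only: box (mean) filters replace learned convolutions, and the names and scale choices are hypothetical rather than taken from the HPMR-U-Net paper.

```python
import numpy as np

def box_filter(x, k):
    """Naive k x k mean filter with zero padding (stand-in for a conv)."""
    pad = k // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def multiscale_residual_block(x, scales=(1, 3, 5)):
    """Parallel branches at several receptive-field sizes, averaged and
    added back to the input (residual connection)."""
    branches = [box_filter(x, k) for k in scales]
    return x + np.mean(branches, axis=0)

# Toy 4x4 'image': shape is preserved, as a U-Net encoder stage requires.
x = np.arange(16, dtype=float).reshape(4, 4)
y = multiscale_residual_block(x)
print(y.shape)  # (4, 4)
```

In a real network, each branch would be a trainable 2D or 3D convolution and the fusion would typically be a learned 1x1 convolution, but the residual, multi-scale structure is the same.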