With the mushrooming use of computed tomography (CT) images in clinical decision making, managing CT data becomes increasingly difficult. From a patient-identification perspective, tracking patient information through the standard DICOM tags is challenged by issues such as misspellings, lost files, and site-to-site variation. In this paper, we explore the feasibility of leveraging the faces in 3D CT images as biometric features. Specifically, we propose an automatic processing pipeline that first detects facial landmarks in 3D for ROI extraction and then generates aligned 2D depth images, which are used for automatic recognition. To boost recognition performance, we employ transfer learning to reduce the data-sparsity issue and introduce a group sampling strategy to increase inter-class discrimination when training the recognition network. Our method captures the underlying identity characteristics in medical images while reducing memory consumption. To test its effectiveness, we curate 600 3D CT images of 280 patients from multiple sources for performance evaluation. Experimental results demonstrate that our method achieves a 1:56 identification accuracy of 92.53% and a 1:1 verification accuracy of 96.12%, outperforming other competing approaches.
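The core idea of projecting an aligned 3D CT volume to a 2D depth image can be illustrated with a minimal sketch. This is not the authors' implementation; the Hounsfield-unit threshold and the assumption that the face-facing axis is axis 0 are illustrative choices, and the landmark detection and alignment steps are omitted.

```python
import numpy as np

def ct_to_depth_image(volume, hu_threshold=-300):
    """Project a 3D CT volume (z, y, x) to a 2D depth image.

    For each (y, x) ray along axis 0, record the index of the first
    voxel whose intensity exceeds `hu_threshold` (roughly the skin/air
    boundary in Hounsfield units; the value here is a hypothetical
    choice, not the paper's).
    """
    mask = volume > hu_threshold
    # argmax along axis 0 returns the index of the first True voxel,
    # i.e. the depth at which the ray first hits the surface
    depth = np.argmax(mask, axis=0).astype(np.float32)
    hit = mask.any(axis=0)
    depth[~hit] = volume.shape[0] - 1   # rays that miss go to the far plane
    # normalise to [0, 1] so the map can be stored as an ordinary image
    depth /= max(volume.shape[0] - 1, 1)
    return depth
```

The resulting single-channel depth map is far smaller than the original volume, which is consistent with the memory savings the abstract describes, and it can be fed directly to a standard 2D recognition network.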
Background: Mitral regurgitation (MR) is the most common valve lesion worldwide. However, quantitative assessment of MR severity based on current guidelines is challenging and time-consuming, so strict adherence to these guidelines is relatively infrequent. We aimed to develop an automatic, reliable, and reproducible artificial intelligence (AI) diagnostic system, based on a self-supervised learning (SSL) algorithm, to assist physicians in grading MR severity from color Doppler echocardiography videos.
Methods: We constructed a retrospective cohort of 2,766 consecutive echocardiographic studies of patients with MR, diagnosed on clinical criteria, from two hospitals in China. One hundred and forty-eight studies with reference standards were selected for the main analysis and also served as the test set for the AI segmentation model. Five hundred and ninety-two and 148 studies were selected by stratified random sampling as the training and validation datasets, respectively. The self-supervised algorithm learns features and segments the MR jet and left atrium (LA) area, and its output is used to assist physicians in grading MR severity. The diagnostic performance of physicians with and without AI support was estimated and compared.
Results: The SSL algorithm achieved average Dice similarity coefficients (DSC) of 89.2% and 85.3% on the validation and test datasets, improvements of 6.2% and 8.1%, respectively, over a residual U-shaped network (ResNet-UNet). When physicians were provided the algorithm's output for grading MR severity, sensitivity increased from 77.0% (95% CI: 70.9-82.1%) to 86.7% (95% CI: 80.3-91.2%) while specificity was largely unchanged: 91.5% (95% CI: 87.8-94.1%) vs. 90.5% (95% CI: 86.7-93.2%).
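For readers unfamiliar with the segmentation metric reported above, the Dice similarity coefficient can be computed from two binary masks as follows; this is a generic textbook definition, not code from the study, and the `eps` smoothing term is a common convention for handling empty masks.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    DSC = 2|A ∩ B| / (|A| + |B|): 1.0 means perfect overlap, 0.0 none.
    `eps` avoids division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

For example, a predicted mask covering two pixels that overlaps a one-pixel reference mask in one pixel yields DSC = 2·1 / (2 + 1) ≈ 0.667; the study's reported 89.2% and 85.3% averages are computed per study and then averaged over each dataset.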
Conclusions: This study provides a new, practical, accurate, plug-and-play AI-assisted approach to grading MR severity that can be easily implemented in clinical practice.