Background and Aims: Microvascular invasion (MVI) is a well-known risk factor for poor prognosis in hepatocellular carcinoma (HCC). This study aimed to develop a deep convolutional neural network (DCNN) model based on contrast-enhanced ultrasound (CEUS) to predict MVI, and thus prognosis, in patients with HCC.
Methods: A total of 436 patients with surgically resected HCC who underwent preoperative CEUS were retrospectively enrolled. Patients were divided into training (n = 301), validation (n = 102), and test (n = 33) sets. A clinical model (Clinical model), a CEUS video-based DCNN model (CEUS-DCNN model), and a fusion model based on CEUS video and clinical variables (CECL-DCNN model) were built to predict MVI. Survival analysis was used to evaluate the clinical performance of the predicted MVI.
Results: Compared with the Clinical model, the CEUS-DCNN model exhibited similar sensitivity but higher specificity (71.4% vs. 38.1%, p = 0.03) in the test group. The CECL-DCNN model showed significantly higher specificity (81.0% vs. 38.1%, p = 0.005) and accuracy (78.8% vs. 51.5%, p = 0.009) than the Clinical model, with an AUC of 0.865. MVI predicted by the Clinical model could not significantly distinguish OS or RFS (both p > 0.05), while MVI predicted by the CEUS-DCNN model could predict only earlier recurrence (hazard ratio [HR] with 95% confidence interval [CI]: 2.92 [1.1–7.75], p = 0.024). However, MVI predicted by the CECL-DCNN model was a significant prognostic factor for both OS (HR with 95% CI: 6.03 [1.7–21.39], p = 0.009) and RFS (HR with 95% CI: 3.3 [1.23–8.91], p = 0.011) in the test group.
Conclusions: The proposed CECL-DCNN model based on preoperative CEUS video can serve as a noninvasive tool to predict MVI status in HCC, thereby predicting poor prognosis.
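The fusion step described above, combining DCNN features extracted from CEUS video with clinical variables, can be sketched as a simple feature concatenation followed by a classifier head. This is a minimal illustration with made-up dimensions and random (untrained) weights; the feature sizes, clinical variables, and classifier are assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: 64 features extracted by a DCNN from a CEUS video
# clip, plus 5 clinical variables (dimensions are illustrative).
video_features = rng.normal(size=64)
clinical_features = rng.normal(size=5)

# Fusion: concatenate the two feature groups into one vector, then apply
# an (untrained, randomly initialized) linear classifier head.
fused = np.concatenate([video_features, clinical_features])
weights = rng.normal(size=fused.shape[0])
bias = 0.0

logit = fused @ weights + bias
prob_mvi = 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> predicted MVI probability

print(fused.shape)  # (69,)
```

In a real model the concatenated vector would feed a trained fully connected layer, but the mechanism, joining learned imaging features with tabular clinical data before the final prediction, is the same.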
Single-cell RNA sequencing (scRNA-seq) has emerged as a powerful tool to gain biological insights at the cellular level. However, due to technical limitations of the existing sequencing technologies, low gene expression values are often omitted, leading to inaccurate gene counts. The available methods, including state-of-the-art deep learning techniques, are incapable of imputing the gene expressions reliably because of the lack of a mechanism to explicitly consider the underlying biological knowledge of the system. Here we tackle the problem in two steps to exploit the gene-gene interactions of the system: (i) we reposition the genes in such a way that their spatial configuration reflects their interactive relationships; and (ii) we use a self-supervised 2D convolutional neural network to extract the contextual features of the interactions from the spatially configured genes and impute the omitted values. Extensive experiments with both simulated and experimental scRNA-seq datasets are carried out to demonstrate the superior performance of the proposed strategy against the existing imputation methods.
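The two-step strategy above, spatially repositioning genes so interacting genes become neighbors, then letting a 2D convolution read the local context to fill in dropouts, can be sketched as follows. The grid size, the gene ordering, and the mean-filter kernel are all stand-in assumptions; the paper's layout is derived from gene-gene interactions and its convolution weights are learned by a self-supervised CNN.

```python
import numpy as np

rng = np.random.default_rng(1)

# Step (i): reposition genes on a 2D grid so that interacting genes are
# adjacent. A random permutation stands in for the interaction-aware layout.
n_genes = 16
expression = rng.poisson(5.0, size=n_genes).astype(float)
gene_order = rng.permutation(n_genes)
grid = expression[gene_order].reshape(4, 4)  # one cell's genes as a 4x4 "image"

# Simulate a technical dropout: a value omitted by sequencing.
grid[2, 2] = 0.0

# Step (ii): a 3x3 convolution aggregates the spatial context around the
# dropout. A trained CNN would learn these weights; a mean filter is used
# here purely to show the mechanism.
padded = np.pad(grid, 1, mode="edge")
patch = padded[2:5, 2:5]          # 3x3 neighborhood centered on grid[2, 2]
kernel = np.ones((3, 3)) / 9.0
imputed = float((patch * kernel).sum())

print(grid.shape)  # (4, 4)
```

Because interacting genes sit next to each other after repositioning, the convolutional neighborhood carries biologically meaningful signal rather than arbitrary adjacent rows of an expression matrix.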
Diabetic foot ulcers develop in up to 1 in 3 patients with diabetes. While ulcers are costly to manage and often necessitate amputation, they are preventable if intervention is initiated early. However, with the current standard of care, it is difficult to know which patients are at highest risk of developing an ulcer. Recently, thermal monitoring has been shown to catch the development of complications around 35 days in advance of onset. We seek to use thermal scans of the feet of patients with diabetes to automatically detect and classify a patient's risk of foot ulcer development so that intervention may be initiated. We began by comparing the performance of various backbone architectures (DFTNet, ResNet50, and Swin Transformer) trained on visual-spectrum images for the monofilament task. We moved forward with the highest-accuracy model, which used ResNet50 as the backbone (DFTNet acc. 68.18%, ResNet50 acc. 81.81%, Swin Transformer acc. 72.72%), to train on thermal images for the risk-prediction task and achieved 96.4% accuracy. To increase the interpretability of the model, we then trained this same architecture to predict two standard-of-care risk scores: high- vs. low-risk monofilament scores (81.8% accuracy) and high- vs. low-risk biothesiometer scores (77.4% accuracy). We then sought to improve performance by facilitating the model's learning. After annotating bounding boxes around the feet, we trained our own YOLOv4 detector to automatically detect feet in our images (mAP of 99.7% and IoU of 86%). Using these bounding-box predictions as input improved the performance of our two classification tasks: MF 84.1%, BT 83.9%.
We then sought to further improve the accuracy of these classification tasks with two additional experiments incorporating visual images of the feet: 1) training the models only on visual images (Risk: 97.6%, MF: 86.3%, BT: 80.6%), and 2) concatenating visual images with the thermal images using either early (E) or late (L) fusion in the architecture (Risk, E: 99.4%, L: 98.8%; MF, E: 86.4%, L: 90.9%; BT, E: 83.9%, L: 83.9%). Our results demonstrate that thermal and visible-spectrum images show promise for identifying which patients require intervention to prevent ulceration and ultimately save the patient's limb.
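The early-vs-late fusion comparison above can be illustrated with a minimal sketch: early fusion stacks the two modalities channel-wise before a shared encoder, while late fusion encodes each modality separately and concatenates the resulting feature vectors. The image sizes, channel counts, and the pooling-based "encoder" below are illustrative assumptions, not the authors' actual network.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy inputs: a 1-channel thermal image and a 3-channel visual image of
# the same foot (shapes are illustrative assumptions).
thermal = rng.normal(size=(1, 32, 32))
visual = rng.normal(size=(3, 32, 32))

def encode(x, out_dim=8):
    """Stand-in for a CNN backbone: global-average-pool each channel,
    then project with a fixed random (untrained) linear map."""
    pooled = x.mean(axis=(1, 2))  # one value per channel
    w = np.random.default_rng(x.shape[0]).normal(size=(pooled.shape[0], out_dim))
    return pooled @ w

# Early fusion: stack modalities channel-wise, then encode the combined input.
early = encode(np.concatenate([thermal, visual], axis=0))

# Late fusion: encode each modality separately, then concatenate the features.
late = np.concatenate([encode(thermal), encode(visual)])

print(early.shape)  # (8,)
print(late.shape)   # (16,)
```

The trade-off this exposes is the same one the accuracy numbers reflect: early fusion lets the network learn cross-modal interactions from the first layer, while late fusion lets each modality keep a specialized encoder before the features are combined.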