In this study, a fully convolutional neural network is trained on a large database of experimental EAST data to classify disruptive discharges and distinguish them from non-disruptive discharges. The database contains 14 diagnostic parameters from ∼10⁴ discharges (disruptive and non-disruptive). The test set contains 417 disruptive discharges and 999 non-disruptive discharges, which are used to evaluate the performance of the model. The results reveal that the true positive (TP) rate is ∼0.827, while the false positive (FP) rate is ∼0.067. This indicates that 72 disruptive discharges and 67 non-disruptive discharges are misclassified in the test set. The FPs are investigated in detail and are found to emerge from subtle disturbances in the signals, which cause the model to misclassify those discharges. Therefore, hundreds of non-disruptive discharges from the training set, containing time slices with small disturbances, are artificially added to the training database to retrain the model. The same test set is used to assess the performance of the improved model. The TP rate of the improved model increases to 0.875, while its FP rate decreases to 0.061. Overall, the proposed data-driven predictive model exhibits immense potential for application in long-pulse fusion devices such as ITER.
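The reported rates follow directly from the test-set counts in the abstract. As a minimal sketch (variable names are illustrative, not from the study), treating "disruptive" as the positive class:

```python
# Verifying the reported TP/FP rates from the abstract's test-set counts.
# "Disruptive" is the positive class; the counts below are quoted from the text.
n_disruptive = 417       # disruptive discharges in the test set
n_nondisruptive = 999    # non-disruptive discharges in the test set
missed_disruptive = 72   # disruptive discharges misclassified (false negatives)
false_alarms = 67        # non-disruptive discharges misclassified (false positives)

# TP rate = correctly flagged disruptive discharges / all disruptive discharges
tp_rate = (n_disruptive - missed_disruptive) / n_disruptive
# FP rate = false alarms / all non-disruptive discharges
fp_rate = false_alarms / n_nondisruptive

print(f"TP rate ~ {tp_rate:.3f}")  # ~ 0.827
print(f"FP rate ~ {fp_rate:.3f}")  # ~ 0.067
```

Both values round to the figures quoted in the abstract, confirming the counts and rates are mutually consistent.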
Purpose: Surgery is the predominant treatment modality for human glioma, but clearly identifying tumor boundaries in the clinic remains difficult. Conventional practice involves the neurosurgeon's visual evaluation and intraoperative histological examination of dissected tissue using frozen sections, which is time-consuming and complex. The aim of this study was to develop fluorescence imaging coupled with an artificial intelligence technique to quickly and accurately identify glioma in real time during surgery.
Methods: Glioma patients (N = 23) were enrolled and injected with indocyanine green for fluorescence image–guided surgery. Tissue samples (N = 1874) were harvested during surgery on these patients, and fluorescence images in the second near-infrared window (NIR-II, 1000–1700 nm) were obtained. Deep convolutional neural networks (CNNs) combined with NIR-II fluorescence imaging (named FL-CNN) were explored to automatically provide a pathological diagnosis of glioma in situ, in real time, during patient surgery. The pathological examination results were used as the gold standard.
Results: The developed FL-CNN achieved an area under the curve (AUC) of 0.945. Compared with neurosurgeons' judgment at the same level of specificity (>80%), FL-CNN achieved much higher sensitivity (93.8% versus 82.0%, P < 0.001) with zero time overhead. Further experiments demonstrated that FL-CNN corrected >70% of the errors made by neurosurgeons. FL-CNN was also able to rapidly predict the grade and Ki-67 level of tumor specimens intraoperatively (AUC 0.810 and 0.625, respectively).
Conclusion: Our study demonstrates that deep CNNs are better at capturing important information from fluorescence images than surgeons' evaluation during patient surgery. FL-CNN is highly promising for providing pathological diagnosis intraoperatively and helping neurosurgeons achieve maximal safe resection.
Trial registration: ChiCTR ChiCTR2000029402. Registered 29 January 2020, retrospectively registered.
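The headline comparison in the Results section is sensitivity at a matched specificity. As a minimal sketch of those two metrics (the tallies below are hypothetical round numbers chosen to mirror the reported percentages, not the study's actual confusion matrix):

```python
# Sensitivity/specificity from a binary confusion matrix, as used in the
# abstract's comparison. All counts here are illustrative assumptions.

def sensitivity(tp: int, fn: int) -> float:
    # Fraction of true tumor specimens correctly flagged as tumor.
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    # Fraction of non-tumor specimens correctly cleared.
    return tn / (tn + fp)

# Hypothetical tallies: 100 tumor and 100 non-tumor specimens.
print(sensitivity(tp=94, fn=6))   # 0.94, cf. FL-CNN's reported 93.8%
print(specificity(tn=82, fp=18))  # 0.82, cf. the matched ">80%" operating point
```

Fixing the operating point so both classifiers clear the same specificity threshold, then comparing sensitivities, is what makes the 93.8% versus 82.0% comparison fair.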