This paper reports on disruption prediction using a shallow machine learning method known as a random forest, trained on large databases containing only plasma parameters that are available in real time on Alcator C-Mod, DIII-D, and EAST. The database for each tokamak contains parameters sampled ∼10⁶ times throughout ∼10⁴ discharges (disruptive and non-disruptive) over the last four years of operation. It is found that a number of parameters (e.g. P_rad/P_input, ℓ_i, n/n_G, B_{n=1}/B_0) exhibit changes in aggregate as a disruption is approached on one or more of these tokamaks. However, for each machine, the most useful parameters, as well as the details of their precursor behaviors, are markedly different. When the prediction problem is framed as a binary classification discriminating between time slices 'close to disruption' and 'far from disruption', it is found that the prediction algorithms differ substantially in performance among the three machines on a time-slice-by-time-slice basis, but have similar disruption detection rates (∼80%-90%) on a shot-by-shot basis after appropriate optimisation. This could have important implications for disruption prediction and avoidance on ITER, for which development of a training database of disruptions may be infeasible. The algorithm's output is interpretable using a method that identifies the most strongly contributing input signals, which may have implications for avoiding disruptive scenarios. To further support its real-time capability, successful applications in inter-shot and real-time environments on EAST and DIII-D are also discussed.
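The binary time-slice classification described above can be sketched with a random forest on synthetic data. The feature columns, labels, and decision rule below are illustrative placeholders standing in for parameters such as P_rad/P_input and n/n_G; they are not the paper's actual database or model settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic time slices: 4 columns stand in for real-time plasma parameters
# (e.g. radiated-power fraction, internal inductance, Greenwald fraction,
# n=1 mode amplitude). All values here are random placeholders.
n_slices = 2000
X = rng.normal(size=(n_slices, 4))

# Label 1 = "close to disruption", 0 = "far from disruption". The labeling
# rule is an invented synthetic proxy, chosen only so the classes are learnable.
y = ((X[:, 0] + X[:, 2]) > 1.0).astype(int)

# Train on the first 1500 slices, hold out the last 500 for evaluation.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:1500], y[:1500])

# Per-time-slice "disruptivity" = predicted probability of the disruptive class.
disruptivity = clf.predict_proba(X[1500:])[:, 1]
accuracy = clf.score(X[1500:], y[1500:])
```

A shot-by-shot detection rate would then be computed by aggregating these per-slice probabilities along each discharge and triggering an alarm when they cross a threshold.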
A disruption prediction algorithm, called disruption prediction using random forests (DPRF), has run in real time in the DIII-D plasma control system (PCS) for more than 900 discharges. DPRF naturally provides a probability mapping associated with its predictions, i.e. the disruptivity signal, now incorporated in the DIII-D PCS. This paper discusses disruption prediction accomplishments in terms of shot-by-shot performance, by simulating alarms on each discharge as in the PCS framework. Depending on the optimised performance metric chosen to evaluate DPRF, we find that almost all disruptive discharges are detected, on average with a few hundred milliseconds of warning time, but at the cost of a high false-alarm rate. This performance does not satisfy the ITER requirement of a success rate above 95%, but that is not completely unexpected: DPRF is trained on many years of major disruptions occurring during the flattop phase of the plasma current in DIII-D, without any differentiation by cause. Furthermore, we find that DPRF produces a relatively high fraction of false alarms during the first 500 milliseconds after flattop onset. This subtle effect, more evident on discharges where DPRF runs in real time, can be mitigated by restricting the validity range of the predictions, and performance does improve. Even if presently burdened by some limitations, DPRF provides a significant and novel advantage: thanks to the feature contribution analysis (i.e. the identification of which signals contributed to triggering an alarm), it is possible to interpret and explain DPRF predictions. This is the first time such interpretability features have been exploited by a disruption predictor: by uncovering the causes of disruption events, a better understanding of disruption dynamics is achieved, and a clear path toward the design of disruption avoidance strategies can be provided.
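The shot-by-shot alarm simulation can be sketched as a threshold test on a disruptivity time trace, with a blanking window after flattop onset to suppress early false alarms. The threshold, hold time, and blanking duration below are illustrative assumptions, not DPRF's actual PCS settings, and the disruptivity trace is synthetic.

```python
import numpy as np

def first_alarm_time(t, disruptivity, threshold=0.7, hold=0.02, blank=0.5):
    """Return the first time at which the disruptivity has stayed at or above
    `threshold` for at least `hold` seconds, ignoring the first `blank`
    seconds after flattop onset; return None if no alarm fires.
    All parameter defaults are illustrative, not DPRF's real settings."""
    above_since = None
    for ti, d in zip(t, disruptivity):
        if ti < blank:          # blanking window: skip early false alarms
            continue
        if d >= threshold:
            if above_since is None:
                above_since = ti
            if ti - above_since >= hold:
                return ti
        else:
            above_since = None  # signal dropped below threshold: reset
    return None

# Synthetic disruptive shot: disruptivity ramps up late in the discharge,
# crossing the threshold a few hundred milliseconds before the end.
t = np.linspace(0.0, 2.0, 2001)
disr = np.clip((t - 1.5) / 0.3, 0.0, 1.0)
alarm = first_alarm_time(t, disr)
```

With these synthetic settings the alarm fires shortly after t ≈ 1.7 s, giving a warning time of a few hundred milliseconds before the end of the trace, consistent with the average warning time quoted above.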
Introduction: More than 93,000 cases of coronavirus disease have been reported worldwide. We describe the epidemiology, clinical course, and virologic characteristics of the first 12 U.S. patients with COVID-19. Methods: We collected demographic, exposure, and clinical information on 12 patients confirmed by CDC during January 20-February 5, 2020 to have COVID-19. Respiratory, stool, serum, and urine specimens were submitted for SARS-CoV-2 rRT-PCR testing, virus culture, and whole-genome sequencing. Results: Among the 12 patients, the median age was 53 years (range: 21-68); 8 were male, 10 had traveled to China, and 2 were contacts of patients in this series. Commonly reported signs and symptoms at illness onset were fever (n=7) and cough (n=8). Seven patients were hospitalized with radiographic evidence of pneumonia and demonstrated clinical or laboratory signs of worsening during the second week of illness. Three were treated with the investigational antiviral remdesivir. All patients had SARS-CoV-2 RNA detected in respiratory specimens, typically for 2-3 weeks after illness onset, with the lowest rRT-PCR Ct values often detected in the first week. SARS-CoV-2 RNA was detected after reported symptom resolution in seven patients. SARS-CoV-2 was cultured from respiratory specimens, and SARS-CoV-2 RNA was detected in stool from 7/10 patients. Conclusions: In 12 patients with mild to moderately severe illness, SARS-CoV-2 RNA and viable virus were detected early, and prolonged RNA detection suggests the window for diagnosis is long. Hospitalized patients showed signs of worsening in the second week after illness onset.
A new model of the heating, current drive, torque, and other effects of neutral beam injection on NSTX-U, based on neural networks, has been developed. The model has been trained and tested on results of the Monte Carlo code NUBEAM for the database of experimental discharges taken during the first operational campaign of NSTX-U. By projecting flux-surface quantities onto empirically derived basis functions, the model efficiently and accurately reproduces the behavior of both scalars, like the total neutron rate and shine-through, and profiles, like beam current drive and heating. The model has been tested on the NSTX-U real-time computer, demonstrating an execution time orders of magnitude faster than that of the Monte Carlo code, making it well suited for the iterative calculations needed to interpret experimental results, for optimization during scenario development activities, and for real-time plasma control applications. Simulation results are presented for a proposed nonlinear observer design that embeds the neural network calculations to estimate the evolution of the poloidal flux profile, as well as the fast-ion diffusivity.
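The projection of profiles onto empirically derived basis functions can be sketched with principal components extracted from a profile database. The radial grid, the synthetic profile shapes, and the choice of two basis functions below are illustrative assumptions, not the NSTX-U model's actual configuration; the real model is trained on NUBEAM output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "database": 200 radial profiles on 50 flux-surface grid points,
# built from two fixed shapes with random amplitudes (placeholder physics).
rho = np.linspace(0.0, 1.0, 50)
amps = rng.uniform(0.5, 2.0, size=(200, 2))
profiles = (amps[:, :1] * np.exp(-rho**2 / 0.1)
            + amps[:, 1:] * np.exp(-(rho - 0.5)**2 / 0.05))

# Empirically derived basis functions: leading right-singular vectors of the
# mean-subtracted database (principal components).
mean = profiles.mean(axis=0)
_, s, Vt = np.linalg.svd(profiles - mean, full_matrices=False)
basis = Vt[:2]                      # two basis functions suffice here

# Each profile is now represented by just 2 coefficients; a fast surrogate
# (e.g. a small neural network) would only need to predict these numbers.
coeffs = (profiles - mean) @ basis.T
recon = mean + coeffs @ basis
err = np.max(np.abs(recon - profiles))
```

Because the synthetic profiles lie exactly in a two-dimensional affine subspace, the reconstruction here is exact to machine precision; for real NUBEAM profiles the truncation would introduce a small, controllable error.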
Real-time feedback control based on machine learning algorithms (MLAs) was successfully developed and tested on DIII-D plasmas to avoid tearing modes and disruptions while maximizing plasma performance, as measured by the normalized plasma beta βN. The control uses MLAs that were trained with ensemble learning methods using only the data available to the real-time Plasma Control System (PCS) from several thousand DIII-D discharges. A “tearability” metric that quantifies the likelihood of the onset of 2/1 tearing modes in a given time window, and a “disruptivity” metric that quantifies the likelihood of the onset of plasma disruptions, were first tested off-line and then implemented on the PCS. A real-time control system based on these MLAs was successfully tested on DIII-D discharges, using feedback algorithms to maximize βN while avoiding tearing modes and to dynamically adjust the current ramp-down to avoid high-current disruptions.
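One plausible way a feedback algorithm could act on such a tearability metric is to cap the βN request as the estimated tearing likelihood rises. The function below is purely a hypothetical sketch with invented thresholds and gains; it is not the actual DIII-D PCS control logic.

```python
def beta_n_request(beta_n_target, tearability, t_low=0.3, t_high=0.7,
                   beta_n_floor=1.0):
    """Hypothetical proportional cap on the beta_N request: leave the target
    untouched while tearability is below `t_low`, scale the request down
    linearly as tearability rises from `t_low` to `t_high`, and never request
    less than `beta_n_floor`. All parameter values are illustrative."""
    if tearability <= t_low:
        return beta_n_target
    frac = min((tearability - t_low) / (t_high - t_low), 1.0)
    return max(beta_n_target - frac * (beta_n_target - beta_n_floor),
               beta_n_floor)
```

For example, with a target βN of 3.0, a low tearability estimate leaves the request at 3.0, while a tearability above the upper threshold drops it to the floor of 1.0, trading peak performance for stability.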