This contribution aims to improve existing scalings of the L-mode power decay length λ_q^omp (mapped to the outer midplane), especially for plasma configurations with strike points at the ITER-relevant location: closed vertical divertor targets. We propose 13 new λ_q^omp scalings based on data from the tokamaks JET, EAST, MAST, Alcator C-Mod and COMPASS, and validate them against the output of the 2D turbulence code HESEL. The analysis covers 500 divertor heat flux profiles (obtained by probes or IR cameras), measured in L-mode discharges in which 12 global plasma parameters (all well predictable) were varied. We find that the two previously published scalings (Eich 2013 J. Nucl. Mater. 438 S72; Scarabosio 2013 J. Nucl. Mater. 438 S426), which were based on outer-target data from AUG and JET, describe the JET, C-Mod and COMPASS profiles well. This holds not only at the outer horizontal and vertical targets but, surprisingly, also at the inner vertical targets. In contrast, the EAST, HESEL and especially MAST data are poorly described by these two scalings. We therefore derive 13 new scalings, which account for 85–92% of the measured λ_q^omp variability across all five tokamaks. Although each scaling is based on a different parameter combination, their predictions for the ITER and COMPASS-Upgrade tokamaks are very similar. Just before the L–H transition in the ITER baseline scenario, the presented scalings predict λ_q^omp = 3.0 ± 0.5 mm. For the COMPASS-Upgrade tokamak, all the scalings predict λ_q^omp = 2.1 ± 0.5 mm, with the single exception of the scaling based on the stored plasma energy, which predicts only 1.2 mm for both tokamaks. We encourage the reader to use as many of these scalings as possible, depending on the available data. For attached plasmas, and under significant assumptions, our results imply a steady-state surface-perpendicular heat flux of around 10 MW/m² for ITER and 20 MW/m² for COMPASS-Upgrade.
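To make the link between λ_q^omp and the quoted steady-state heat fluxes concrete, the following is a minimal sketch of the standard attached-plasma estimate: the power reaching the divertor spreads over a toroidal ribbon whose width is the outer-midplane decay length magnified by the target flux expansion, and the surface-perpendicular flux is reduced by the field-to-surface inclination. The function and all input values are illustrative placeholders, not the paper's actual model or parameters.

```python
import math

def target_heat_flux(p_div_mw, r_target_m, lam_q_omp_mm, f_x, alpha_deg):
    """Rough attached-plasma estimate of the steady-state
    surface-perpendicular divertor heat flux (MW/m^2).

    Assumes p_div_mw flows into a toroidal ribbon of width
    lam_q_omp_mm * f_x (outer-midplane decay length mapped to the
    target by the flux expansion f_x) at major radius r_target_m,
    inclined at the field-to-surface angle alpha_deg.  Power
    spreading beyond lambda_q and radiative losses are neglected.
    """
    lam_target_m = lam_q_omp_mm * 1e-3 * f_x   # wetted width at the target
    wetted_area = 2 * math.pi * r_target_m * lam_target_m
    return p_div_mw / wetted_area * math.sin(math.radians(alpha_deg))
```

Halving λ_q^omp doubles the predicted flux, which is why the 1.2 mm outlier prediction of the stored-energy scaling matters for target design.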
We implement an all-optical setup demonstrating kernel-based quantum machine learning for two-dimensional classification problems. In this hybrid approach, kernel evaluations are outsourced to projective measurements on suitably designed quantum states encoding the training data, while the model training is processed on a classical computer. Our two-photon proposal encodes data points in a discrete, eight-dimensional feature Hilbert space. In order to maximize the application range of the deployable kernels, we optimize feature maps towards the resulting kernels' ability to separate points, i.e., their "resolution," under the constraint of a finite, fixed Hilbert-space dimension. Implementing these kernels, our setup delivers viable decision boundaries for standard nonlinear supervised classification tasks in feature space. We demonstrate such kernel-based quantum machine learning using specialized multiphoton quantum optical circuits. The deployed kernel exhibits exponentially better scaling in the required number of qubits than a direct generalization of kernels described in the literature.

Many contemporary computational problems (such as drug design, traffic control, logistics, automated driving, stock market analysis, automated medical examination, and materials engineering) routinely require optimization over huge amounts of data [1]. While these highly demanding problems can often be approached by suitable machine learning (ML) algorithms, in many relevant cases the underlying calculations would take prohibitively long. Quantum ML (QML) comes with the promise of running these computations more efficiently (in some cases exponentially faster) by complementing ML algorithms with quantum resources. The resulting speed-up can then be associated with the collective processing of quantum information mediated by quantum entanglement.
There are various approaches to QML, including linear-algebra solvers, sampling, quantum optimization, and the use of quantum circuits as trainable models for inference (see, e.g., Refs. [2–18]). A strong focus in QML has been on deep learning and neural networks. Independently, kernel-based approaches to supervised QML, where computational kernel evaluations are replaced by suitable quantum measurements, have recently been proposed [10,12] as interesting alternatives. Combining classical and quantum computations, they add to the family of quantum-classical hybrid algorithms. Kernel-based QML (KQML) is particularly attractive for implementation on linear-optics platforms, as quantum memories are not required. Here, we thus investigate the prospects of KQML with multiphoton quantum optical circuits. To this end, we propose kernels that scale exponentially better in the number of required qubits than a direct generalization of kernels previously discussed in the literature [12]. We also realize this scheme in a proof-of-principle experiment demonstrating its suitability on the platform of linear optics, thus proving its practical applicability with the current state of quantum technologies. Let us expla...
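The hybrid workflow described above can be sketched classically: a feature map sends each 2D data point to a normalized state vector in a small Hilbert space, the kernel is the squared overlap |⟨φ(a)|φ(b)⟩|² (exactly the quantity a projective measurement on the two encoded states would estimate), and the model is then trained on a classical computer from the resulting Gram matrix. The specific feature map and the kernel-ridge trainer below are hypothetical stand-ins chosen to illustrate the construction, not the optimized photonic feature maps or the training method of the paper.

```python
import numpy as np

def feature_state(x, dim=8):
    """Map a 2D point x = (x1, x2) to a normalized state vector in a
    dim-dimensional Hilbert space (dim=8 mirrors the paper's
    eight-dimensional feature space).  Illustrative choice only."""
    k = np.arange(dim)
    amps = np.cos(k * x[0]) + 1j * np.sin(k * x[1])
    return amps / np.linalg.norm(amps)

def quantum_kernel(a, b):
    """k(a, b) = |<phi(a)|phi(b)>|^2 -- the state overlap that the
    quantum hardware would estimate by projective measurement."""
    return abs(np.vdot(feature_state(a), feature_state(b))) ** 2

def train_kernel_classifier(X, y, reg=1e-3):
    """Classical half of the hybrid scheme: kernel ridge fit on the
    measured Gram matrix, returning a sign classifier."""
    G = np.array([[quantum_kernel(a, b) for b in X] for a in X])
    alpha = np.linalg.solve(G + reg * np.eye(len(X)), y)
    return lambda x: np.sign(sum(a * quantum_kernel(xi, x)
                                 for a, xi in zip(alpha, X)))
```

Because the model only ever touches the data through k(a, b), swapping the classical overlap computation for a measurement on photonic states leaves the training loop unchanged; that separation is what makes the approach hardware-friendly.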
The ATLAS Forward Proton (AFP) detector is intended to measure protons scattered at small angles from the ATLAS interaction point. To this end, a combination of 3D silicon pixel tracking modules and quartz-Cherenkov time-of-flight (ToF) detectors is installed 210 m from the interaction point on both sides of ATLAS. Beam tests with an AFP prototype detector combining the tracking and timing sub-detectors with a common readout were performed at the CERN-SPS test-beam facility in November 2014 and September 2015 to complete the system integration and to study the detector performance. The successful tracking-timing integration was demonstrated. Good tracker hit efficiencies above 99.9% at a sensor tilt of 14°, as foreseen for AFP, were observed. Spatial resolutions in the short pixel direction (50 µm pitch) of 5.5 ± 0.5 µm per pixel plane and of 2.8 ± 0.5 µm for the full four-plane tracker at 14° were found, largely surpassing the AFP requirement of 10 µm. The timing detector also showed good hit efficiencies, above 99%, and a full-system time resolution of 35 ± 6 ps was found for the ToF prototype detector with two quartz bars in line (half the final AFP size) without dedicated optimisation, fulfilling the requirements for initial low-luminosity AFP runs.
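The improvement expected from going beyond the half-size two-bar prototype follows the usual statistics of averaging repeated timing measurements of the same proton. The sketch below shows only that idealized 1/√N scaling; it is not a model of the AFP electronics, and correlated contributions (clock distribution, shared readout jitter) would keep a real system above this bound.

```python
import math

def combined_resolution(sigma_single_ps, n_measurements):
    """Idealized time resolution (ps) from averaging n independent
    measurements of the same particle: sigma / sqrt(n).  Correlated
    jitter is neglected, so this is a lower bound for real systems."""
    return sigma_single_ps / math.sqrt(n_measurements)
```

For example, if the measured 35 ps of the two-bar prototype scaled ideally, doubling to the full four-bar configuration would approach 35/√2 ≈ 25 ps; the actual gain depends on the per-bar resolutions and the correlated jitter budget.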