Background: The mass, or binding energy, is the basic property of the atomic nucleus. It determines nuclear stability as well as reaction and decay rates. Quantifying nuclear binding is important for understanding the origin of the elements in the universe. The astrophysical processes responsible for nucleosynthesis in stars often take place far from the valley of stability, where experimental masses are not known. In such cases, missing nuclear information must be provided by theoretical predictions involving extreme extrapolations. To take full advantage of the information contained in mass model residuals, i.e., deviations between experimental and calculated masses, one can utilize Bayesian machine learning techniques to improve predictions. Purpose: In order to improve the quality of model-based predictions of nuclear properties of rare isotopes far from stability, we consider the information contained in the residuals in the regions where experimental information exists. As a case in point, we discuss two-neutron separation energies S2n of even-even nuclei. Through this observable, we assess the predictive power of global mass models toward more unstable neutron-rich nuclei and provide uncertainty quantification of predictions. Methods: We consider 10 global models based on nuclear density functional theory with realistic energy density functionals, as well as two more phenomenological mass models. The emulators of S2n residuals and the credibility intervals (Bayesian confidence intervals) defining theoretical error bars are constructed using Bayesian Gaussian processes and Bayesian neural networks. We use a large training dataset of nuclei whose masses were measured before 2003; the testing datasets contain those exotic nuclei whose masses were determined after 2003. Having established the statistical methodology and its parameters, we carry out extrapolations toward the two-neutron drip line. Results: While both Gaussian processes and Bayesian neural networks reduce the root-mean-square (rms) deviation from experiment significantly, the Gaussian processes offer a better and much more stable performance. The increase in the predictive power of microscopic models aided by the statistical treatment is remarkable: the resulting rms deviations from experiment on the testing dataset are similar to those of the more phenomenological models. We find that Bayesian neural network results are prone to instabilities caused by the large number of parameters of this method. Moreover, since the classical sigmoid activation function used in this approach has tails that do not vanish, it is poorly suited for bounded extrapolations. The empirical coverage probability curves we obtain match the reference values very well, in most cases in a slightly conservative way, which is highly desirable to ensure honest uncertainty quantification. The estimated credibility intervals on predictions make it possible to evaluate the predictive power of individual models and to make quantified predictions using groups of models. Conclusions: The propose...
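To make the emulator construction concrete, here is a minimal sketch (not the authors' actual pipeline) of training a Gaussian process on S2n residuals over proton and neutron numbers (Z, N) with scikit-learn; the training data, the 10.0 MeV raw model value, and the kernel settings are all placeholder assumptions.

```python
# Minimal sketch: Gaussian-process emulator of S2n residuals over (Z, N).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Placeholder training data: (Z, N) pairs and residuals
# delta = S2n_exp - S2n_model (MeV) for nuclei measured before 2003.
ZN_train = np.array([[20, 28], [20, 30], [28, 32], [28, 34], [50, 70]])
delta_train = np.array([0.45, -0.21, 0.10, 0.33, -0.05])

# Anisotropic RBF kernel with separate length scales in Z and N, plus a
# white-noise term standing in for experimental and model scatter.
kernel = 1.0 * RBF(length_scale=[2.0, 2.0]) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(ZN_train, delta_train)

# Corrected prediction for an unmeasured nucleus: raw model value plus the
# emulated residual; the GP standard deviation provides a 1-sigma error bar.
mean, std = gp.predict(np.array([[20, 40]]), return_std=True)
s2n_corrected = 10.0 + mean[0]   # 10.0 MeV is a placeholder raw model S2n
print(f"S2n = {s2n_corrected:.2f} +/- {std[0]:.2f} MeV")
```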
The region of heavy calcium isotopes forms the frontier of experimental and theoretical nuclear structure research, where the basic concepts of nuclear physics are put to a stringent test. The recent discovery of the extremely neutron-rich nuclei around 60Ca [1] and the experimental determination of masses for 55-57Ca [2] provide unique information about the binding-energy surface in this region. To assess the impact of these experimental discoveries on the extent of the nuclear landscape, we use global mass models and statistical machine learning to make predictions, with quantified levels of certainty, for bound nuclides between Si and Ti. Using a Bayesian model averaging analysis based on Gaussian-process-based extrapolations, we introduce the posterior probability p_ex for each nucleus to be bound to neutron emission. We find that the extrapolations for drip-line locations, at which nuclear binding ends, are consistent across the global mass models used, in spite of significant variations between their raw predictions. In particular, given the current experimental information and current global mass models, we predict that 68Ca has an average posterior probability p_ex ≈ 76% of being bound to two-neutron emission, while the nucleus 61Ca is likely to decay by emitting a neutron (p_ex ≈ 46%).
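As an illustration of how such a posterior probability can arise, the sketch below assumes each statistically corrected model yields a Gaussian posterior for the separation energy, so that p_ex = P(S > 0) = Phi(mu/sigma), and mixes the models with hypothetical Bayesian model averaging weights; all numbers are invented.

```python
# Sketch: posterior probability of being bound to neutron emission,
# averaged over models. All posteriors and weights are placeholders.
from scipy.stats import norm

def p_bound(mu, sigma):
    """P(S > 0) for a Gaussian posterior N(mu, sigma^2) of S (MeV)."""
    return norm.cdf(mu / sigma)

# Hypothetical per-model posteriors (mu, sigma) for S_2n of one nucleus,
# and Bayesian-model-averaging weights summing to 1.
posteriors = [(0.8, 1.1), (0.3, 0.9), (-0.2, 1.0)]
weights = [0.5, 0.3, 0.2]

p_ex = sum(w * p_bound(mu, s) for w, (mu, s) in zip(weights, posteriors))
print(f"p_ex = {p_ex:.2f}")
```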
Background: The chart of the nuclides is limited by the particle drip lines, beyond which nuclear stability to proton or neutron emission is lost. Predicting the range of particle-bound isotopes poses an appreciable challenge for nuclear theory, as it involves extreme extrapolations of nuclear masses well beyond the regions where experimental information is available. Still, quantified extrapolations are crucial for a wide variety of applications, including the modeling of stellar nucleosynthesis. Purpose: We use microscopic global nuclear mass models, current mass data, and Bayesian methodology to provide quantified predictions of proton and neutron separation energies, as well as Bayesian probabilities of existence, throughout the nuclear landscape all the way to the particle drip lines. Methods: We apply nuclear density functional theory with several energy density functionals. We also consider two global mass models often used in astrophysical nucleosynthesis simulations. To account for uncertainties, Bayesian Gaussian processes are trained on the separation-energy residuals of each individual model, and the resulting predictions are combined via Bayesian model averaging. This framework makes it possible to account for systematic and statistical uncertainties and to propagate them to extrapolative predictions. Results: We establish and characterize the drip-line regions, where the probability that a nucleus is particle-bound decreases from 1 to 0. In these regions, we provide quantified predictions for one- and two-nucleon separation energies. According to our Bayesian model averaging analysis, 7759 nuclei with Z ≤ 119 have a probability of existence ≥ 0.5. Conclusions: The extrapolation results obtained in this study will be put through stringent tests when new experimental information on the existence and masses of exotic nuclei becomes available. In this respect, the quantified landscape of nuclear existence obtained here should be viewed as a dynamical prediction that will be fine-tuned as new experimental information and improved global mass models become available.
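The model-combination step can be sketched schematically, under the common assumption that posterior model weights are proportional to the models' evidences; the log-evidences, means, and variances below are placeholders, and the mixture variance follows the law of total variance.

```python
# Schematic Bayesian model averaging: posterior model weights from
# (log-)evidences, then a mixture prediction. Values are illustrative only.
import numpy as np

log_evidence = np.array([-120.4, -121.0, -123.7])  # placeholder log p(D|M_k)
log_prior = np.log(np.ones(3) / 3)                 # uniform model prior

logw = log_evidence + log_prior
weights = np.exp(logw - logw.max())
weights /= weights.sum()                           # posterior P(M_k|D)

means = np.array([1.2, 0.9, 0.4])      # per-model posterior means (MeV)
variances = np.array([0.8, 1.0, 1.3]) ** 2

bma_mean = weights @ means
# Law of total variance: within-model spread plus between-model spread.
bma_var = weights @ variances + weights @ (means - bma_mean) ** 2
print(weights, bma_mean, np.sqrt(bma_var))
```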
Until recently, uncertainty quantification in low-energy nuclear theory was typically performed using frequentist approaches. In the last few years, however, the field has shifted toward Bayesian statistics for evaluating confidence intervals. Although there are statistical arguments to prefer the Bayesian approach, no direct comparison has been available. In this work, we compare, directly and systematically, the frequentist and Bayesian approaches to quantifying uncertainties in direct nuclear reactions. Starting from identical initial assumptions, we determine confidence intervals associated with the elastic and transfer processes for both methods, and we evaluate them against data via a comparison of empirical coverage probabilities. As expected, the frequentist approach is not as flexible as the Bayesian approach in exploring parameter space and often ends up in a different minimum. We also show that the two methods produce significantly different correlations. In the end, the frequentist approach yields significantly narrower uncertainties on the considered observables than the Bayesian one. Our study demonstrates that the uncertainties on the reaction observables obtained within the Bayesian approach represent reality more accurately than the much narrower uncertainties obtained using the standard frequentist approach.
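The empirical coverage comparison can be summarized in a few lines: for each nominal credibility level, count how often the reference value falls inside the corresponding interval. The sketch below uses synthetic Gaussian predictions, not the reaction observables of this study.

```python
# Sketch: empirical coverage probability for central Gaussian intervals.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
truth = rng.normal(size=500)
mu = truth + rng.normal(scale=0.3, size=500)  # predictions with noise
sigma = np.full(500, 0.3)                     # reported 1-sigma errors

for level in (0.5, 0.68, 0.9, 0.95):
    z = norm.ppf(0.5 + level / 2)             # half-width in sigma units
    inside = np.abs(truth - mu) <= z * sigma
    print(f"nominal {level:.2f} -> empirical {inside.mean():.2f}")
```

Well-calibrated uncertainties put the empirical values on (or slightly above, for conservative intervals) the nominal ones; over-narrow frequentist intervals show up as empirical coverage well below the reference line.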
Background: The limits of the nuclear landscape are determined by nuclear binding energies. Beyond the proton drip lines, where the separation energy becomes negative, there is not enough binding energy to prevent protons from escaping the nucleus. Predicting the properties of unstable nuclear states in the vast territory of proton emitters poses an appreciable challenge for nuclear theory, as it often involves far extrapolations. In addition, significant discrepancies between nuclear models in the proton-rich territory call for quantified predictions. Purpose: With the help of Bayesian methodology, we mix a family of nuclear mass models corrected with statistical emulators trained on experimental mass measurements in the proton-rich region of the nuclear chart. Methods: Separation energies were computed within nuclear density functional theory using several Skyrme and Gogny energy density functionals. We also considered mass predictions based on two models used in astrophysical studies. Quantified predictions were obtained for each model using Bayesian Gaussian processes trained on separation-energy residuals and combined via Bayesian model averaging. Results: We obtained good agreement between the averaged predictions of the statistically corrected models and experiment. In particular, we quantified model results for one- and two-proton separation energies and derived probabilities of proton emission. This information enabled us to produce a quantified landscape of proton-rich nuclei. The most promising candidates for two-proton decay studies have been identified. Conclusions: The methodology used in this work has broad applications to model-based extrapolations of various nuclear observables. It also provides a reliable uncertainty quantification of theoretical predictions.
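One hedged way to read "most promising candidates for two-proton decay": nuclei bound to one-proton emission (S_p > 0) but unbound to two-proton emission (S_2p < 0). The sketch below scores invented nuclei under Gaussian posteriors, treating S_p and S_2p as independent for simplicity, which the actual analysis need not do.

```python
# Sketch: scoring candidate true two-proton emitters from Gaussian
# posteriors for the one- and two-proton separation energies.
from scipy.stats import norm

nuclei = {  # (Z, N): (mu_Sp, sig_Sp, mu_S2p, sig_S2p) in MeV, placeholders
    (26, 19): (0.6, 0.4, -1.1, 0.5),
    (30, 24): (0.2, 0.5, -0.4, 0.6),
}

for zn, (mu_p, s_p, mu_2p, s_2p) in nuclei.items():
    # P(S_p > 0) * P(S_2p < 0), assuming independence for illustration.
    p_candidate = norm.cdf(mu_p / s_p) * norm.cdf(-mu_2p / s_2p)
    print(zn, f"P(2p-emitter) ~ {p_candidate:.2f}")
```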
We study the information content of nuclear masses from the perspective of global models of nuclear binding energies. To this end, we employ a number of statistical methods and diagnostic tools, including Bayesian calibration, Bayesian model averaging, chi-square correlation analysis, principal component analysis, and empirical coverage probability. Using a Bayesian framework, we investigate the structure of the four-parameter liquid drop model by considering discrepant mass domains for calibration. We then use the chi-square correlation framework to analyze the 14-parameter Skyrme energy density functional calibrated using homogeneous and heterogeneous datasets. We show that a quite dramatic parameter reduction can be achieved in both cases. The advantage of Bayesian model averaging for improving uncertainty quantification is demonstrated. The statistical approaches used are described pedagogically; in this context, this work can serve as a guide for future applications.
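To illustrate the kind of parameter reduction a principal component analysis can reveal, the sketch below inspects the singular-value spectrum of a sensitivity (Jacobian) matrix: the number of directions carrying most of the sensitivity estimates how many parameter combinations the data actually constrain. The Jacobian here is random stand-in data with artificially decaying column scales, not a real Skyrme functional's.

```python
# Sketch: effective number of constrained parameter directions via SVD
# of a (stand-in) sensitivity matrix J_ij = d(observable_i)/d(param_j).
import numpy as np

rng = np.random.default_rng(1)
# 200 observables x 14 parameters, with rapidly decaying column scales.
J = rng.normal(size=(200, 14)) @ np.diag(np.geomspace(1.0, 1e-3, 14))

s = np.linalg.svd(J, compute_uv=False)
explained = s**2 / np.sum(s**2)
n_eff = int(np.searchsorted(np.cumsum(explained), 0.99) + 1)
print(f"directions carrying 99% of the sensitivity: {n_eff} of 14")
```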
In two recent papers (Biermé et al., 2013; Nourdin and Peccati, 2015), sharp general quantitative bounds are given to complement the well-known fourth moment theorem of Nualart and Peccati, by which a sequence in a fixed Wiener chaos converges to a normal law if and only if its fourth cumulant converges to 0. The bounds show that the speed of convergence is precisely of the order of the maximum of the fourth cumulant and the absolute value of the third moment (cumulant). Specializing to the case of normalized centered quadratic variations for stationary Gaussian sequences, we show that a third moment theorem holds: convergence occurs if and only if the sequence's third moments tend to 0. This is proved for sequences with general decreasing covariance by using the result of (Nourdin and Peccati, 2015) and finding the exact speed of convergence to 0 of the quadratic variation's third and fourth cumulants. The result of (Nourdin and Peccati, 2015) also allows us to derive quantitative estimates for the speeds of convergence in a class of log-modulated covariance structures, which puts in perspective the notion of critical Hurst parameter when studying the convergence of fractional Brownian motion's quadratic variation. We also study the speed of convergence when the limit is not Gaussian but rather a second-Wiener-chaos law. Using a log-modulated class of spectral densities, we recover a classical result of Dobrushin-Major/Taqqu whereby the limit is a Rosenblatt law, and we provide new convergence speeds. The conclusion in this case is that the price to pay for obtaining a Rosenblatt limit despite a slowly varying modulation is a very slow convergence speed, roughly of the same order as the modulation.

1991 Mathematics Subject Classification: 60G15, 60F05, 60H07, 60G22.
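Schematically, the quantitative bound described above can be written as follows, for F_n in a fixed Wiener chaos normalized to unit variance; the precise constants and hypotheses are those of the cited papers.

```latex
% Order of convergence to the normal law, under the normalization
% E[F_n^2] = 1; constants and exact hypotheses as in
% (Nourdin and Peccati, 2015).
\[
  d_{\mathrm{TV}}\bigl(F_n,\;\mathcal{N}(0,1)\bigr)
  \;\asymp\; \max\!\bigl(\,\lvert \kappa_3(F_n) \rvert,\;\kappa_4(F_n)\,\bigr).
\]
```

For the normalized quadratic variations considered here, tracking the third and fourth cumulants alone therefore determines the exact speed of convergence.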
Background: Negative nurse work environments have been associated with nurse bullying and poor nurse health. However, few studies have examined the influence of nurse bullying on actual patient outcomes. Purpose: The purpose of the study was to examine the association between nurse-reported bullying and documented nursing-sensitive patient outcomes. Methods: Nurses (n = 432) in a large US hospital responded to a survey on workplace bullying. Unit-level data for five adverse patient events and nurse staffing were acquired from the National Database of Nursing Quality Indicators. Generalized linear models were used to examine the association between bullying and adverse patient events. A Bayesian regression analysis was used to confirm the findings. Results: After controlling for nurse staffing and qualifications, nurse-reported bullying was significantly associated with the incidence of central-line-associated bloodstream infections (P < .001). Conclusions: Interventions to address bullying, a malleable aspect of the nurse practice environment, may help to reduce adverse patient events.
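For readers curious what such a unit-level analysis might look like in code, here is a hedged sketch of a Poisson generalized linear model with a patient-days exposure offset; the variable names, data, and model form are illustrative assumptions, not the study's actual analysis of the NDNQI data.

```python
# Sketch: Poisson GLM relating unit-level adverse-event counts to a
# bullying score, controlling for staffing, with patient-days as exposure.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
units = pd.DataFrame({
    "events": rng.poisson(2, size=40),        # e.g., CLABSI counts per unit
    "bullying": rng.normal(0, 1, size=40),    # unit-mean bullying score
    "staffing": rng.normal(5, 1, size=40),    # nurse hours per patient day
    "patient_days": rng.integers(500, 2000, 40),
})

model = smf.glm(
    "events ~ bullying + staffing",
    data=units,
    family=sm.families.Poisson(),
    offset=np.log(units["patient_days"]),
).fit()
print(model.summary())
```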