The Joint Evaluated Fission and Fusion nuclear data library JEFF-3.3 is described. New evaluations for neutron-induced interactions with the major actinides 235U, 238U and 239Pu, with 241Am, and with 23Na, 59Ni, Cr, Cu, Zr, Cd, Hf, W, Au, Pb and Bi are presented. The library includes new fission yields, prompt fission neutron spectra and average numbers of neutrons per fission. In addition, new data for radioactive decay, thermal neutron scattering, gamma-ray emission, neutron activation, delayed neutrons and displacement damage are presented. JEFF-3.3 was complemented by files from the TENDL project; the libraries for photon-, proton-, deuteron-, triton-, helion- and alpha-particle-induced reactions are taken from TENDL-2017. The demand for uncertainty quantification in modeling led to many new covariance data for the evaluations. A comparison between model calculations using the JEFF-3.3 library and benchmark experiments for criticality, delayed neutron yields, shielding and decay heat reveals that JEFF-3.3 performs very well for a wide range of nuclear technology applications, in particular nuclear energy.
Two methodologies for propagating uncertainties on the nuclide inventory in combined Monte Carlo spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random-sampling techniques (the uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as of uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP-ACAB system, which combines the neutron transport code MCNP-4C and the inventory code ACAB. A high burn-up benchmark problem is used to test the MCNP-ACAB performance in inventory predictions without uncertainties, and good agreement is found with the results of other participants. The same benchmark is then used to assess the impact of nuclear data uncertainties and statistical flux errors in high burn-up applications. A detailed calculation is performed to evaluate the effect of cross-section uncertainties on the inventory prediction, taking into account the temporal evolution of the neutron flux level and spectrum. Very large uncertainties are found at the unusually high burn-up of this exercise (800 MWd/kgHM). To compare the impact of the statistical errors in the calculated flux with that of the cross-section uncertainties, a simplified problem with a constant neutron flux level and spectrum is considered. It is shown that, provided the flux statistical deviations in the Monte Carlo transport calculation do not exceed a given value, the effect of the flux errors on the calculated isotopic inventory is negligible (even at very high burn-up) compared to the effect of the large cross-section uncertainties currently available in the data files.
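The random-sampling (uncertainty Monte Carlo) idea can be illustrated with a toy two-nuclide depletion chain: perturbed cross-section sets are drawn from a covariance matrix, the depletion is repeated for each set, and the spread of the resulting inventories gives the propagated uncertainty. This is a minimal sketch only, not the MCNP-ACAB implementation; the cross sections, covariance matrix and flux below are invented illustrative numbers, not values from any evaluated library.

```python
import numpy as np

rng = np.random.default_rng(42)

# Nominal one-group capture cross sections (barns) for a toy chain A -> B,
# and a toy relative covariance matrix (all values are illustrative only).
sigma_nominal = np.array([1.0, 50.0])          # barns
rel_cov = np.array([[0.04, 0.01],
                    [0.01, 0.09]])             # relative covariance

flux = 1.0e14                                  # n/cm^2/s, held constant here
t = 3.0e7                                      # irradiation time (s)
barn = 1.0e-24                                 # cm^2

def deplete(sigma):
    """Analytic Bateman solution for the two-nuclide chain A -> B."""
    lam_a = sigma[0] * barn * flux             # effective removal rate of A
    lam_b = sigma[1] * barn * flux             # effective removal rate of B
    n_a = np.exp(-lam_a * t)
    n_b = lam_a / (lam_b - lam_a) * (np.exp(-lam_a * t) - np.exp(-lam_b * t))
    return np.array([n_a, n_b])

# Random-sampling loop: draw perturbed cross-section sets from the
# covariance matrix, deplete with each, and collect the inventory spread.
cov = rel_cov * np.outer(sigma_nominal, sigma_nominal)
samples = rng.multivariate_normal(sigma_nominal, cov, size=1000)
inventories = np.array([deplete(s) for s in samples])

mean = inventories.mean(axis=0)
std = inventories.std(axis=0)
print("mean inventory        :", mean)
print("relative uncertainty  :", std / mean)
```

In a production system the `deplete` step would be a full transport-plus-depletion calculation, so the cost of the sampling loop is what motivates the sensitivity/uncertainty alternative.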
The impact of current nuclear data library covariances, such as those in ENDF/B-VII.1, JEFF-3.2, JENDL-4.0, SCALE and TENDL, on calculations for relevant current reactors is presented in this work. The uncertainties due to nuclear data are calculated for existing PWR and BWR fuel assemblies (with burn-up up to 40 GWd/tHM, followed by 10 years of cooling time) and for a simplified PWR full-core model (without burn-up) for quantities such as k∞, macroscopic cross sections, pin power and isotopic inventory. The method of uncertainty propagation is based on random sampling of nuclear data, either from covariance files or directly from basic parameters. Additionally, possible biases on the calculated quantities, such as those from the self-shielding treatment, are investigated. Different calculation schemes are used, based on CASMO, SCALE, DRAGON, MCNP or FISPACT-II, thus simulating real-life assignments for technical-support organizations. The outcome of such a study is a comparison of uncertainties with two consequences. First, although this study is not expected to yield identical results across the calculation schemes involved, it provides insight into what can happen when calculating uncertainties and gives some perspective on the range of validity of these uncertainties. Second, it allows us to draw a picture of the current state of knowledge, using existing nuclear data library covariances and current methods.
The TALYS Evaluated Nuclear Data Library (TENDL) has had eight releases since 2008. Considerable experience has been acquired in the production of such a general-purpose nuclear data library, based on feedback from users, evaluators and processing experts. The backbone of this achievement is simple and robust: completeness, quality and reproducibility. As TENDL is extensively used in many fields of application, it is necessary to understand its strong points and remaining weaknesses. Ultimately, the essential knowledge is not the TENDL library itself, but rather the underlying method and tools: the library becomes a side product, and the effort is focused on the evaluation knowledge. The future of this approach is discussed, with the hope of even greater success in the near future.
Several methodologies using different levels of approximation have been developed for propagating nuclear data uncertainties in nuclear burn-up simulations. Most methods fall into two broad classes: Monte Carlo approaches, which are exact apart from statistical uncertainties but require additional computation time, and first-order perturbation theory approaches, which are efficient for moderate numbers of response functions but applicable only when the nuclear data uncertainties are sufficiently small. Some methods neglect isotopic composition uncertainties induced by the depletion steps of the simulations, others neglect neutron flux uncertainties, and the accuracy of a given approximation is often very hard to quantify. To get a better sense of the impact of these approximations, this work compares results obtained with different approximate methodologies against an exact method, namely the NUDUNA Monte Carlo based approach developed by AREVA GmbH. In addition, the impact of different covariance data is studied by comparing two of the presently most complete nuclear data covariance libraries (ENDF/B-VII.1 and SCALE 6.0), which reveals a strong dependence of the uncertainty estimates on the source of covariance data. The burn-up benchmark Exercise I-1b proposed by the OECD expert group "Benchmarks for Uncertainty Analysis in Modeling (UAM) for the Design, Operation and Safety Analysis of LWRs" is studied as an example application. The burn-up simulations are performed with the SCALE 6.0 tool suite.
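The two method classes can be contrasted on a toy response function: first-order perturbation propagates the covariance through the response sensitivity (the "sandwich rule"), while Monte Carlo samples the input and takes the spread of the evaluated response. The sketch below uses an invented exponential response standing in for a burn-up quantity; it is not the NUDUNA or SCALE implementation, and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear response: N = exp(-sigma * phi * t), standing in for an
# end-of-step inventory as a function of one cross section (illustrative).
phi_t = 0.1                                    # flux * time, lumped
sigma0 = 2.0                                   # nominal cross section
var_sigma = (0.1 * sigma0) ** 2                # 10 % relative uncertainty

def response(sigma):
    return np.exp(-sigma * phi_t)

# First-order perturbation ("sandwich rule"): u_R^2 = S * var * S,
# with the sensitivity S = dR/dsigma evaluated at the nominal value.
S = -phi_t * response(sigma0)
u_perturbation = abs(S) * np.sqrt(var_sigma)

# Monte Carlo: sample sigma, evaluate the response, take the sample std.
samples = rng.normal(sigma0, np.sqrt(var_sigma), size=200_000)
u_monte_carlo = response(samples).std()

print(f"perturbation estimate: {u_perturbation:.5f}")
print(f"Monte Carlo estimate : {u_monte_carlo:.5f}")
```

For this small relative uncertainty the two estimates agree to within a few percent; as `var_sigma` grows, the nonlinearity of the response makes the first-order estimate drift away from the Monte Carlo result, which is the regime the abstract warns about.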