We present a novel numerical routine (oscode) with C++ and Python interfaces for the efficient solution of one-dimensional, second-order ordinary differential equations with rapidly oscillating solutions. The method is based on a Runge-Kutta-like stepping procedure that uses the Wentzel-Kramers-Brillouin (WKB) approximation to skip regions of integration where the characteristic frequency varies slowly. Where this is not the case, the method switches to a made-to-measure Runge-Kutta integrator that minimises the total number of function evaluations. We demonstrate the effectiveness of the method with example solutions of the Airy equation and of an equation exhibiting a burst of oscillations, discussing its error properties in detail. We then apply the method to physical systems. First, the one-dimensional, time-independent Schrödinger equation is solved as part of a shooting method to search for the energy eigenvalues of a potential with quartic anharmonicity. Then, the method is used to solve the Mukhanov-Sasaki equation describing the evolution of cosmological perturbations, and the primordial power spectrum of the perturbations is computed in different cosmological scenarios. We compare the performance of our solver in calculating the primordial power spectrum of scalar perturbations with that of BINGO, an efficient code specifically designed for such applications, and find that our method performs better.
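As a hedged illustration of the idea behind WKB-based stepping (a minimal sketch, not oscode itself), the example below compares the closed-form WKB approximation for the Airy equation x'' + t x = 0, whose frequency ω(t) = √t varies ever more slowly at large t, against the exact oscillatory solution Ai(−t):

```python
import numpy as np
from scipy.special import airy

def wkb_airy(t):
    # Leading-order WKB approximation to Ai(-t) for x'' + t x = 0,
    # with omega(t) = sqrt(t) and phase integral (2/3) t^{3/2}:
    #   x(t) ~ pi^{-1/2} t^{-1/4} sin((2/3) t^{3/2} + pi/4)
    return np.sin(2.0 / 3.0 * t**1.5 + np.pi / 4) / (np.sqrt(np.pi) * t**0.25)

t = np.linspace(20.0, 40.0, 5)
exact = airy(-t)[0]        # Ai(-t): an exact oscillatory solution
approx = wkb_airy(t)
print(np.max(np.abs(approx - exact)))   # small absolute error at large t
```

Because the WKB error shrinks as the frequency varies more slowly, a solver can take one large WKB step across an interval containing dozens of oscillations, instead of resolving each oscillation with many Runge-Kutta steps.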
Physical theories that depend on many parameters or are tested against data from many different experiments pose unique challenges to statistical inference. Many models in particle physics, astrophysics and cosmology fall into one or both of these categories. These issues are often sidestepped with statistically unsound ad hoc methods, such as intersecting parameter intervals estimated by multiple experiments, or random and grid sampling of model parameters. Whilst these methods are easy to apply, they exhibit pathologies even in low-dimensional parameter spaces, and quickly become problematic to use and interpret in higher dimensions. In this article we give clear guidance for going beyond these procedures, suggesting, where possible, simple methods for performing statistically sound inference, and recommending readily available software tools and standards that can assist in doing so. Our aim is to provide any physicist lacking comprehensive statistical training with recommendations for reaching correct scientific conclusions, at only a modest increase in analysis burden. Our examples can be reproduced with the code publicly available at Zenodo.
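To illustrate one pathology of interval intersection (a toy sketch with made-up numbers, not an example taken from the article), consider two Gaussian measurements of the same parameter: their 1-sigma intervals can fail to overlap even when a sound combination of the two likelihoods is perfectly well defined.

```python
# Two hypothetical measurements of the same parameter: (mean, sigma).
m1, s1 = 1.0, 0.5
m2, s2 = 2.2, 0.5

# Naive "intersection" of the two 1-sigma intervals is empty here...
lo = max(m1 - s1, m2 - s2)
hi = min(m1 + s1, m2 + s2)
print("interval intersection empty:", lo > hi)

# ...while maximising the product of the two Gaussian likelihoods
# (equivalently, inverse-variance weighting) gives a well-defined result:
w1, w2 = 1 / s1**2, 1 / s2**2
mean = (w1 * m1 + w2 * m2) / (w1 + w2)
sigma = (w1 + w2) ** -0.5
print("combined:", mean, "+/-", sigma)
```

The intersection method returns nothing at all for mildly discrepant inputs, whereas the likelihood combination both yields an estimate and, via a goodness-of-fit test, can quantify the tension between the measurements.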
<p>Over recent decades, the Arctic has warmed faster than any region on Earth. The rapid decline in Arctic sea ice extent (SIE) is often highlighted as a key indicator of anthropogenic climate change. Changes in sea ice disrupt Arctic wildlife and indigenous communities, and influence weather patterns as far as the mid-latitudes. Furthermore, melting sea ice attenuates the albedo effect by replacing the white, reflective ice with dark, heat-absorbing melt ponds and open sea, increasing the Sun’s radiative heat input to the Arctic and amplifying global warming through a positive feedback loop. Thus, the reliable prediction of sea ice under a changing climate is of both regional and global importance. However, Arctic sea ice presents severe modelling challenges due to its complex coupled interactions with the ocean and atmosphere, leading to high levels of uncertainty in numerical sea ice forecasts.</p><p>Deep learning (a subset of machine learning) is a family of algorithms that use multiple nonlinear processing layers to extract increasingly high-level features from raw input data. Recent advances in deep learning techniques have enabled widespread success in diverse areas where significant volumes of data are available, such as image recognition, genetics, and online recommendation systems. Despite this success, and the presence of large climate datasets, applications of deep learning in climate science have been scarce until recent years. For example, few studies have posed the prediction of Arctic sea ice in a deep learning framework. We investigate the potential of a fully data-driven, neural network sea ice prediction system based on satellite observations of the Arctic.
In particular, we use inputs of monthly averaged sea ice concentration (SIC) maps since 1979 from the National Snow and Ice Data Center, as well as climatological variables (such as surface pressure and temperature) from the European Centre for Medium-Range Weather Forecasts reanalysis (ERA5) dataset. Past deep learning-based Arctic sea ice prediction systems tend to overestimate sea ice in recent years; we investigate the potential to learn the non-stationarity induced by climate change by including multi-decade global warming indicators (such as average Arctic air temperature). We train the networks to predict SIC maps one month into the future, evaluating prediction uncertainty by ensembling independent networks with different random weight initialisations. Our model accounts for seasonal variations in the drivers of sea ice by controlling for the month of the year being predicted. We benchmark our prediction system against persistence, linear extrapolation and autoregressive models, as well as against September minimum SIE predictions submitted to the Sea Ice Prediction Network's Sea Ice Outlook. Performance is evaluated quantitatively using the root mean square error and qualitatively by analysing maps of prediction error and uncertainty.</p>
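The persistence and linear-extrapolation baselines mentioned above, and the root-mean-square-error metric, can be sketched in a few lines. The series below is a synthetic stand-in (seasonal cycle, slow decline, noise) for a single grid cell, not the NSIDC observations used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly SIC-like series: seasonal cycle + slow decline + noise.
months = np.arange(240)
sic = (0.6 + 0.3 * np.sin(2 * np.pi * months / 12)
       - 0.0005 * months
       + 0.02 * rng.standard_normal(240))
sic = np.clip(sic, 0.0, 1.0)   # concentrations lie in [0, 1]

def rmse(pred, truth):
    # Root mean square error between forecast and observation.
    return np.sqrt(np.mean((pred - truth) ** 2))

truth = sic[2:]
persistence = sic[1:-1]            # next month = this month
linear = 2 * sic[1:-1] - sic[:-2]  # extrapolate the last two months

print("persistence RMSE:        ", rmse(persistence, truth))
print("linear extrapolation RMSE:", rmse(linear, truth))
```

A real evaluation would apply these baselines per grid cell over held-out years and compare the resulting error maps against the neural network's; the point here is only the shape of the benchmark computation.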