We show how to tune quantum noise in nonlinear systems by means of periodic spatial modulation. We prove that the introduction of an intracavity photonic crystal in a multimode optical parametric oscillator can inhibit or enhance light quantum fluctuations. Furthermore, it leads to a significant noise reduction in field quadratures, robustness of squeezing over a wider angular range, and spatial entanglement. These results have potential benefits for quantum imaging, metrology, and quantum information applications, and they suggest that spatial modulation offers a mechanism for controlling fluctuations that is of interest in other nonlinear systems as well.
Abstract. Mesocosm experiments on phytoplankton dynamics under high CO2 concentrations mimic the response of marine primary producers to future ocean acidification. However, potential acidification effects can be hidden by the high standard deviation typically found among the replicates of the same CO2 treatment level. In experiments with multiple unresolved factors and a suboptimal number of replicates, post-processing statistical inference tools might fail to detect an effect that is present. We propose that in such cases, data-based model analyses might be suitable tools to unearth potential responses to the treatment and identify the uncertainties that could produce the observed variability. As test cases, we used data from two independent mesocosm experiments. Both experiments showed high standard deviations and, according to statistical inference tools, biomass appeared insensitive to changing CO2 conditions. Conversely, our simulations showed earlier and more intense phytoplankton blooms in modeled replicates at high CO2 concentrations and suggested that uncertainties in average cell size, phytoplankton biomass losses, and initial nutrient concentration potentially outweigh acidification effects by triggering strong variability during the bloom phase. We also estimated the thresholds below which uncertainties do not escalate to high variability. This information might help in designing future mesocosm experiments and interpreting controversial results on the effect of acidification or other pressures on ecosystem functions.
<p>The presence of automated decision making continuously increases in today's society. Algorithms based on machine and deep learning decide how much we pay for insurance, translate our thoughts to speech, and shape our consumption of goods (via e-marketing) and knowledge (via search engines). Machine and deep learning models are ubiquitous in science too; in particular, many promising examples are being developed to prove their feasibility for earth science applications, such as finding temporal trends or spatial patterns in data or improving parameterization schemes for climate simulations.</p><p>However, most machine and deep learning applications aim to optimise performance metrics (for instance, accuracy, i.e., the fraction of times the model prediction was right), which are rarely good indicators of trust (i.e., why were these predictions right?). In fact, with the increase of data volume and model complexity, machine learning and deep learning predictions can be very accurate but also prone to rely on spurious correlations, encode and magnify bias, and draw conclusions that do not incorporate the underlying dynamics governing the system. Because of that, the uncertainty of the predictions and our confidence in the model are difficult to estimate, and the relation between inputs and outputs becomes hard to interpret.</p><p>Since it is challenging to shift a community from "black" to "glass" boxes, it is more useful to implement Explainable Artificial Intelligence (XAI) techniques right at the beginning of machine learning and deep learning adoption rather than trying to fix fundamental problems later. The good news is that most of the popular XAI techniques are essentially sensitivity analyses: they consist of a systematic perturbation of some model components in order to observe how it affects the model predictions.
The techniques comprise random sampling, Monte Carlo simulations, and ensemble runs, which are common methods in the geosciences. Moreover, many XAI techniques are reusable because they are model-agnostic and are applied after the model has been fitted. In addition, interpretability provides robust arguments when communicating machine and deep learning predictions to scientists and decision-makers.</p><p>In order to assist not only the practitioners but also the end-users in the evaluation of machine and deep learning results, we will explain the intuition behind some popular techniques of XAI and of aleatory and epistemic Uncertainty Quantification: (1) Permutation Importance and Gaussian processes on the inputs (i.e., the perturbation of the model inputs), (2) Monte Carlo Dropout, Deep Ensembles, Quantile Regression, and Gaussian processes on the weights (i.e., the perturbation of the model architecture), (3) Conformal Predictors (useful to estimate the confidence interval on the outputs), and (4) Layer-wise Relevance Propagation (LRP), Shapley values, and Local Interpretable Model-Agnostic Explanations (LIME) (designed to visualize how each feature in the data affected a particular prediction). We will also introduce some best practices, such as the detection of anomalies in the training data before training, the implementation of fallbacks when the prediction is not reliable, and physics-guided learning by including constraints in the loss function to avoid physical inconsistencies, like the violation of conservation laws.</p>
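<p>To illustrate the first family of techniques above, here is a minimal sketch of Permutation Importance using only NumPy. The synthetic data, the least-squares "model", and all variable names are illustrative assumptions, not material from the presentation; the idea carries over to any fitted model because the technique only perturbs the inputs after fitting:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative): y depends strongly on feature 0,
# weakly on feature 1, and not at all on feature 2.
n = 500
X = rng.normal(size=(n, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

# Fit a simple least-squares model as a stand-in for any fitted ML model.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def r2(X, y, coef):
    """Coefficient of determination of the linear predictions."""
    pred = X @ coef
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

baseline = r2(X, y, coef)

# Permutation importance: shuffle one input column at a time and record
# how much the score drops; larger drops indicate more important features.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(baseline - r2(Xp, y, coef))

print([round(v, 3) for v in importances])
```

<p>Note that the model is never refitted: the perturbation happens only on the inputs of the already-trained model, which is why the technique is model-agnostic.</p>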
We study the effects of transverse spatial modulations in a multimode degenerate optical parametric oscillator. Intracavity photonic crystals allow us to tune the instability threshold and improve entanglement above threshold. Here we compare such results with the case in which the modulation is in the injected field profile.