In the case of quantum systems interacting with multiple environments, the time evolution of the reduced density matrix is described by the Liouvillian. For many physical observables of interest, the long-time limit, or steady-state solution, is required for their computation. For inverse design or optimal control of such systems, common approaches rely on brute-force search strategies. Here, we present a novel methodology, based on automatic differentiation, capable of differentiating the steady-state solution with respect to any parameter of the Liouvillian. Our approach has a low memory cost and is agnostic to the exact algorithm used to compute the steady state. We illustrate the advantages of this method by inverse designing the parameters of a quantum heat transfer device that maximize the heat current and the rectification coefficient. Additionally, we optimize the parameters of various Lindblad operators used in the simulation of energy transfer under natural incoherent light. We also present a sensitivity analysis of the steady state for energy transfer under natural incoherent light as a function of the incoherent-light pumping rate.
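The central trick can be sketched with automatic differentiation through a steady-state linear solve. The following is a minimal, illustrative example in JAX (not the authors' implementation), assuming a single qubit with a hypothetical Rabi drive Ω and one decay channel at rate Γ: the vectorized Liouvillian is assembled as a dense superoperator, the steady state is pinned by replacing one redundant row with the trace constraint, and `jax.grad` differentiates an observable of the steady state with respect to the Liouvillian's parameters, agnostic to the linear solver used.

```python
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)

def liouvillian(H, L, gamma):
    # Vectorized Lindbladian acting on the row-major flattening of rho,
    # using vec(A X B) = kron(A, B.T) @ vec(X).
    d = H.shape[0]
    I = jnp.eye(d)
    LdL = L.conj().T @ L
    coherent = -1j * (jnp.kron(H, I) - jnp.kron(I, H.T))
    dissipator = gamma * (jnp.kron(L, L.conj())
                          - 0.5 * jnp.kron(LdL, I)
                          - 0.5 * jnp.kron(I, LdL.T))
    return coherent + dissipator

def excited_population(params):
    omega, gamma = params               # Rabi frequency, decay rate (toy model)
    sx = jnp.array([[0., 1.], [1., 0.]], dtype=jnp.complex128)
    sm = jnp.array([[0., 0.], [1., 0.]], dtype=jnp.complex128)  # |g><e|
    Lv = liouvillian(0.5 * omega * sx, sm, gamma)
    # Steady state solves Lv @ vec(rho) = 0; replace the (redundant, by
    # trace preservation) last row with the constraint tr(rho) = 1.
    trace_row = jnp.eye(2, dtype=jnp.complex128).reshape(-1)
    M = Lv.at[-1].set(trace_row)
    b = jnp.zeros(4, dtype=jnp.complex128).at[-1].set(1.0)
    rho = jnp.linalg.solve(M, b).reshape(2, 2)
    return jnp.real(rho[0, 0])          # excited-state population

params = jnp.array([0.3, 1.0])
p_e = excited_population(params)        # analytically Omega^2 / (Gamma^2 + 2 Omega^2)
grad = jax.grad(excited_population)(params)
```

For this two-level model the result can be checked against the closed-form resonant steady state, p_e = Ω²/(Γ² + 2Ω²); the same differentiate-through-the-solve pattern extends to iterative or matrix-free steady-state algorithms.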
We propose a new class of parameterizations for spatio-temporal point processes which leverage Neural ODEs as a computational method and enable flexible, high-fidelity models of discrete events that are localized in continuous time and space. Central to our approach is the combination of recurrent continuous-time neural networks with two novel neural architectures, namely Jump and Attentive Continuous-time Normalizing Flows. This approach allows us to learn complex distributions over both the spatial and temporal domains and to condition non-trivially on the observed event history. We validate our models on data sets from a wide variety of contexts, such as seismology, epidemiology, urban mobility, and neuroscience.
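Although the Jump and Attentive architectures above are more involved, the continuous-time normalizing-flow mechanic underlying them, integrating the log-density alongside the state via the instantaneous change-of-variables formula, can be sketched in a few lines. This is an illustrative toy, not the paper's model: it uses fixed-step Euler integration and a brute-force Jacobian trace, whereas practical CNFs use adaptive ODE solvers and stochastic trace estimators.

```python
import jax
import jax.numpy as jnp

def cnf_flow(f, z0, t0, t1, n_steps=100):
    """Integrate dz/dt = f(z, t) together with the log-density change,
    d(log p)/dt = -tr(df/dz), with fixed-step Euler."""
    dt = (t1 - t0) / n_steps

    def step(carry, t):
        z, dlogp = carry
        tr = jnp.trace(jax.jacobian(f)(z, t))  # brute-force trace of df/dz
        return (z + dt * f(z, t), dlogp - dt * tr), None

    ts = t0 + dt * jnp.arange(n_steps)
    (zT, dlogp), _ = jax.lax.scan(step, (z0, 0.0), ts)
    return zT, dlogp

# sanity check with linear dynamics dz/dt = A z: tr(df/dz) = tr(A) is
# constant, so the accumulated log-density change is exactly -tr(A) * T
A = jnp.array([[-0.5, 0.0], [0.0, -0.5]])
zT, dlogp = cnf_flow(lambda z, t: A @ z, jnp.ones(2), 0.0, 1.0)
```

The linear case is a handy correctness check: with tr(A) = -1 over a unit time interval, the log-density change is exactly +1 regardless of step size, while zT approaches exp(-0.5)·z0 as the step count grows.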
Standard first-order stochastic optimization algorithms base their updates solely on the average mini-batch gradient, and it has been shown that tracking additional quantities, such as the curvature, can help de-sensitize common hyperparameters. Based on this intuition, we explore the use of exact per-sample Hessian-vector products and gradients to construct optimizers that are self-tuning and hyperparameter-free. Starting from a dynamics model of the gradient, we derive a process which leads to a curvature-corrected, noise-adaptive online gradient estimate. The smoothness of the resulting updates makes them more amenable to simple step-size selection schemes, which we also base on our estimated quantities. We prove that our model-based procedure converges in the noisy quadratic setting. Although we do not see similar gains on deep learning tasks, we can match the performance of well-tuned optimizers, and ultimately this is an interesting step toward constructing self-tuning optimizers.
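The per-sample quantities mentioned above are cheap to obtain with modern autodiff. As an illustrative sketch (not the authors' optimizer), the following computes exact per-sample gradients and Hessian-vector products for a toy least-squares loss, vectorizing forward-over-reverse differentiation across the batch with `jax.vmap`:

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # per-sample squared error of a linear model
    return 0.5 * (jnp.dot(w, x) - y) ** 2

def per_sample_grad_and_hvp(w, v, xs, ys):
    """Exact per-sample gradients g_i and Hessian-vector products H_i @ v."""
    def one_sample(x, y):
        g = jax.grad(loss)(w, x, y)
        # forward-over-reverse: the jvp of the gradient map gives H @ v
        _, hv = jax.jvp(lambda w_: jax.grad(loss)(w_, x, y), (w,), (v,))
        return g, hv
    return jax.vmap(one_sample)(xs, ys)

w = jnp.array([1.0, 2.0])
v = jnp.array([1.0, 0.0])
xs = jnp.array([[1.0, 1.0], [2.0, 0.0]])
ys = jnp.array([0.0, 1.0])
gs, hvs = per_sample_grad_and_hvp(w, v, xs, ys)
```

For this quadratic loss the per-sample Hessian is xᵢxᵢᵀ, so hvs[i] should equal xᵢ(xᵢ·v), which makes the output easy to verify by hand; the same pattern applies unchanged to neural-network losses.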