We survey past work and present new algorithms to numerically integrate the trajectories of Hamiltonian dynamical systems. These algorithms exactly preserve the symplectic 2-form, i.e., they preserve all the Poincaré invariants. The algorithms have been tested on a variety of examples, and results are presented for the Fermi-Pasta-Ulam nonlinear string, the Hénon-Heiles system, a four-vortex problem, and the geodesic flow on a manifold of constant negative curvature. In all cases the algorithms possess long-time stability and preserve global geometrical structures in phase space.
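The specific algorithms of the paper are not reproduced here, but the long-time stability the abstract describes is already visible in the simplest symplectic method. Below is a minimal sketch of the semi-implicit (symplectic) Euler step for a separable Hamiltonian H(q, p) = p²/2 + V(q); the function names and the harmonic-oscillator test case are illustrative choices, not taken from the paper.

```python
import math

def symplectic_euler(q, p, grad_V, dt, steps):
    """Semi-implicit Euler for H(q, p) = p^2/2 + V(q).

    Update p first using the force at the current q, then update q
    with the *new* p. This one-step map is symplectic, so the energy
    error stays bounded over long times instead of drifting.
    """
    for _ in range(steps):
        p = p - dt * grad_V(q)
        q = q + dt * p
    return q, p

# Harmonic oscillator, V(q) = q^2 / 2, exact energy 0.5.
# After 100,000 steps the energy remains within O(dt) of its
# initial value, illustrating the long-time stability claim.
q, p = 1.0, 0.0
q, p = symplectic_euler(q, p, lambda q: q, dt=0.01, steps=100_000)
energy = 0.5 * p * p + 0.5 * q * q
```

A non-symplectic method of the same order (explicit Euler) would show a systematic energy drift on this problem; the symplectic update keeps the trajectory on a nearby invariant curve in phase space.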
For binary classification we establish learning rates up to the order of $n^{-1}$ for support vector machines (SVMs) with hinge loss and Gaussian RBF kernels. These rates are given in terms of two assumptions on the considered distributions: Tsybakov's noise assumption, used to establish a small estimation error, and a new geometric noise condition, used to bound the approximation error. Unlike previously proposed concepts for bounding the approximation error, the geometric noise assumption does not employ any smoothness assumption. Published at http://dx.doi.org/10.1214/009053606000001226 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
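For readers less familiar with the two ingredients named in this abstract, here is a minimal sketch of the hinge loss and the Gaussian RBF kernel in plain Python. The function names and the sample inputs are illustrative, not from the paper.

```python
import math

def gaussian_rbf(x, y, gamma):
    """Gaussian RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def hinge_loss(y_true, score):
    """Hinge loss L(y, f(x)) = max(0, 1 - y * f(x)), labels y in {-1, +1}.

    Zero whenever the example is classified correctly with margin >= 1;
    grows linearly as the margin shrinks or the sign is wrong.
    """
    return max(0.0, 1.0 - y_true * score)

k = gaussian_rbf((0.0, 0.0), (1.0, 0.0), gamma=1.0)  # exp(-1)
loss = hinge_loss(+1, 0.25)                          # margin 0.25 -> loss 0.75
```

The SVM analyzed in the paper minimizes a regularized average of this hinge loss over functions in the RKHS induced by the Gaussian kernel; the learning rates concern how fast that minimizer's excess risk decays with the sample size n.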
We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions they have finite-dimensional reductions. As an application, we develop Optimal Concentration Inequalities (OCI) of Hoeffding and McDiarmid type. Surprisingly, these results show that uncertainties in input parameters, which propagate to output uncertainties in the classical sensitivity analysis paradigm, may fail to do so if the transfer functions (or probability distributions) are imperfectly known. We show how, for hierarchical structures, this phenomenon may lead to the non-propagation of uncertainties or information across scales. In addition, a general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact and on the seismic safety assessment of truss structures, suggesting the feasibility of the framework for important complex systems. The introduction of this paper provides both an overview of the paper and a self-contained mini-tutorial about basic concepts and issues of UQ.
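As background for the Hoeffding-type inequalities mentioned in this abstract, the classical (non-optimal) Hoeffding bound can be evaluated in a few lines. This is the standard textbook inequality, shown only to fix ideas; the paper's Optimal Concentration Inequalities sharpen bounds of this kind by optimizing over all distributions compatible with the stated assumptions.

```python
import math

def hoeffding_bound(n, t, a=0.0, b=1.0):
    """Two-sided Hoeffding bound: for n i.i.d. samples in [a, b],

        P(|sample mean - E[sample mean]| >= t) <= 2 exp(-2 n t^2 / (b - a)^2).

    Returns the right-hand side, a distribution-free tail bound that
    uses only the range [a, b] and the sample size n.
    """
    return 2.0 * math.exp(-2.0 * n * t * t / (b - a) ** 2)

# 1000 samples in [0, 1]: deviation of the mean by 0.05 or more
# has probability at most 2 * exp(-5) ~ 0.0135.
bound = hoeffding_bound(n=1000, t=0.05)
```

The OUQ point is that such a bound treats only the range constraint as information; when more (or different) information is available, the optimal bound is the value of an optimization problem over the compatible scenarios, and it can differ substantially from this closed form.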
Given a compact metric space X and a strictly positive Borel measure ν on X, Mercer's classical theorem states that the spectral decomposition of the positive self-adjoint integral operator T_k : L_2(ν) → L_2(ν) induced by a continuous k yields a series representation of k in terms of the eigenvalues and eigenfunctions of T_k. An immediate consequence of this representation is that k is a (reproducing) kernel and that its reproducing kernel Hilbert space can also be described by these eigenvalues and eigenfunctions. It is well known that Mercer's theorem has found important applications in various branches of mathematics, including probability theory and statistics. For some applications in the latter areas, however, it would be highly convenient to have a form of Mercer's theorem for more general spaces X and kernels k. Unfortunately, all extensions of Mercer's theorem in this direction either stick too closely to the original topological structure of X and k, or replace the absolute and uniform convergence by weaker notions of convergence that are not strong enough for many statistical applications. In this work, we fill this gap by establishing several Mercer-type series representations for k that, on the one hand, make only very mild assumptions on X and k, and, on the other hand, provide convergence results that are strong enough for interesting applications in, e.g., statistical learning theory. To illustrate the latter, we first use these series representations to describe the ranges of fractional powers of T_k in terms of interpolation spaces and investigate under which conditions these interpolation spaces are contained in L_∞(ν). For these two results, we then discuss applications related to the analysis of so-called least squares support vector machines, a state-of-the-art learning algorithm.
Besides these results, we further use the obtained Mercer representations to show that every self-adjoint nuclear operator L_2(ν) → L_2(ν) is an integral operator whose representing function k is the difference of two (reproducing) kernels.
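The series representation at the center of this abstract has the following familiar form, stated here for the classical compact case as a reference point (the paper's contribution is extending it beyond these assumptions):

```latex
k(x, y) = \sum_{i=1}^{\infty} \lambda_i \, e_i(x) \, e_i(y),
```

where $(\lambda_i)$ and $(e_i)$ are the eigenvalues and corresponding eigenfunctions of $T_k$, and in Mercer's classical setting the series converges absolutely and uniformly on $X \times X$.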