In this paper we analyze the numerical approximation of diffusion problems over polyhedral domains in R^d (d = 1, 2, 3), with diffusion coefficient a(x, ω) given as a lognormal random field, i.e., a(x, ω) = exp(Z(x, ω)), where x is the spatial variable and Z(x, ·) is a Gaussian random field. The analysis presents particular challenges since the corresponding bilinear form is not uniformly bounded away from 0 or ∞ over all possible realizations of a. Focusing on the problem of computing the expected value of linear functionals of the solution of the diffusion problem, we give a rigorous error analysis for methods constructed from (1) standard continuous and piecewise linear finite element approximation in physical space; (2) truncated Karhunen-Loève expansion for computing realizations of a (leading to a possibly high-dimensional parametrized deterministic diffusion problem); and (3) lattice-based quasi-Monte Carlo (QMC) quadrature rules for computing integrals over parameter space which define the expected values. The paper contains novel error analysis which accounts for the effect of all three types of approximation. The QMC analysis is based
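The third ingredient above, a lattice-based QMC rule, can be illustrated with a minimal sketch of a randomly shifted rank-1 lattice rule for estimating E[f(y)] over the unit cube. The function name `lattice_rule_mean` and the generating vector `z = [1, 47, 101]` are ad hoc illustrative choices, not a CBC-optimized vector from the paper; in the PDE setting, f would evaluate a linear functional of the finite element solution at a parameter point of the truncated Karhunen-Loève expansion.

```python
import numpy as np

def lattice_rule_mean(f, z, n, n_shifts=8, seed=None):
    """Estimate E[f(y)] for y ~ U[0,1]^s using a randomly shifted
    rank-1 lattice rule with generating vector z (length s) and n points.
    Random shifting gives an unbiased estimator and a standard-error
    indicator across the independent shifts."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z)
    s = len(z)
    k = np.arange(n)[:, None]
    pts = (k * z[None, :] / n) % 1.0            # unshifted lattice points in [0,1)^s
    estimates = []
    for _ in range(n_shifts):
        shifted = (pts + rng.random(s)) % 1.0   # one random shift per replication
        estimates.append(np.mean([f(y) for y in shifted]))
    return np.mean(estimates), np.std(estimates) / np.sqrt(n_shifts)

# Example: E[exp(y_1 + y_2 + y_3)] over [0,1]^3, whose exact value is (e - 1)^3
mean, se = lattice_rule_mean(lambda y: np.exp(y.sum()), z=[1, 47, 101], n=127, seed=0)
```

The random shifts are what allow an a posteriori error estimate; without them, a single lattice rule gives a deterministic approximation with no built-in accuracy indicator.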
Machine learning (ML) is a form of artificial intelligence that is poised to transform the twenty-first century. Rapid recent progress in its underlying architectures and algorithms, together with growth in the size of datasets, has led to increasing computer competence across a range of fields. These include driving a vehicle, language translation, chatbots, and beyond-human performance at complex board games such as Go. Here, we review the fundamentals and algorithms behind machine learning and highlight specific approaches to learning and optimisation. We then summarise the applications of ML to medicine. In particular, we showcase recent diagnostic performances, and caveats, in the fields of dermatology, radiology, pathology and general microscopy.
We consider the problem of optimal recovery of an unknown function u in a Hilbert space V from measurements of the form ℓ_j(u), j = 1, ..., m, where the ℓ_j are known linear functionals on V. We are motivated by the setting where u is a solution to a PDE with some unknown parameters, therefore lying on a certain manifold contained in V. Following the approach adopted in [12,5], the prior on the unknown function can be described in terms of its approximability by finite-dimensional reduced model spaces (V_n)_{n≥1} where dim(V_n) = n. Examples of such spaces include classical approximation spaces, e.g. finite elements or trigonometric polynomials, as well as reduced basis spaces which are designed to match the solution manifold more closely. The error bounds for optimal recovery under such priors are of the form μ(V_n, W_m) ε_n, where ε_n is the accuracy of the reduced model V_n and μ(V_n, W_m) is the inverse of an inf-sup constant that describes the angle between V_n and the space W_m spanned by the Riesz representers of (ℓ_1, ..., ℓ_m). This paper addresses the problem of properly selecting the measurement functionals, in order to best control the stability constant μ(V_n, W_m) for a given reduced model space V_n. Assuming that the ℓ_j can be picked from a given dictionary D, we introduce and analyze greedy algorithms that perform a sub-optimal selection in reasonable computational time. We study the particular case of dictionaries that consist either of point value evaluations or local averages, as idealized models for sensors in physical systems. Our theoretical analysis and greedy algorithms may therefore be used in order to optimize the position of such sensors.
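In a discretized setting, one natural greedy strategy of this kind can be sketched as follows: at each step, find the direction in the reduced space V_n least captured by the measurement space built so far (the smallest right singular vector of the cross-Gramian), then pick the dictionary element most correlated with it. This is a minimal illustrative sketch under that assumption, not necessarily the specific greedy criterion analyzed in the paper; the names `greedy_sensor_selection`, `V`, and `D` are hypothetical.

```python
import numpy as np

def greedy_sensor_selection(V, D, m):
    """V: (N, n) matrix with orthonormal columns spanning the reduced space V_n.
    D: (N, K) matrix with unit-norm columns, one per candidate measurement
    functional (its discretized Riesz representer, e.g. a point evaluation).
    Returns the indices of m greedily selected functionals."""
    N, n = V.shape
    selected = []
    Q = np.zeros((N, 0))                        # orthonormal basis of W_m so far
    for _ in range(m):
        # Direction in V_n least captured by span(Q): the right singular vector
        # of the cross-Gramian Q^T V with smallest singular value (a null
        # direction while fewer than n functionals have been chosen).
        _, _, Vt = (np.linalg.svd(Q.T @ V) if selected
                    else (None, None, np.eye(n)))
        v = V @ Vt[-1]                          # worst-captured unit direction
        # Pick the dictionary element most correlated with that direction.
        scores = np.abs(D.T @ v)
        scores[selected] = -np.inf              # do not reselect
        selected.append(int(np.argmax(scores)))
        Q, _ = np.linalg.qr(D[:, selected])     # refresh orthonormal basis of W_m
    return selected
```

For a dictionary of point evaluations, the columns of D are (normalized) canonical basis vectors, so the selected indices are directly sensor locations. Once m ≥ n functionals are chosen, the smallest singular value of Q^T V gives the inf-sup constant whose inverse is the stability constant μ(V_n, W_m).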