2011
DOI: 10.1063/1.3557038

Coarse-graining errors and numerical optimization using a relative entropy framework

Abstract: The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, S(rel), that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for…
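For discrete configurational distributions, the relative entropy S(rel) named in the abstract reduces to a Kullback-Leibler divergence between the mapped reference ensemble and the coarse-grained model ensemble. The following is a minimal illustrative sketch of that quantity for histogrammed data; the function name, the histogram inputs, and the normalization details are assumptions made for illustration, not the authors' implementation, and the full formulation in the paper also accounts for the coarse-graining map itself.

```python
import numpy as np

# Hypothetical helper (not from the paper): a histogram-based estimate of
# S_rel = sum_i p_ref(i) * ln[ p_ref(i) / p_cg(i) ] between a mapped reference
# distribution and a coarse-grained model distribution.
def relative_entropy(p_ref, p_cg, eps=1e-12):
    p_ref = np.asarray(p_ref, dtype=float)
    p_cg = np.asarray(p_cg, dtype=float)
    p_ref = p_ref / p_ref.sum()          # normalize both histograms
    p_cg = p_cg / p_cg.sum()
    mask = p_ref > 0                     # bins with 0 * ln(0) contribute nothing
    return float(np.sum(p_ref[mask] * np.log(p_ref[mask] / (p_cg[mask] + eps))))

# Example with two histograms over the same coarse-grained coordinate:
print(relative_entropy([0.1, 0.4, 0.3, 0.2], [0.15, 0.35, 0.3, 0.2]))
```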

Cited by 234 publications (353 citation statements). References 40 publications.

“…The functional Ω[V] is convex with respect to V(s), which means that this is the global minimum. Minimizing this functional is equivalent to minimizing the Kullback-Leibler divergence between the sampled distribution and the target distribution (25–28). Eqs.…”
Section: Algorithm
confidence: 99%
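The quoted passage states that minimizing the convex functional Ω[V] is equivalent to minimizing a Kullback-Leibler divergence between the sampled (biased) distribution and a target distribution. The toy sketch below illustrates only that general idea, assuming a one-dimensional free-energy profile F(s) on a grid, a bias potential expanded in a small polynomial basis, and a uniform target distribution; the setup and names are hypothetical and are not the functional or algorithm of the quoted work.

```python
import numpy as np
from scipy.optimize import minimize

beta = 1.0
s = np.linspace(-2.0, 2.0, 200)
F = (s**2 - 1.0) ** 2                         # toy double-well free energy
basis = np.vstack([s**k for k in range(5)])   # polynomial basis for the bias V(s)
p_target = np.full_like(s, 1.0 / s.size)      # uniform target distribution

def kl_to_target(a):
    """KL(p_V || p_target) for the biased distribution p_V ~ exp[-beta (F + V)]."""
    logw = -beta * (F + a @ basis)
    logw -= logw.max()                        # stabilize the exponentials
    p_V = np.exp(logw)
    p_V /= p_V.sum()
    return float(np.sum(p_V * np.log((p_V + 1e-300) / p_target)))

res = minimize(kl_to_target, x0=np.zeros(5))
print("bias coefficients:", res.x, " residual KL:", res.fun)
```

At the optimum the bias approximately cancels F(s), so the biased distribution approaches the uniform target and the KL objective approaches its minimum.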
“…Furthermore, even when it is available in the form of a Gibbs state, it will require computations that will typically involve a full Hamiltonian 7,22.…”
Section: Continuous Time Markov Chains and Kinetic Monte Carlo
confidence: 99%
“…Aside from this rigorous numerical analysis direction, entropy-based computational techniques were also developed and used for constructing approximations of coarse-grained (effective) potentials for models of large biomolecules and polymeric systems (fluids, melts). Optimal parametrization of effective potentials based on minimizing the relative entropy between equilibrium Gibbs states, e.g., 3,6,7, extended previously developed inverse Monte Carlo methods, primarily based on force-matching approaches, used in coarse-graining of macromolecules (see, e.g., 30,38). In 13 an extension to dynamics is proposed in the context of Fokker-Planck equations, by considering the corresponding relative entropy for discrete-time approximations of the transition probabilities.…”
Section: Introduction
confidence: 99%
“…The curves are very close to each other; however, there are small differences, in particular at small distances close to the first maximum. Note that theoretically it is expected that the RE outcome, at the level of g(R), should agree with the IBI one [15]. We should report here that we have calculated the CG potential derivatives appearing in the Jacobian and Hessian in the Newton-Raphson scheme by direct sampling during the corresponding CG run.…”
Section: Water
confidence: 86%
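The statement above notes that the Jacobian and Hessian entering the Newton-Raphson scheme are estimated by direct sampling during the CG run. The sketch below shows that loop on a deliberately simple toy, assuming a one-dimensional Gaussian "reference" ensemble, a harmonic CG potential U(x; k) = k x^2 / 2 with beta = 1, and relative-entropy derivatives in k expressed through sampled second moments; it illustrates the idea, not the scheme used for water in the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reference ("all-atom") data: 1-D Gaussian samples with variance 1.69.
x_ref = rng.normal(0.0, 1.3, size=50_000)
msd_ref = np.mean(x_ref**2)

# For U(x; k) = k x^2 / 2 and beta = 1, S_rel(k) = const - 0.5 ln k + 0.5 k <x^2>_ref,
# so dS_rel/dk = 0.5 (<x^2>_ref - <x^2>_cg) and d2S_rel/dk2 = 0.5 <x^2>_cg^2,
# with <x^2>_cg estimated by sampling the CG ensemble at the current k.
k = 0.5                                        # initial force constant (inside the Newton basin)
for _ in range(20):
    x_cg = rng.normal(0.0, np.sqrt(1.0 / k), size=50_000)   # the "CG run"
    msd_cg = np.mean(x_cg**2)
    grad = 0.5 * (msd_ref - msd_cg)
    hess = 0.5 * msd_cg**2
    k -= grad / hess                           # Newton-Raphson update
print("optimized k:", k, "  analytic optimum 1/<x^2>_ref:", 1.0 / msd_ref)
```

In realistic problems the sampled gradient and Hessian are noisy, so damped or otherwise safeguarded Newton steps would typically be used instead of the bare update shown here.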
“…Methods such as inverse Monte Carlo (IMC), direct inverse Boltzmann (DBI) and iterative inverse Boltzmann (IBI) [11,12,1], force-matching [13,14], and relative entropy minimization [15] provide optimal parameterizations of approximate coarse-grained models by considering a pre-selected set of observables and by then minimizing a cost functional over the parameter space,…”
Section: Parametrizations at Equilibrium and Potential of Mean Force
confidence: 99%
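For context on the IBI entry in this list, the standard iterative Boltzmann inversion update corrects a tabulated pair potential by kT times the log-ratio of the current and target radial distribution functions. A minimal sketch of a single update step follows; the function name, the damping factor alpha, and the clipping guard are illustrative assumptions rather than a specific published implementation.

```python
import numpy as np

def ibi_update(V, g_cg, g_target, kT=1.0, alpha=1.0, eps=1e-10):
    """One iterative-Boltzmann-inversion step for a tabulated pair potential.

    V, g_cg, and g_target are arrays on the same r-grid:
        V_new(r) = V(r) + alpha * kT * ln[ g_cg(r) / g_target(r) ],
    with alpha an optional damping factor and eps guarding empty RDF bins.
    """
    ratio = np.clip(g_cg, eps, None) / np.clip(g_target, eps, None)
    return V + alpha * kT * np.log(ratio)

# Typical usage: start from direct Boltzmann inversion, V_0(r) = -kT * ln g_target(r),
# then alternate CG simulations (to measure g_cg) with ibi_update until g_cg ~ g_target.
```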