Thermodynamic properties of liquid water as well as hexagonal (Ih) and cubic (Ic) ice are predicted based on density functional theory at the hybrid-functional level, rigorously taking into account quantum nuclear motion, anharmonic fluctuations and proton disorder. This is made possible by combining advanced free energy methods and state-of-the-art machine learning techniques. The ab initio description leads to structural properties in excellent agreement with experiments, and reliable estimates of the melting points of light and heavy water. We observe that nuclear quantum effects contribute a crucial 0.2 meV/H2O to the stability of ice Ih, making it more stable than ice Ic. Our computational approach is general and transferable, providing a comprehensive framework for quantitative predictions of ab initio thermodynamic properties using machine learning potentials as an intermediate step.
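The combination of advanced free-energy methods with machine-learning potentials mentioned above rests on computing free-energy differences, between models and between phases. As a minimal, generic sketch (not the actual workflow of this work; the function name is ours), thermodynamic integration estimates such a difference by integrating ensemble averages of dU/dλ along a switching parameter:

```python
import numpy as np

def thermodynamic_integration(lambdas, dudl_means):
    """Free-energy difference by thermodynamic integration.

    lambdas    : grid of the switching parameter, from 0 (reference) to 1 (target)
    dudl_means : ensemble averages <dU/dlambda> sampled at each grid point
    Returns the trapezoidal-rule estimate of Delta F = int_0^1 <dU/dlambda> dlambda.
    """
    lambdas = np.asarray(lambdas, dtype=float)
    dudl_means = np.asarray(dudl_means, dtype=float)
    # trapezoidal rule: average adjacent values, weight by grid spacing
    return np.sum(0.5 * (dudl_means[1:] + dudl_means[:-1]) * np.diff(lambdas))
```

In practice the averages at each λ would come from molecular-dynamics sampling with the mixed potential U(λ).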
Machine learning models are poised to make a transformative impact on the chemical sciences by dramatically accelerating computational algorithms and amplifying the insights available from computational chemistry methods. However, achieving this requires a confluence of expertise in computer science and the physical sciences. This Review is written for new and experienced researchers working at the intersection of both fields. We first provide concise tutorials of computational chemistry and machine learning methods, showing how insights involving both can be achieved. We follow with a critical review of noteworthy applications that demonstrate how computational chemistry and machine learning can be used together to provide insightful (and useful) predictions in molecular and materials modeling, retrosynthesis, catalysis, and drug design.
Progress in the atomic-scale modelling of matter over the past decade has been tremendous. This progress has been brought about by improvements in methods for evaluating interatomic forces, which either solve the electronic structure problem explicitly or compute accurate approximations of its solution, and by the development of techniques that use the Born-Oppenheimer (BO) forces to move the atoms on the BO potential energy surface. As a consequence of these developments it is now possible to identify stable or metastable states, to sample configurations consistent with the appropriate thermodynamic ensemble, and to estimate the kinetics of reactions and phase transitions. All too often, however, progress is slowed by the bottleneck of implementing new optimization algorithms and/or sampling techniques in the many existing electronic-structure and empirical-potential codes. To address this problem, we are releasing a new version of the i-PI software, an easily extensible framework for implementing advanced atomistic simulation techniques using interatomic potentials and forces calculated by an external driver code. While the original version of the code [1] was developed with a focus on path integral molecular dynamics techniques, this second release of i-PI not only includes several new advanced path integral methods, but also offers other classes of algorithms. In other words, i-PI is moving towards becoming a universal force engine that is both modular and tightly coupled to the driver codes that evaluate the potential energy surface and its derivatives.
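The division of labour described above — i-PI as the engine that moves the atoms, an external driver code that supplies energies and forces — can be illustrated with a schematic sketch. The class and function names below are ours and this is not i-PI's actual socket protocol; it only shows how an integrator can be written against a driver that is an interchangeable black box:

```python
import numpy as np

class HarmonicDriver:
    """Stand-in 'driver code': evaluates energy and forces for given positions.

    In the real setup this role is played by an electronic-structure or
    empirical-potential code communicating with i-PI over a socket.
    """
    def __init__(self, k=1.0):
        self.k = k

    def compute(self, positions):
        energy = 0.5 * self.k * np.sum(positions ** 2)
        forces = -self.k * positions
        return energy, forces

def velocity_verlet_step(driver, x, v, m, dt):
    """One velocity-Verlet MD step taken by the 'engine'.

    The engine never evaluates the potential itself; forces come only from
    the driver, so the integrator is independent of the force model.
    """
    _, f = driver.compute(x)
    v = v + 0.5 * dt * f / m   # half kick
    x = x + dt * v             # drift
    _, f = driver.compute(x)
    v = v + 0.5 * dt * f / m   # half kick
    return x, v
```

Because the integrator only sees the driver's `compute` interface, swapping the harmonic stand-in for an ab initio code changes nothing on the engine side, which is the point of the force-engine design.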
The visualization of data is indispensable in scientific research, from the early stages when human insight forms, to the final step of communicating results. In computational physics, chemistry, and materials science, it can be as simple as making a scatter plot, or as straightforward as looking through snapshots of atomic positions manually. However, as a result of the "big data" revolution these conventional approaches are often inadequate. The widespread adoption of high-throughput computation for materials discovery and the associated community-wide repositories have given rise to data sets that contain an enormous number of compounds and atomic configurations. A typical data set contains thousands to millions of atomic structures, along with a diverse range of properties such as formation energies, band gaps, or bio-activities. It would thus be desirable to have a data-driven and automated framework for visualizing and analyzing such structural datasets. The key idea is to construct a low-dimensional representation of the data, which facilitates navigation, reveals underlying patterns, and helps to identify data points with unusual attributes. Such data-intensive maps, often built with machine learning methods, are appearing more and more frequently in the literature. However, to the wider community, it is not always transparent how these maps are made and how they should be interpreted. Furthermore, while these maps undoubtedly serve a decorative purpose in academic publications, it is not always apparent what extra information can be gleaned from reading or making them. This Account attempts to answer such questions. We start with a concise summary of the theory of representing chemical environments, followed by the introduction of a simple yet practical conceptual approach for generating structure maps in a generic and automated manner. Such analysis and mapping is made nearly effortless by employing the newly developed software tool ASAP.
To showcase the applicability to a wide variety of systems in chemistry and materials science, we provide several illustrative examples, including crystalline and amorphous materials, interfaces, and organic molecules. In these examples, the maps not only help to sift through large datasets, but also reveal hidden patterns that could easily be missed using conventional analyses. The explosion in the amount of computed information in chemistry and materials science has made visualization a science in itself. Not only have we benefited from exploiting these visualization methods in previous works, but we also believe that the automated mapping of datasets will in turn stimulate further creativity and exploration, and ultimately feed back into future advances in the respective fields.
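The structure maps discussed above follow a generic recipe: represent each structure or environment as a high-dimensional descriptor vector, then project the collection to two dimensions for plotting. As a minimal sketch using plain linear PCA (ASAP automates this and also supports other, nonlinear projections; the function name here is ours):

```python
import numpy as np

def pca_map(descriptors, n_components=2):
    """Project high-dimensional structure descriptors to a low-dimensional map.

    descriptors : (n_structures, n_features) array, one descriptor vector per
                  structure (e.g. averaged atomic-environment descriptors)
    Returns an (n_structures, n_components) array of map coordinates,
    ordered by decreasing explained variance.
    """
    X = descriptors - descriptors.mean(axis=0)        # center the data
    U, S, Vt = np.linalg.svd(X, full_matrices=False)  # principal axes in Vt
    return X @ Vt[:n_components].T                    # coordinates for plotting
```

Each point on the resulting 2-D map is one structure; clustering, colour-coding by a property such as formation energy, and outlier detection are then ordinary scatter-plot operations.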
The properties of the interface between solid and melt are key to solidification and melting, as the interfacial free energy introduces a kinetic barrier to phase transitions. This makes solidification happen below the melting temperature, in out-of-equilibrium conditions at which the interfacial free energy is ill-defined. Here we draw a connection between the atomistic description of a diffuse solid-liquid interface and its thermodynamic characterization. This framework resolves the ambiguities in defining the solid-liquid interfacial free energy above and below the melting temperature. In addition, we introduce a simulation protocol that allows solid-liquid interfaces to be reversibly created and destroyed at conditions relevant for experiments. We directly evaluate the value of the interfacial free energy away from the melting point for a simple but realistic atomic potential, and find a more complex temperature dependence than the constant positive slope that has generally been assumed based on phenomenological considerations and used to interpret experiments. This methodology could easily be extended to the study of other phase transitions, from condensation to precipitation. Our analysis can help reconcile the textbook picture of classical nucleation theory with the growing body of atomistic studies and mesoscale models of solidification.

Solidification underlies many natural phenomena such as the freezing of water in clouds and the formation of igneous rock. It is also crucial for various critical technologies including commercial casting, soldering and additive manufacturing [1][2][3]. Computational and experimental studies of these phenomena are complicated by the fact that homogeneous nucleation of solids often occurs at temperatures well below the melting point $T_m$ [4].
The free energy change associated with the formation of a solid nucleus containing $n_s$ particles is usually written as

$G(n_s) = \mu_{sl}\, n_s + \gamma_{sl}\, A(n_s)$.   (1)

In this expression the first, bulk term stems from the difference in chemical potential between the solid and the liquid, $\mu_{sl} = \mu_s - \mu_l$, which is negative below $T_m$. The second term describes the penalty associated with the interface between the two phases, and introduces a kinetic barrier to nucleation. This surface term is the product of the interfacial free energy $\gamma_{sl}$ and the extensive surface area $A(n_s)$. However, due to the diffuse nature of the interface, there is a degree of ambiguity in the location and area of the dividing surface between the phases. In classical nucleation theory (CNT), an infinitesimally thin dividing surface separates the solid nucleus from the surrounding liquid. These two phases are usually taken to have their bulk densities, so the surface area of the solid nucleus can be calculated using $A(n_s) = \sigma n_s^{2/3}$, where $\sigma$ is a constant that depends on the shape, e.g. $\sigma = (36\pi)^{1/3} v_s^{2/3}$ for a spherical nucleus with bulk solid molar volume $v_s$. Under these assumptions, the rate of nucleation can be estimated by calculating the free-energy barrier...
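Equation (1) also gives the CNT barrier in closed form: with $A(n_s) = \sigma n_s^{2/3}$ and $\mu_{sl} < 0$, setting $dG/dn_s = 0$ yields the critical nucleus size $n^* = (2\gamma_{sl}\sigma / 3|\mu_{sl}|)^3$ and the barrier height $G^* = 4(\gamma_{sl}\sigma)^3 / 27\mu_{sl}^2$. A small illustrative helper (the function name is ours):

```python
import math

def cnt_barrier(gamma_sl, mu_sl, v_s):
    """Critical nucleus size and CNT barrier for a spherical nucleus.

    gamma_sl : solid-liquid interfacial free energy (per unit area)
    mu_sl    : chemical potential difference mu_s - mu_l (negative below T_m)
    v_s      : molar volume of the bulk solid
    """
    sigma = (36 * math.pi) ** (1 / 3) * v_s ** (2 / 3)       # spherical shape factor
    n_star = (2 * gamma_sl * sigma / (3 * abs(mu_sl))) ** 3  # critical nucleus size
    g_star = 4 * (gamma_sl * sigma) ** 3 / (27 * mu_sl ** 2) # barrier height G(n*)
    return n_star, g_star
```

For the spherical shape factor, $G^*$ reduces to the textbook result $16\pi \gamma_{sl}^3 v_s^2 / 3\mu_{sl}^2$.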
One of the most prominent consequences of the quantum nature of light atomic nuclei is that their kinetic energy does not follow a Maxwell-Boltzmann distribution. Deep inelastic neutron scattering (DINS) experiments can measure this effect, so the nuclear quantum kinetic energy can be probed directly in both ordered and disordered samples. However, the relation between the quantum kinetic energy and the atomic environment is a very indirect one, and cross-validation with theoretical modeling is therefore urgently needed. Here, we use state-of-the-art path integral molecular dynamics techniques to compute the kinetic energy of hydrogen and oxygen nuclei in liquid, solid, and gas-phase water close to the triple point, comparing three different interatomic potentials and validating our results against equilibrium isotope fractionation measurements. We then show how accurate simulations can draw a link between extremely precise fractionation experiments and DINS, thereby establishing a reliable benchmark for future measurements and providing key insights to further increase the accuracy of interatomic potentials for water.
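In path integral molecular dynamics, each quantum nucleus is represented by a ring polymer of P beads, and the quantum kinetic energy is obtained from an estimator averaged over bead configurations. A minimal sketch of the standard primitive estimator for a single particle, in units where ħ = 1 by default (the actual study uses full systems and more sophisticated, lower-variance estimators):

```python
import numpy as np

def primitive_kinetic_estimator(beads, mass, beta, hbar=1.0):
    """Primitive PIMD kinetic-energy estimator for one particle in 3D.

    beads : (P, 3) array of ring-polymer bead positions
    beta  : inverse temperature 1/(k_B T)
    Returns  3P/(2*beta) - m*P/(2*beta^2*hbar^2) * sum_k |x_{k+1} - x_k|^2,
    with the bead index k taken cyclically around the ring.
    """
    P = beads.shape[0]
    # squared spring lengths between neighbouring beads, closing the ring
    spring = np.sum((np.roll(beads, -1, axis=0) - beads) ** 2)
    return 3 * P / (2 * beta) - mass * P / (2 * beta ** 2 * hbar ** 2) * spring
```

For a collapsed ring polymer the spring term vanishes and the estimator reduces to the classical-like 3P/(2β); any bead spread lowers the instantaneous value below that, and the thermal average converges to the true quantum kinetic energy as P grows.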