Neural networks provide an efficient, general interpolation method for nonlinear functions of several variables. This paper describes the use of feed-forward neural networks to model global properties of potential energy surfaces from information available at a limited number of configurations. As an initial demonstration of the method, several fits are made to data derived from an empirical potential model of CO adsorbed on Ni(111). The data are error-free and geometries are selected from uniform grids of two and three dimensions. The neural network model predicts the potential to within a few hundredths of a kcal/mole at arbitrary geometries. The accuracy and efficiency of the neural network in practical calculations are demonstrated in quantum transition state theory rate calculations for surface diffusion of CO/Ni(111) using a Monte Carlo/path integral method. The network model is much faster to evaluate than the original potential from which it is derived. As a more complex test of the method, the interaction potential of H2 with the Si(100)-2×1 surface is determined as a function of 12 degrees of freedom from energies calculated with the local density functional method at 750 geometries. The training examples are not uniformly spaced and they depend weakly on variables not included in the fit. The neural net model predicts the potential at geometries outside the training set with a mean absolute deviation of 2.1 kcal/mole.
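The core idea in the abstract — a feed-forward network trained on energies sampled at grid geometries, then evaluated at arbitrary geometries — can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the toy 2-D "potential" `sin(x)*cos(y)`, the grid bounds, the 20-unit tanh hidden layer, and the plain gradient-descent training all stand in for the paper's empirical CO/Ni(111) potential and its actual network architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D stand-in for a potential energy surface (hypothetical choice,
# not the empirical CO/Ni(111) potential used in the paper).
def potential(x, y):
    return np.sin(x) * np.cos(y)

# Training energies on a uniform 2-D grid, mirroring the paper's first test.
g = np.linspace(-2.0, 2.0, 12)
X = np.array([(x, y) for x in g for y in g])      # 144 geometries
t = potential(X[:, 0], X[:, 1])[:, None]          # 144 energies

# One-hidden-layer feed-forward net: 2 inputs -> H tanh units -> 1 linear output.
H = 20
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(20000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    y = h @ W2 + b2                   # network output
    err = y - t
    # Back-propagate the mean-squared-error gradient (plain first-order descent).
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Evaluate at geometries *between* grid points: the interpolation test.
Xtest = rng.uniform(-2, 2, (200, 2))
pred = np.tanh(Xtest @ W1 + b1) @ W2 + b2
mad = np.abs(pred[:, 0] - potential(Xtest[:, 0], Xtest[:, 1])).mean()
print(f"mean absolute deviation at off-grid geometries: {mad:.4f}")
```

Once trained, the network is evaluated with two small matrix products per geometry, which is why a fitted net can be much faster than the original potential in Monte Carlo/path-integral sampling.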
In this tutorial, traditional decision tree construction and the current state of decision tree modeling are reviewed. Emphasis is placed on techniques that make decision trees well suited to handle the complexities of chemical and biochemical applications.
Finding methods for the optimization of weights in feedforward neural networks has become an ongoing developmental process in connectionist research. The current focus on finding new methods for the optimization of weights is mostly the result of the slow and unreliable convergence properties of the gradient descent optimization used in the original back-propagation algorithm. More accurate and computationally expensive second-order gradient methods have displaced earlier first-order gradient optimization of the network connection weights. The global, extended Kalman filter is among the most accurate and computationally expensive of these second-order weight optimization methods. The iterative, second-order nature of the filter results in a large number of calculations for each sweep of the training set. This can increase the training time dramatically when training is conducted with data sets that contain large numbers of training patterns. In this paper an adaptive variant of the global, extended Kalman filter that exhibits substantially improved convergence properties is presented and discussed. The adaptive mechanism permits more rapid convergence of network training by identifying data that contain redundant information and avoiding calculations based on this redundant information.
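The global extended Kalman filter treats the full weight vector as the filter state and the network output as the measurement, updating all weights from each training pattern via the output's Jacobian. The sketch below is an illustrative baseline GEKF only (it does not include the paper's adaptive redundancy-skipping mechanism); the tiny 1-5-1 network, the `sin` target, and the `P`/`R` tuning values are all assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training patterns: a 1-D nonlinear target (illustrative choice).
xs = np.linspace(-2, 2, 40)
ts = np.sin(xs)

# Tiny feed-forward net: 1 input -> 5 tanh units -> 1 linear output.
# All weights are packed into a single state vector w for the filter.
H = 5
n = H + H + H + 1                  # W1, b1, W2, b2 = 16 parameters
w = rng.normal(0, 0.5, n)

def forward(w, x):
    W1, b1, W2, b2 = w[:H], w[H:2*H], w[2*H:3*H], w[3*H]
    h = np.tanh(W1 * x + b1)
    y = W2 @ h + b2
    # Jacobian of the scalar output with respect to every weight.
    dh = 1 - h**2
    J = np.concatenate([W2 * dh * x, W2 * dh, h, [1.0]])
    return y, J

# Global EKF: one covariance matrix over the entire weight vector.
P = np.eye(n) * 10.0               # initial weight-error covariance
R = 0.1                            # assumed measurement-noise variance

for epoch in range(10):
    for x, t in zip(xs, ts):
        y, J = forward(w, x)
        Hrow = J[None, :]                    # 1 x n measurement Jacobian
        S = float(Hrow @ P @ Hrow.T) + R     # innovation variance
        K = (P @ Hrow.T) / S                 # n x 1 Kalman gain
        w = w + K[:, 0] * (t - y)            # second-order weight update
        P = P - K @ Hrow @ P                 # covariance update

pred = np.array([forward(w, x)[0] for x in xs])
mad = np.abs(pred - ts).mean()
print(f"MAD after EKF training: {mad:.4f}")
```

The cost the abstract refers to is visible here: each pattern requires matrix products with the full n-by-n covariance `P`, so skipping patterns identified as redundant saves an entire filter update per skipped pattern.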