We present two classes of improved estimators for mutual information M(X,Y), from samples of random points distributed according to some joint probability density mu(x,y). In contrast to conventional estimators based on binnings, they are based on entropy estimates from k-nearest neighbor distances. This means that they are data efficient (with k=1 we resolve structures down to the smallest possible scales), adaptive (the resolution is higher where data are more numerous), and have minimal bias. Indeed, the bias of the underlying entropy estimates is mainly due to nonuniformity of the density at the smallest resolved scale, typically giving systematic errors which scale as functions of k/N for N points. Numerically, we find that both families become exact for independent distributions, i.e. the estimator M(X,Y) vanishes (up to statistical fluctuations) if mu(x,y)=mu(x)mu(y). This holds for all tested marginal distributions and for all dimensions of x and y. In addition, we give estimators for redundancies between more than two random variables. We compare our algorithms in detail with existing algorithms. Finally, we demonstrate the usefulness of our estimators for assessing the actual independence of components obtained from independent component analysis (ICA), for improving ICA, and for estimating the reliability of blind source separation.
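A minimal sketch of the first of the two estimator families (the version that uses the max norm in the joint space and counts marginal neighbors within the k-th-neighbor distance), written here as a brute-force O(N^2) implementation for small samples; the function name `ksg_mi` and the use of NumPy/SciPy are choices of this sketch, not part of the paper:

```python
import numpy as np
from scipy.special import digamma

def ksg_mi(x, y, k=3):
    """k-nearest-neighbor mutual information estimate (first-family variant).

    For each point, eps is the max-norm distance to its k-th neighbor in
    the joint (x, y) space; nx and ny count strictly closer neighbors in
    each marginal space.  Brute-force distances, so only suited to small N.
    """
    x = np.asarray(x, float).reshape(len(x), -1)
    y = np.asarray(y, float).reshape(len(y), -1)
    n = len(x)
    # Max-norm distance matrices in each marginal space.
    dx = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=-1)
    dy = np.max(np.abs(y[:, None, :] - y[None, :, :]), axis=-1)
    dz = np.maximum(dx, dy)               # joint-space max norm
    np.fill_diagonal(dz, np.inf)          # exclude self-distances
    # eps_i = distance to the k-th nearest neighbor in the joint space.
    eps = np.sort(dz, axis=1)[:, k - 1]
    # Count strictly closer marginal neighbors (subtract the point itself).
    nx = np.sum(dx < eps[:, None], axis=1) - 1
    ny = np.sum(dy < eps[:, None], axis=1) - 1
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))
```

For independent samples the estimate fluctuates around zero, consistent with the behavior described above, while dependent samples give a clearly positive value.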
We study the correlation exponent ν introduced recently as a characteristic measure of strange attractors which allows one to distinguish between deterministic chaos and random noise. The exponent ν is closely related to the fractal dimension and the information dimension, but its computation is considerably easier. Its usefulness in characterizing experimental data which stem from very high dimensional systems is stressed. Algorithms for extracting ν from the time series of a single variable are proposed. The relations between the various measures of strange attractors and between them and the Lyapunov exponents are discussed. It is shown that the conjecture of Kaplan and Yorke for the dimension gives an upper bound for ν. Various examples of finite and infinite dimensional systems are treated, both numerically and analytically.
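The standard recipe behind such algorithms is to count point pairs closer than a radius r and read ν off the slope of log C(r) versus log r in the scaling region; the following is a minimal sketch under that assumption (function names and the delay-embedding helper for single-variable time series are illustrative, not taken from the paper):

```python
import numpy as np

def correlation_sum(points, r):
    """C(r): fraction of distinct point pairs closer than r (max norm).

    The correlation exponent nu is the slope of log C(r) vs log r
    in the small-r scaling region.  Brute force, for small point sets.
    """
    pts = np.asarray(points, float).reshape(len(points), -1)
    d = np.max(np.abs(pts[:, None, :] - pts[None, :, :]), axis=-1)
    iu = np.triu_indices(len(pts), k=1)   # each pair counted once
    return np.mean(d[iu] < r)

def delay_embed(series, dim, tau=1):
    """Reconstruct attractor vectors from a single scalar time series,
    stacking (s_t, s_{t+tau}, ..., s_{t+(dim-1)tau})."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])
```

In practice one embeds the scalar series at increasing dimension and checks that the fitted slope saturates, which is what distinguishes low-dimensional deterministic chaos from noise.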
We present an algorithm for simulating flexible chain polymers. It combines the Rosenbluth-Rosenbluth method with recursive enrichment. Although it can also be applied in more general situations, it is most efficient for three-dimensional polymers on the simple-cubic lattice. There it allows high statistics simulations of chains of length up to N = 10^6. For storage reasons, this is feasible only for polymers in a finite volume. For free polymers in infinite volume, we present very high statistics runs with N = 10 000. These simulations fully agree with previous simulations made by Hegger and Grassberger [J. Chem. Phys. 102, 6681 (1995)] with a similar but less efficient algorithm, showing that logarithmic corrections to mean field behavior are much stronger than predicted by field theory. But the finite volume simulations show that the density inside a collapsed globule scales with the distance from the point as predicted by mean field theory, in contrast to claims in the work mentioned above. In addition to the simple-cubic lattice, we also studied two versions of the bond fluctuation model, but with much shorter chains. Finally, we show that our method can also be applied to off-lattice models, and illustrate this with simulations of a model studied in detail by Freire et al. [Macromolecules 19, 452 (1986) and later work]. [S1063-651X(97)10308-7]
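The Rosenbluth-Rosenbluth ingredient of the method can be sketched as follows: grow a self-avoiding walk on the simple-cubic lattice one monomer at a time, choosing uniformly among the unoccupied neighbor sites and accumulating a weight equal to the product of the numbers of available sites. This sketch deliberately omits the recursive enrichment (pruning and cloning of walks by weight) that the paper adds on top; the function name is illustrative:

```python
import random

# The six nearest-neighbor steps on the simple-cubic lattice.
NEIGHBORS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def rosenbluth_walk(n_steps, rng=random):
    """Grow one self-avoiding walk with Rosenbluth-Rosenbluth weighting.

    Returns (walk, weight).  The weight corrects for the bias of always
    stepping to a free site; a trapped walk returns weight 0 (attrition).
    """
    walk = [(0, 0, 0)]
    occupied = {walk[0]}
    weight = 1.0
    for _ in range(n_steps):
        x, y, z = walk[-1]
        free = [(x + dx, y + dy, z + dz) for dx, dy, dz in NEIGHBORS
                if (x + dx, y + dy, z + dz) not in occupied]
        if not free:                      # dead end: walk is trapped
            return walk, 0.0
        weight *= len(free)               # Rosenbluth weight factor
        site = rng.choice(free)
        walk.append(site)
        occupied.add(site)
    return walk, weight
```

Observables are then weighted averages over many such walks; plain Rosenbluth sampling suffers from weight fluctuations and attrition at large N, which is exactly what the recursive enrichment in the paper's method is designed to control.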