An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able to blindly separate mixed signals with sub- and supergaussian source distributions. This was achieved by using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a projection pursuit index. Parameterized probability distributions that have sub- and supergaussian regimes were used to derive a general learning rule that preserves the simple architecture proposed by Bell and Sejnowski (1995), is optimized using the natural gradient of Amari (1998), and uses the stability analysis of Cardoso and Laheld (1996) to switch between sub- and supergaussian regimes. We demonstrate that the extended infomax algorithm is able to easily separate 20 sources with a variety of source distributions. Applied to high-dimensional data from electroencephalographic recordings, it is effective at separating artifacts such as eye blinks and line noise from weaker electrical signals that arise from sources in the brain.
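The switching rule described above can be sketched in NumPy. The natural-gradient form and the kurtosis-sign switch follow the published extended infomax update, but the batch handling, kurtosis estimator, and learning rate below are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def extended_infomax_step(W, x, lr=0.01):
    """One natural-gradient update of an extended-infomax-style rule.

    W : (n, n) unmixing matrix
    x : (n, T) batch of mixed signals
    Sketch only; the kurtosis estimate and learning rate are assumptions.
    """
    n, T = x.shape
    u = W @ x  # current source estimates
    # Switch each unit between sub- and supergaussian regimes using the
    # sign of an excess-kurtosis estimate (the stability criterion).
    kurt = np.mean(u**4, axis=1) - 3.0 * np.mean(u**2, axis=1) ** 2
    K = np.diag(np.sign(kurt))
    # Natural-gradient extended infomax: dW = [I - K tanh(u) u' - u u'] W
    grad = (np.eye(n) - (K @ np.tanh(u)) @ u.T / T - (u @ u.T) / T) @ W
    return W + lr * grad
```

Iterating this step over batches drives the outputs toward independence for both source types, with K adapting the nonlinearity per unit.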
Eye movements, eye blinks, cardiac signals, muscle noise, and line noise present serious problems for electroencephalographic (EEG) interpretation and analysis when rejecting contaminated EEG segments results in an unacceptable data loss. Many methods have been proposed to remove artifacts from EEG recordings, especially those arising from eye movements and blinks. Often regression in the time or frequency domain is performed on parallel EEG and electrooculographic (EOG) recordings to derive parameters characterizing the appearance and spread of EOG artifacts in the EEG channels. Because EEG and ocular activity mix bidirectionally, regressing out eye artifacts inevitably involves subtracting relevant EEG signals from each record as well. Regression methods become even more problematic when a good regressing channel is not available for each artifact source, as in the case of muscle artifacts. Use of principal component analysis (PCA) has been proposed to remove eye artifacts from multichannel EEG. However, PCA cannot completely separate eye artifacts from brain signals, especially when they have comparable amplitudes. Here, we propose a new and generally applicable method for removing a wide variety of artifacts from EEG records based on blind source separation by independent component analysis (ICA). Our results on EEG data collected from normal and autistic subjects show that ICA can effectively detect, separate, and remove contamination from a wide variety of artifactual sources in EEG records with results comparing favorably with those obtained using regression and PCA methods. ICA can also be used to analyze blink‐related brain activity.
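The removal step described above, separating the recording into components and back-projecting only the non-artifactual ones, can be sketched as follows. The unmixing matrix and the artifact component indices are assumed given (from any ICA algorithm and analyst inspection), so this is a minimal illustration rather than the authors' full pipeline:

```python
import numpy as np

def remove_artifact_components(eeg, W, artifact_idx):
    """Remove artifactual ICA components from multichannel EEG.

    eeg          : (channels, samples) recording
    W            : (channels, channels) unmixing matrix from an ICA fit
    artifact_idx : indices of components judged artifactual
                   (eye blinks, line noise, ...) -- chosen by the analyst.

    The corrected EEG is the back-projection of the remaining components.
    """
    sources = W @ eeg                      # component activations
    sources[list(artifact_idx), :] = 0.0   # zero the artifact activations
    return np.linalg.inv(W) @ sources      # project back to channel space
```

Because only the selected rows are zeroed before back-projection, brain activity carried by the remaining components is preserved in every channel.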
Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial "25 words or less"), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an overcomplete ...
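The alternating scheme the abstract describes can be sketched minimally: a FOCUSS-style reweighted least-squares step for the sparse representations, followed by a least-squares dictionary update with column normalization. The regularization constants, iteration counts, and update form below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def focuss(A, x, n_iter=10, lam=1e-6):
    """FOCUSS-style sparse solve of x ~ A s via reweighted minimum norm."""
    m, n = A.shape
    # Minimum-norm initialization.
    s = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), x)
    for _ in range(n_iter):
        D = np.diag(np.abs(s))          # reweighting by current magnitudes
        AD = A @ D
        s = D @ AD.T @ np.linalg.solve(AD @ AD.T + lam * np.eye(m), x)
    return s

def learn_dictionary(X, n_atoms, n_outer=20, rng=None):
    """Alternate sparse coding and a least-squares dictionary update."""
    rng = np.random.default_rng(rng)
    m, N = X.shape
    A = rng.standard_normal((m, n_atoms))
    A /= np.linalg.norm(A, axis=0)
    S = np.zeros((n_atoms, N))
    for _ in range(n_outer):
        # Sparse representations for every training signal.
        S = np.column_stack([focuss(A, X[:, i]) for i in range(N)])
        # Least-squares dictionary update, then renormalize columns.
        A = X @ S.T @ np.linalg.pinv(S @ S.T + 1e-6 * np.eye(n_atoms))
        A /= np.linalg.norm(A, axis=0) + 1e-12
    return A, S
```

The reweighting matrix D concentrates energy on the currently large coefficients, which is what drives the representations toward sparsity under the CSC-prior interpretation.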
A method is given for determining the time course and spatial extent of consistently and transiently task-related activations from other physiological and artifactual components that contribute to functional MRI (fMRI) recordings. Independent component analysis (ICA) was used to analyze two fMRI data sets from a subject performing 6-min trials composed of alternating 40-sec Stroop color-naming and control task blocks. Each component consisted of a fixed three-dimensional spatial distribution of brain voxel values (a "map") and an associated time course of activation. For each trial, the algorithm detected, without a priori knowledge of their spatial or temporal structure, one consistently task-related component activated during each Stroop task block, plus several transiently task-related components activated at the onset of one or two of the Stroop task blocks only. Activation patterns occurring during only part of the fMRI trial are not observed with other techniques, because their time courses cannot easily be known in advance. Other ICA components were related to physiological pulsations, head movements, or machine noise. By using higher-order statistics to specify stricter criteria for spatial independence between component maps, ICA produced improved estimates of the temporal and spatial extent of task-related activation in our data compared with principal component analysis (PCA). ICA appears to be a promising tool for exploratory analysis of fMRI data, particularly when the time courses of activation are not known in advance.

Univariate methods for the analysis of functional MRI (fMRI) data typically examine each brain volume element, or voxel, individually to determine whether the activity level at that voxel reaches a prespecified criterion for task-related activity.
A common criterion is a predetermined level of significance for a statistic, such as the Student t (1) or Kolmogorov-Smirnov (2) statistic, under the null hypothesis that the distribution of a voxel's values during the behavioral control task is identical to that during performance of the experimental task(s). Correlational analysis (3) determines whether the similarity between a voxel's time course and a prediction of the task-related modulation, the reference function, exceeds a specified threshold. These methods then assemble individually selected (or "active") voxels, ignoring statistical relationships between voxels, to create a spatially distributed map demonstrating areas of significant activation.

To enhance the statistical power of standard analysis techniques based on correlation or univariate statistical tests, fMRI experimenters often use alternating task-block designs in which the subject performs two or more tasks successively in alternating 20- to 40-sec blocks. By averaging over a number of task-block cycles, small consistently task-related (CTR) differences in hemodynamic activation can be detected. Isolated stimulus paradigms, such as that employed by Buckner et al. (4), avoid overlapping hemodynamic responses produc...
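The correlational analysis mentioned above reduces to thresholding the correlation between each voxel's time course and the reference function. A minimal sketch, where the threshold value is illustrative rather than taken from the paper:

```python
import numpy as np

def correlation_map(data, reference, threshold=0.5):
    """Correlate each voxel's time course with a reference function.

    data      : (n_voxels, n_timepoints) fMRI time series
    reference : (n_timepoints,) predicted task-related modulation
    Returns per-voxel correlations and a boolean "active" mask.
    """
    d = data - data.mean(axis=1, keepdims=True)   # center each voxel
    r = reference - reference.mean()              # center the reference
    corr = (d @ r) / (np.linalg.norm(d, axis=1) * np.linalg.norm(r) + 1e-12)
    return corr, corr > threshold
```

The ICA approach in the abstract differs in that it requires no such reference function: the time courses emerge from the decomposition itself.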
In this letter, we discuss the multivariate Laplace probability model in the context of a normal variance mixture model. We briefly review the derivation of the probability density function (pdf) and discuss a few important properties. We then present two methods for estimating its parameters from data and include an example of usage, where we apply the model to represent the statistics of the discrete Fourier transform coefficients of a speech signal. Since the pdf is given in closed form, and the model parameters can be easily obtained, this distribution may be useful for representing multivariate, sparsely distributed data with mutually dependent components.

Index Terms: multidimensional Laplace distribution, multivariate Laplace distribution, normal variance mixture model, scale mixture of Gaussians model, statistical modeling.
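Under the normal variance mixture view, multivariate Laplace samples can be drawn by scaling a Gaussian vector with an exponential mixing variable. A sketch of the symmetric case; parameter conventions (e.g. the exponential rate) vary across references, so this is one common construction rather than the letter's specific parameterization:

```python
import numpy as np

def sample_multivariate_laplace(mean, cov, size, rng=None):
    """Draw samples from a (symmetric) multivariate Laplace via its
    normal variance mixture representation:
        x = mean + sqrt(z) * g,  z ~ Exponential(1),  g ~ N(0, cov).
    """
    rng = np.random.default_rng(rng)
    z = rng.exponential(1.0, size=size)               # mixing variances
    g = rng.multivariate_normal(np.zeros(len(mean)), cov, size=size)
    return np.asarray(mean) + np.sqrt(z)[:, None] * g
```

Because the Gaussian components share a single mixing variable z per draw, the coordinates are uncorrelated (for diagonal cov) yet mutually dependent, which is the property that makes the model useful for sparsely distributed data.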