Background
High-throughput proteomics techniques, such as mass spectrometry (MS)-based approaches, produce very high-dimensional data-sets. In a clinical setting one is often interested in how mass spectra differ between patients of different classes, for example spectra from healthy patients vs. spectra from patients having a particular disease. Machine learning algorithms are needed to (a) identify these discriminating features and (b) classify unknown spectra based on this feature set. Since the acquired data is usually noisy, the algorithms should be robust against noise and outliers, while the identified feature set should be as small as possible.

Results
We present a new algorithm, Sparse Proteomics Analysis (SPA), based on the theory of compressed sensing, that allows us to identify a minimal discriminating set of features from mass spectrometry data-sets. We show (1) how our method performs on artificial and real-world data-sets, (2) that its performance is competitive with standard (and widely used) algorithms for analyzing proteomics data, and (3) that it is robust against random and systematic noise. We further demonstrate the applicability of our algorithm to two previously published clinical data-sets.

Electronic supplementary material
The online version of this article (doi:10.1186/s12859-017-1565-4) contains supplementary material, which is available to authorized users.
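The abstract does not spell out SPA's algorithm. As a rough illustration of compressed-sensing-style sparse feature selection, the following numpy sketch recovers a small discriminating feature set from synthetic "spectra" via iterative soft-thresholding (ISTA) applied to an ℓ1-penalized least-squares fit. The data dimensions, the LASSO surrogate, and the parameter choices are all assumptions for illustration, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": 200 samples, 1000 features, only 3 of them informative.
# (Entirely made-up data; stands in for noisy high-dimensional MS measurements.)
n, d = 200, 1000
true_support = [10, 250, 700]
X = rng.normal(size=(n, d))
y = np.sign(X[:, true_support].sum(axis=1))  # class labels +/-1

def soft_threshold(v, t):
    """Element-wise soft-thresholding operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(X, y, lam=0.1, n_iter=500):
    """ISTA for the LASSO:  min_w ||Xw - y||^2 / (2n) + lam * ||w||_1."""
    w = np.zeros(X.shape[1])
    # Step size = 1 / Lipschitz constant of the smooth part's gradient.
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / X.shape[0])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / X.shape[0]
        w = soft_threshold(w - step * grad, step * lam)
    return w

w = ista_lasso(X, y)
# The largest-magnitude weights indicate the discriminating features.
print(sorted(np.argsort(-np.abs(w))[:3].tolist()))
```

The ℓ1 penalty drives most weights exactly to zero, so the nonzero coordinates form the (small) candidate feature set, in the spirit of the minimal discriminating set the abstract describes.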
In Bayesian inverse problems, 'model error' refers to the discrepancy between the parameter-to-observable map that generates the data and the parameter-to-observable map that is used for inference. Model error is important because it can lead to misspecified likelihoods, and thus to incorrect inference. We consider some deterministic approaches for accounting for model error in inverse problems with additive Gaussian observation noise, where the parameter-to-observable map is the composition of a possibly nonlinear parameter-to-state map or 'model' and a linear state-to-observable map or 'observation operator'. Using local Lipschitz stability estimates of posteriors with respect to likelihood perturbations, we bound the symmetrised Kullback-Leibler divergence of the posterior generated by each approach with respect to the posterior associated to the true model and the posterior associated to the wrong model. Our bounds lead to criteria for choosing observation operators that mitigate the effect of model error on the posterior.
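For reference, the symmetrised Kullback-Leibler divergence used as the comparison metric above is the standard symmetrisation (this definition is not stated in the abstract itself):

```latex
d_{\mathrm{KL}}^{\mathrm{sym}}(\mu, \nu)
  = d_{\mathrm{KL}}(\mu \,\|\, \nu) + d_{\mathrm{KL}}(\nu \,\|\, \mu),
\qquad
d_{\mathrm{KL}}(\mu \,\|\, \nu)
  = \int \log\!\frac{\mathrm{d}\mu}{\mathrm{d}\nu}\,\mathrm{d}\mu ,
```

where $\mu$ and $\nu$ are mutually absolutely continuous posterior measures; symmetrising removes the asymmetry of the plain KL divergence, so the bound controls the discrepancy in both directions.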
We consider a quantity that measures the roundness of a bounded, convex d-polytope in R^d. We majorise this quantity in terms of the smallest singular value of the matrix of outer unit normals to the facets of the polytope.
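The abstract does not define the roundness quantity itself, but the bounding quantity is concrete: stack the outer unit normals of the facets as rows of a matrix and take its smallest singular value. A minimal numpy sketch for the unit cube in R^3 (whose six facet normals are the vectors +/- e_i) looks like this; the choice of polytope is an assumption for illustration.

```python
import numpy as np

# Outer unit normals to the facets of the unit cube in R^3: +/- e_1, e_2, e_3.
N = np.vstack([np.eye(3), -np.eye(3)])  # shape (6, 3), one row per facet

# Smallest singular value of the normal matrix (here N^T N = 2I, so it is sqrt(2)).
sigma_min = np.linalg.svd(N, compute_uv=False)[-1]
print(sigma_min)
```

For the cube all singular values coincide at sqrt(2), reflecting the symmetry of its normal fan; for a less symmetric polytope the smallest singular value drops, consistent with its role as the controlling quantity in the majorisation.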