We propose an image deconvolution algorithm for data contaminated by Poisson noise. The image to restore is assumed to be sparsely represented in a dictionary of waveforms such as the wavelet or curvelet transforms. Our key contributions are as follows. First, we handle the Poisson noise properly by using the Anscombe variance-stabilizing transform, leading to a non-linear degradation equation with additive Gaussian noise. Second, the deconvolution problem is formulated as the minimization of a convex functional with a data-fidelity term reflecting the noise properties and a non-smooth sparsity-promoting penalty on the image representation coefficients (e.g. the ℓ1-norm). An additional term is included in the functional to ensure positivity of the restored image. Third, a fast iterative forward-backward splitting algorithm is proposed to solve the minimization problem. We derive existence and uniqueness conditions for the solution and establish convergence of the iterative algorithm. Finally, a GCV-based model selection procedure is proposed to select the regularization parameter objectively. Experiments show the striking benefits gained from taking the Poisson statistics of the noise into account. These results also suggest that sparse-domain regularization may be tractable in many deconvolution applications with Poisson noise, such as astronomy and microscopy.
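The Anscombe transform mentioned above, A(x) = 2√(x + 3/8), approximately stabilizes the variance of Poisson data to 1, which is what lets the problem be recast with additive Gaussian noise. A minimal pure-Python check of this property (the Poisson sampler, seed, and intensity value are illustrative, not from the paper):

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw one Poisson(lam) sample via Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def anscombe(x):
    """Anscombe variance-stabilizing transform A(x) = 2*sqrt(x + 3/8)."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

rng = random.Random(0)
lam = 100.0  # illustrative Poisson intensity
samples = [anscombe(poisson_sample(lam, rng)) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
print(round(var, 2))  # close to 1, independently of lam (for lam not too small)
```

The stabilized variance is approximately unity for any sufficiently large intensity, which is why a Gaussian data-fidelity term becomes appropriate after the transform.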
Context. One of the main challenges of modern cosmology is to understand the nature of the mysterious dark energy that causes the cosmic acceleration. The integrated Sachs-Wolfe (ISW) effect is sensitive to dark energy and, if detected in a universe where modified gravity and curvature are excluded, presents an independent signature of dark energy. The ISW effect occurs on large scales, where cosmic variance is high and where, owing to Galactic confusion, we lack large amounts of data in the CMB as well as in large-scale structure maps. Moreover, existing methods in the literature often make strong assumptions about the statistics of the underlying fields or estimators. Together these effects can severely limit signal extraction. Aims. We aim to define an optimal statistical method for detecting the ISW effect that can handle large areas of missing data and minimise the number of underlying assumptions made about the data and estimators. Methods. We first review current detections (and non-detections) of the ISW effect, comparing statistical subtleties between existing methods and identifying several limitations. We propose a novel method to detect and measure the ISW signal. This method assumes only that the primordial CMB field is Gaussian. It is based on sparse inpainting to reconstruct missing data and uses a bootstrap technique to avoid assumptions about the statistics of the estimator. It is a complete method, combining three complementary statistical tests. Results. We apply our method to Euclid-like simulations and show that we can expect a ∼7σ model-independent detection of the ISW signal with WMAP7-like data, even when considering missing data. Other tests return ∼5σ detection levels for a Euclid-like survey. We find that detection levels are independent of whether the galaxy field is normally or lognormally distributed.
We apply our method to the 2 Micron All Sky Survey (2MASS) and WMAP7 CMB data and find detections in the 1.0−1.2σ range, as expected from our simulations. As a by-product, we have also reconstructed the full-sky ISW temperature field from the 2MASS data. Conclusions. We present a novel technique based on sparse inpainting and bootstrapping, which accurately detects and reconstructs the ISW effect.
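The bootstrap step described above builds the estimator's distribution empirically by resampling, so no Gaussianity needs to be assumed for the estimator itself. A minimal sketch of that idea (the toy statistic, data, and sample sizes are illustrative and not taken from the paper):

```python
import random

def bootstrap_std(data, statistic, n_boot, rng):
    """Estimate the standard deviation of a statistic by bootstrap resampling."""
    stats = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in range(len(data))]
        stats.append(statistic(resample))
    mean = sum(stats) / len(stats)
    return (sum((s - mean) ** 2 for s in stats) / (len(stats) - 1)) ** 0.5

rng = random.Random(1)
# toy "cross-correlation amplitude" measurements (purely illustrative)
data = [rng.gauss(0.5, 1.0) for _ in range(500)]
amp = lambda xs: sum(xs) / len(xs)
sigma = bootstrap_std(data, amp, 1000, rng)
detection_level = amp(data) / sigma  # signal-to-noise of the measured amplitude
```

The detection level in sigma is then simply the measured amplitude divided by the bootstrap estimate of its scatter, with no assumption about the estimator's distribution.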
Context. Although there is currently a debate over the significance of the claimed large-scale anomalies in the cosmic microwave background (CMB), their existence is not totally dismissed. In parallel to the debate over their statistical significance, recent work has also focussed on masks and secondary anisotropies as potential sources of these anomalies. Aims. In this work we simultaneously investigate the impact of the method used to account for masked regions and the impact of the integrated Sachs-Wolfe (ISW) effect, the large-scale secondary anisotropy most likely to affect the CMB anomalies. In this sense, our work updates previous studies. Our aim is to identify trends in CMB data from different years and with different mask treatments. Methods. We reconstruct the ISW signal due to 2 Micron All-Sky Survey (2MASS) and NRAO VLA Sky Survey (NVSS) galaxies, effectively reconstructing the low-redshift ISW signal out to z ∼ 1. We account for regions of missing data using the sparse inpainting technique. We test sparse inpainting of the CMB, the large-scale structure, and the ISW signal, and find that it constitutes a bias-free reconstruction method suitable for studying large-scale statistical isotropy and the ISW effect. Results. We focus on three large-scale CMB anomalies: the low quadrupole, the quadrupole/octopole alignment, and the octopole planarity. After sparse inpainting, the low quadrupole becomes more anomalous, whilst the quadrupole/octopole alignment becomes less anomalous. The significance of the low quadrupole is unchanged after subtraction of the ISW effect, while the trend amongst the CMB maps is that both the low quadrupole and the quadrupole/octopole alignment have reduced significance; other hypotheses (e.g. exotic physics) nevertheless remain possible. Our results also suggest that both of these anomalies may be due to the quadrupole alone.
The octopole planarity significance is also reduced after inpainting and after ISW subtraction; however, we do not find that it was very anomalous to begin with.
Abstract. Shape classification using graphs and skeletons usually involves edit operations in order to reduce the influence of structural noise. However, edit distances cannot be readily used within the kernel machine framework, as they generally lead to indefinite kernels. In this paper, we propose a graph kernel based on bags of paths and edit operations that remains positive-definite with respect to the bags. The robustness of this kernel rests on a selection of the paths according to their relevance in the graph. Several experiments demonstrate the efficiency of this approach compared to alternative kernels.
Context. Weak gravitational lensing is an ideal probe of the dark universe. In recent years, several linear methods have been developed to reconstruct the density distribution in the Universe in three dimensions, making use of photometric redshift information to determine the radial distribution of lensed sources. Aims. We aim to address three key problems seen in these methods, namely the bias in the redshifts of detected objects, the line-of-sight smearing seen in reconstructions, and the damping of the amplitude of the reconstruction relative to the underlying density. We also aim to detect structures at higher redshifts than have previously been achieved and to improve the line-of-sight resolution of our reconstructions. Methods. We considered the problem under the framework of compressed sensing (CS). Under the assumption that the data are sparse or compressible in an appropriate dictionary, we constructed a robust estimator and employed state-of-the-art convex optimisation methods to reconstruct the density contrast. For simplicity of implementation, and as a proof of concept of our method, we reduced the problem to one dimension, considering the reconstruction along each line of sight independently. We also assumed an idealised survey in which the redshifts of sources are known. Results. Despite the loss of information inherent in our one-dimensional implementation, we demonstrate that our method is able to accurately reproduce cluster haloes up to a redshift of z_cl = 1.0, deeper than state-of-the-art linear methods. We directly compare our method with these linear methods and demonstrate minimal radial smearing and redshift bias in our reconstructions, as well as reduced damping of the reconstruction amplitude compared to the linear methods. In addition, the CS framework allows us to consider an underdetermined inverse problem, thereby allowing us to reconstruct the density contrast at finer resolution than the input data. Conclusions.
The CS approach allows us to recover the density distribution more accurately than current state-of-the-art linear methods. Specifically, it addresses three key problem areas inherent in linear methods. Moreover, we are able to achieve superresolution and increased high-redshift sensitivity in our reconstructions.
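A CS estimator of the kind described above can be written, under standard sparse-recovery assumptions (the notation below is ours and not necessarily the paper's), as a constrained ℓ1 problem over the density contrast δ:

```latex
\hat{\delta} = \arg\min_{\delta}\; \|\Phi^{*}\delta\|_{1}
\quad \text{subject to} \quad \|\mathbf{Q}\,\delta - \gamma\|_{2} \leq \epsilon ,
```

where γ denotes the lensing measurements, Q the (possibly underdetermined) lensing operator relating density to the observable, Φ a sparsifying dictionary, and ε a bound set by the noise level. Because Q may have more columns than rows, δ can be reconstructed on a grid finer than the input data, which is the super-resolution property referred to above.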
In this paper, we propose a Bayesian MAP estimator for solving deconvolution problems when the observations are corrupted by Poisson noise. Towards this goal, a proper data-fidelity term (the log-likelihood) is introduced to reflect the Poisson statistics of the noise. As a prior, the images to restore are assumed to be positive and sparsely represented in a dictionary of waveforms such as wavelets or curvelets. Both analysis- and synthesis-type sparsity priors are considered. Piecing together the data-fidelity and prior terms, the deconvolution problem boils down to the minimization of a non-smooth convex functional (one for each prior). We establish the well-posedness of each optimization problem, characterize the corresponding minimizers, and solve them by means of proximal splitting algorithms originating from the realm of non-smooth convex optimization theory. Experiments demonstrate the potential applicability of the proposed algorithms to astronomical imaging datasets.
In this paper, we propose two algorithms for solving linear inverse problems when the observations are corrupted by Poisson noise. A proper data-fidelity term (the log-likelihood) is introduced to reflect the Poisson statistics of the noise. As a prior, the images to restore are assumed to be positive and sparsely represented in a dictionary of waveforms. Piecing together the data-fidelity and prior terms, the solution to the inverse problem is cast as the minimization of a non-smooth convex functional. We establish the well-posedness of the optimization problem, characterize the corresponding minimizers, and solve it by means of primal and primal-dual proximal splitting algorithms originating from the field of non-smooth convex optimization. Experimental results on deconvolution and comparisons with prior methods are also reported.
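The proximal splitting algorithms referred to in these abstracts alternate a gradient step on the smooth data-fidelity term with the proximal operator of the ℓ1 penalty, which is componentwise soft-thresholding. A minimal sketch on a toy quadratic fidelity (this illustrates the generic forward-backward iteration, not the papers' full Poisson deconvolution solvers; the data values are made up):

```python
import math

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1, applied componentwise."""
    return [math.copysign(max(abs(x) - t, 0.0), x) for x in v]

def forward_backward(y, lam, step, n_iter):
    """Minimise 0.5*||x - y||^2 + lam*||x||_1 by forward-backward splitting:
    a gradient step on the smooth term, then the l1 prox (soft-thresholding)."""
    x = [0.0] * len(y)
    for _ in range(n_iter):
        grad = [xi - yi for xi, yi in zip(x, y)]        # gradient of the quadratic term
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, grad)], step * lam)
    return x

y = [3.0, -0.5, 1.5, 0.2]   # illustrative noisy coefficients
x_hat = forward_backward(y, lam=1.0, step=1.0, n_iter=50)
# For this separable toy problem the minimizer is soft_threshold(y, lam):
# x_hat -> [2.0, 0.0, 0.5, 0.0]
```

In the actual Poisson setting, the quadratic fidelity above is replaced by the Poisson negative log-likelihood (handled via its gradient or its own proximal operator), but the splitting structure is the same.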
François-Xavier Dupé, Luc Brun. Edition within a graph kernel framework for shape recognition. Graph Based Representation in Pattern Recognition 2009, Venice, Italy, pp. 11-20. Abstract. A large family of shape comparison methods is based on the medial axis transform combined with an encoding of the skeleton by a graph. Despite its many qualities, this encoding of shapes suffers from the non-continuity of the medial axis transform. In this paper, we propose to integrate robustness against structural noise inside a graph kernel. This robustness is based on a selection of the paths according to their relevance and on path edits. This kernel is positive semi-definite, and several experiments demonstrate the efficiency of our approach compared to alternative kernels.
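A bag-of-paths kernel of the general kind described in these two abstracts compares graphs through the mean of a positive-definite kernel over all pairs of paths, which keeps the overall kernel positive-definite (it is a convolution kernel). The sketch below uses a Gaussian kernel on fixed-length path descriptors; all choices (the descriptor, the RBF kernel, the bandwidth) are illustrative and are not the authors' exact construction:

```python
import math

def path_kernel(p, q, sigma=1.0):
    """Gaussian RBF kernel between two path descriptors (equal-length feature lists)."""
    d2 = sum((a - b) ** 2 for a, b in zip(p, q))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def bag_of_paths_kernel(bag1, bag2, sigma=1.0):
    """Mean kernel over the two bags of paths; positive-definite because it is
    built by summing a positive-definite kernel over all path pairs."""
    total = sum(path_kernel(p, q, sigma) for p in bag1 for q in bag2)
    return total / (len(bag1) * len(bag2))

# illustrative path descriptors (e.g. lengths/radii along skeleton branches)
g1 = [[1.0, 2.0], [0.5, 1.5]]
g2 = [[1.1, 2.1], [0.4, 1.4]]
k12 = bag_of_paths_kernel(g1, g2)
k11 = bag_of_paths_kernel(g1, g1)
```

The path-relevance selection and path edits discussed in the abstracts would act before this step, by deciding which paths enter each bag; the mean-over-pairs structure is what preserves positive definiteness for the kernel machine.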