Context. Recent years have seen major developments in radio telescopes and a tremendous increase in their capabilities (sensitivity, angular and spectral resolution, field of view, etc.). Such systems demand more sophisticated techniques not only for transporting, storing, and processing this new generation of radio interferometric data, but also for restoring the astrophysical information contained in them. Aims. In this paper we present a new radio deconvolution algorithm named MORESANE and its application to fully realistic simulated data from MeerKAT, one of the SKA precursors. The method is designed for the difficult case of restoring diffuse astronomical sources that are faint in brightness, complex in morphology, and possibly buried in the dirty beam's side lobes of bright radio sources in the field. Methods. MORESANE is a greedy algorithm that combines complementary types of sparse recovery methods to reconstruct the most appropriate sky model from observed radio visibilities. A synthesis approach is used for reconstructing images, in which the synthesis atoms representing the unknown sources are learned using analysis priors. We applied this new deconvolution method to fully realistic simulations of radio observations of a galaxy cluster and of an HII region in M 31. Results. We show that MORESANE efficiently reconstructs images composed of a wide variety of sources (compact pointlike objects, extended tailed radio galaxies, low-surface-brightness emission) from radio interferometric data. Comparisons with state-of-the-art algorithms indicate that MORESANE provides competitive results in terms of both total flux/surface brightness conservation and fidelity of the reconstructed model. MORESANE seems particularly well suited to recovering diffuse and extended sources, as well as the bright and compact radio sources known to be hosted in galaxy clusters.
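To illustrate the greedy principle underlying deconvolution algorithms of this family, the following is a minimal CLEAN-like sketch in one dimension, not MORESANE itself (whose atoms are learned through wavelet analysis priors): it repeatedly locates the peak of the residual and subtracts a scaled, shifted copy of the point spread function. Function name and parameters are illustrative.

```python
import numpy as np

def greedy_deconvolve_1d(dirty, psf, n_iter=200, gain=0.2, threshold=1e-6):
    """CLEAN-like greedy loop (illustrative sketch, not MORESANE).

    psf is assumed centred, of odd length, with its peak normalised to 1.
    Returns the sky model of point components and the final residual.
    """
    residual = dirty.astype(float).copy()
    model = np.zeros_like(residual)
    half = len(psf) // 2
    for _ in range(n_iter):
        k = int(np.argmax(np.abs(residual)))      # brightest residual pixel
        peak = residual[k]
        if abs(peak) < threshold:                 # nothing significant left
            break
        amp = gain * peak                         # loop gain damps the update
        model[k] += amp
        # subtract the PSF shifted to position k, clipped at the edges
        lo, hi = max(0, k - half), min(len(residual), k + half + 1)
        residual[lo:hi] -= amp * psf[half - (k - lo): half + (hi - k)]
    return model, residual
```

With a single point source the loop reduces the residual peak geometrically by the factor (1 - gain) per iteration, while the model accumulates the source flux.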
Next-generation radio interferometers, like the Square Kilometre Array, will acquire large amounts of data with the goal of improving the size and sensitivity of the reconstructed images by orders of magnitude. The efficient processing of large-scale data sets is therefore of great importance. We propose an acceleration strategy for a recently proposed primal-dual distributed algorithm. A preconditioning approach can incorporate into the algorithmic structure both the sampling density of the measured visibilities and the noise statistics. Using the sampling density information greatly accelerates the convergence speed, especially for highly non-uniform sampling patterns, while relying on the correct noise statistics optimises the sensitivity of the reconstruction. In connection to clean, our approach can be seen as including in the same algorithmic structure both natural and uniform weighting, thereby simultaneously optimising both the resolution and the sensitivity. The method relies on a new non-Euclidean proximity operator for the data fidelity term that generalises the projection onto the ℓ2 ball where the noise lives for naturally weighted data, to the projection onto a generalised ellipsoid incorporating sampling density information through uniform weighting. Importantly, this non-Euclidean modification is only an acceleration strategy for solving the convex imaging problem, whose data fidelity is dictated by the noise statistics alone. We show through simulations with realistic sampling patterns the acceleration obtained using the preconditioning. We also investigate the algorithm's performance for the reconstruction of the 3C129 radio galaxy from real visibilities and compare with multi-scale clean, showing better sensitivity and resolution. Our MATLAB code is available online on GitHub.
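The Euclidean proximity operator behind natural weighting is a simple projection onto the ℓ2 ball in which the noise lives. A minimal sketch follows (the paper's generalised-ellipsoid projection has no closed form and is computed iteratively; the function name is illustrative):

```python
import numpy as np

def project_l2_ball(z, y, eps):
    """Euclidean projection of z onto the ball {x : ||x - y||_2 <= eps}.

    Points inside the ball are left unchanged; points outside are
    pulled radially back onto the sphere of radius eps centred at y.
    """
    d = z - y
    n = np.linalg.norm(d)
    if n <= eps:
        return z
    return y + (eps / n) * d
```

In a primal-dual solver this projection enforces consistency of the model visibilities with the data up to the noise level eps.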
We propose a new approach within the versatile framework of convex optimization to solve the radio-interferometric wideband imaging problem. Our approach, dubbed HyperSARA, solves a sequence of weighted nuclear-norm and ℓ2,1 minimization problems promoting low rankness and joint average sparsity of the wideband model cube. On the one hand, enforcing low rankness enhances the overall resolution of the reconstructed model cube by exploiting the correlation between the different channels. On the other hand, promoting joint average sparsity improves the overall sensitivity by rejecting artefacts present on the different channels. An adaptive preconditioned primal-dual algorithm is adopted to solve the minimization problem. The algorithmic structure is highly scalable to large data sets and allows for imaging in the presence of unknown noise levels and calibration errors. We showcase the superior performance of the proposed approach, reflected in high-resolution images, on simulations and real VLA observations with respect to single-channel imaging and the clean-based wideband imaging algorithm in the wsclean software. Our MATLAB code is available online on GitHub.
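The two priors have well-known proximity operators, which a primal-dual solver applies at each iteration: singular-value thresholding for the nuclear norm (low rankness) and row-wise soft thresholding for the ℓ2,1 norm (joint sparsity). A minimal sketch of both, with illustrative names:

```python
import numpy as np

def prox_nuclear(X, tau):
    """Prox of tau * ||X||_*: soft-threshold the singular values.

    Shrinking and zeroing small singular values promotes low rank.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_l21(X, tau):
    """Prox of tau * ||X||_{2,1}: soft-threshold whole rows by their norm.

    Rows with norm below tau are zeroed, which promotes joint sparsity
    across the columns (e.g. across frequency channels).
    """
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * X
```

In a wideband cube reshaped as (pixels x channels), prox_nuclear couples the channels spectrally while prox_l21 keeps the support of the emission common across them.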
We leverage the Sparsity Averaging Reweighted Analysis (SARA) approach for interferometric imaging, which is based on convex optimisation, for the super-resolution of Cyg A from observations at the frequencies 8.422 GHz and 6.678 GHz with the Karl G. Jansky Very Large Array (VLA). The associated average-sparsity and positivity priors enable image reconstruction beyond instrumental resolution. An adaptive preconditioned primal-dual algorithmic structure is developed for imaging in the presence of unknown noise levels and calibration errors. We demonstrate the superior performance of the algorithm with respect to conventional clean-based methods, reflected in super-resolved images with high fidelity. The high-resolution features of the recovered images are validated by referring to maps of Cyg A at higher frequencies, more precisely 17.324 GHz and 14.252 GHz. We also confirm the recent discovery of a radio transient in Cyg A, revealed in the recovered images of the investigated data sets. Our MATLAB code is available online on GitHub.
In the lead-up to the Square Kilometre Array (SKA) project, several next-generation radio telescopes and upgrades are already being built around the world. These include APERTIF (The Netherlands), ASKAP (Australia), e-MERLIN
We introduce the first AI-based framework for deep, super-resolution, wide-field radio interferometric imaging and demonstrate it on observations of the ESO 137-006 radio galaxy. The algorithmic framework to solve the inverse problem for image reconstruction builds on a recent “plug-and-play” scheme whereby a denoising operator is injected as an image regularizer in an optimization algorithm, which alternates until convergence between denoising steps and gradient-descent data fidelity steps. We investigate handcrafted and learned variants of high-resolution, high dynamic range denoisers. We propose a parallel algorithm implementation relying on automated decompositions of the image into facets and the measurement operator into sparse low-dimensional blocks, enabling scalability to large data and image dimensions. We validate our framework for image formation at a wide field of view containing ESO 137-006 from 19 GB of MeerKAT data at 1053 and 1399 MHz. The recovered maps exhibit significantly more resolution and dynamic range than CLEAN, revealing collimated synchrotron threads close to the galactic core.
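The alternation described above can be sketched as a plug-and-play forward-backward iteration, in which a denoiser takes the place of the proximity operator of an explicit regulariser. Here `A`, `At`, and `denoise` are placeholders for the measurement operator, its adjoint, and the chosen (handcrafted or learned) denoiser:

```python
import numpy as np

def pnp_forward_backward(y, A, At, denoise, step, n_iter=50, x0=None):
    """Plug-and-play forward-backward sketch (illustrative).

    Alternates a gradient step on the data fidelity ||y - A x||^2 / 2
    with a denoising step acting as the image regulariser.
    """
    x = np.zeros_like(At(y)) if x0 is None else x0
    for _ in range(n_iter):
        grad = At(A(x) - y)        # gradient of the data-fidelity term
        x = denoise(x - step * grad)  # denoiser replaces the prox
    return x
```

Convergence of such schemes is typically ensured by constraining the denoiser (e.g. to be firmly nonexpansive), which is part of what the handcrafted and learned variants in the paper address.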
Radio interferometric (RI) data are noisy under-sampled spatial Fourier components of the unknown radio sky, affected by direction-dependent antenna gains. Failure to model these antenna gains accurately results in a radio sky estimate with limited fidelity and resolution. The RI inverse problem has recently been addressed via a joint calibration and imaging approach, which consists in solving a non-convex minimisation task involving suitable priors for the direction-dependent effects (DDEs), namely temporal and spatial smoothness, and sparsity for the unknown radio map via an ℓ1-norm prior, in the context of realistic RI simulations. Building on these developments, we propose to promote sparsity of the radio map via a log-sum prior, enforcing sparsity more strongly than the ℓ1 norm. The resulting minimisation task is addressed via a sequence of non-convex minimisation tasks composed of re-weighted ℓ1 image priors, which are solved approximately. We demonstrate the efficiency of the approach on RI observations of the celebrated radio galaxy Cygnus A obtained with the Karl G. Jansky Very Large Array at X, C, and S bands. More precisely, we showcase that the approach enhances data fidelity significantly while achieving high-resolution, high-dynamic-range radio maps, confirming the suitability of the priors considered for the unknown DDEs and radio image. As a clear qualitative indication of the high fidelity achieved by the data and the proposed approach, we report the detection of three background sources in the vicinity of Cyg A at S band.
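The log-sum prior is typically majorised by a sequence of weighted ℓ1 problems, with weights inversely proportional to the current coefficient magnitudes, so that strong features are shrunk less than under a plain ℓ1 prior. A minimal sketch for the simplest (denoising) data term, with illustrative names and a normalised weighting choice:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: prox of the (weighted) l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_l1_denoise(y, lam, eps=1e-3, n_reweight=5):
    """Majorise-minimise sum(log(|x_i| + eps)) by weighted l1 problems.

    With the data term ||x - y||^2 / 2, each subproblem reduces to a
    weighted soft threshold; large coefficients receive small weights
    and are therefore barely shrunk, unlike with plain l1.
    """
    x = soft(y, lam)                      # first pass: uniform weights
    for _ in range(n_reweight):
        w = eps / (np.abs(x) + eps)       # weights in (0, 1]
        x = soft(y, lam * w)
    return x
```

In the joint calibration and imaging setting the same reweighting is applied to the analysis coefficients of the image inside each (approximately solved) subproblem.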
Upcoming radio interferometers are aiming to image the sky at new levels of resolution and sensitivity, with wide-band image cubes reaching close to the Petabyte scale for SKA. Modern proximal optimization algorithms have shown a potential to significantly outperform CLEAN thanks to their ability to inject complex image models to regularize the inverse problem for image formation from visibility data. They were also shown to be scalable to large data volumes thanks to a splitting functionality enabling the decomposition of data into blocks, for parallel processing of block-specific data-fidelity terms of the objective function. In this work, the splitting functionality is further exploited to decompose the image cube into spatio-spectral facets, and enable parallel processing of facet-specific regularization terms in the objective. The resulting "Faceted HyperSARA" algorithm is implemented in MATLAB (code available on GitHub). Simulation results on synthetic image cubes confirm that faceting can provide a major increase in scalability at no cost in imaging quality. A proof-of-concept reconstruction of a 15 GB image of Cyg A from 7.4 GB of VLA data, utilizing 496 CPU cores on an HPC system for 68 hours, confirms both scalability and a quantum jump in imaging quality from CLEAN. Assuming a slow spectral slope of Cyg A, we also demonstrate that Faceted HyperSARA can be combined with a dimensionality reduction technique, enabling the use of only 31 CPU cores for 142 hours to form the Cyg A image from the same data, while preserving reconstruction quality. Cyg A reconstructed cubes are available online.
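The spatial part of the faceting idea can be sketched as a decomposition of the image into overlapping tiles, each of which a worker can regularise independently; names and the overlap scheme are illustrative:

```python
import numpy as np

def facets(shape, n_rows, n_cols, overlap):
    """Yield 2-D slices covering an image as an n_rows x n_cols grid of
    facets, each extended by `overlap` pixels on every interior edge.

    The overlap lets per-facet regularisers (e.g. faceted wavelet priors)
    avoid blocking artefacts at facet borders.
    """
    H, W = shape
    hs = np.linspace(0, H, n_rows + 1, dtype=int)  # row boundaries
    ws = np.linspace(0, W, n_cols + 1, dtype=int)  # column boundaries
    for i in range(n_rows):
        for j in range(n_cols):
            r0 = max(hs[i] - overlap, 0)
            r1 = min(hs[i + 1] + overlap, H)
            c0 = max(ws[j] - overlap, 0)
            c1 = min(ws[j + 1] + overlap, W)
            yield (slice(r0, r1), slice(c0, c1))
```

In the spatio-spectral setting the same decomposition is applied along the frequency axis as well, so that each facet carries its own regularization term in the objective.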