A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of sparse signal processing, namely a significant reduction in sampling rate and in processing manipulations, are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relation of Prony's approach to the annihilating filter in rate of innovation and the ELP in coding is emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method in noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area, such as linear programming and matching pursuit, are also widely used in compressed sensing.
Finally, the multipath channel estimation problem is shown to have a sparse formulation; algorithms similar to sampling and coding are used to estimate typical multicarrier communication channels.
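The Prony/annihilating-filter connection emphasized above can be illustrated with a minimal sketch: a signal that is a sum of K complex exponentials is annihilated by a degree-K polynomial whose roots encode the frequencies. The function name and parameters below are illustrative, not taken from the tutorial itself.

```python
import numpy as np

def prony_frequencies(x, K):
    """Estimate K exponential frequencies (cycles/sample) from samples x."""
    N = len(x)
    # Annihilation (linear prediction) system:
    # x[n] + h1*x[n-1] + ... + hK*x[n-K] = 0 for n = K..N-1
    A = np.column_stack([x[K - 1 - k : N - 1 - k] for k in range(K)])
    b = -x[K:]
    h = np.linalg.lstsq(A, b, rcond=None)[0]
    # Roots of the annihilating polynomial z^K + h1*z^(K-1) + ... + hK
    roots = np.roots(np.concatenate(([1.0], h)))
    return np.sort(np.angle(roots) / (2 * np.pi))

# Noise-free example: two exponentials at 0.1 and 0.23 cycles/sample
n = np.arange(32)
x = np.exp(2j * np.pi * 0.1 * n) + 0.5 * np.exp(2j * np.pi * 0.23 * n)
print(prony_frequencies(x, 2))  # ~ [0.1, 0.23]
```

In the noise-free case the least-squares step is exact; in noise, the Pisarenko and MUSIC refinements mentioned above replace it with subspace estimates.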
In this correspondence, we focus on the performance analysis of the widely used minimum description length (MDL) source enumeration technique in array processing. Unfortunately, the available theoretical analyses deviate from simulation results. We present an accurate and insightful performance analysis for the probability of missed detection. We also show that the statistical performance of MDL is approximately the same under both deterministic and stochastic signal models. Simulation results show the superiority of the proposed analysis over available results.

Index Terms—Minimum description length (MDL), source enumeration, performance analysis, deterministic signal.

EDICS Category: SAM-PERF, SAM-SDET

I. INTRODUCTION AND PRELIMINARIES

MDL [1] is one of the most successful methods for determining the number of signals present in array processing and for channel order detection [2]. MDL is a low-complexity information-theoretic criterion that does not need the subjective threshold setting usual in detection-theoretic criteria. Its other statistical properties, especially its asymptotic consistency [1], make it a favorable choice for source enumeration. Unfortunately, only a few approximate finite-sample performance analyses of the MDL method are available [3]-[8]. In [3], a simple asymptotic statistical model for the eigenvalues of the sample correlation matrix was used; unfortunately, the theoretical results showed a persistent bias from the simulation results [4]. The next work, [5], gives a computational approach for calculating the probability of false alarm p_fa; in calculating the probability of missed detection p_m, the same inaccurate statistical model as in [3] is used. In [6], theoretical performance bounds were presented instead of an exact performance estimate. A qualitative performance evaluation, in terms of the gap between the noise and signal eigenvalues and the dispersion of each group, is given in [7].
In a recent work [8], a significantly different approach was used. Our simulation results show that [8] improves on [3]. The performance analysis was generalized to non-Gaussian signals, and it was shown that the results reduce to those of [5], [6] for Gaussian signals. We will show that the same modelling errors degrade the analysis in [8] as in [3]-[6]. In this correspondence, we use an approach very similar to [3]-[5] to estimate p_m, including in the analysis the finite-sample O(n⁻¹) biases of the eigenvalues. The noise subspace eigenvalue spread is taken into account, which prevents the signal subspace eigenvalues from approaching σ², the noise variance. The bias of the noise power estimator in MDL is calculated to obtain an excellent match between theoretical and simulation results. We do not calculate p_fa, which is negligible. In previous works, only the case of a stochastic signal has been considered. Here, we use a perturbation analysis to calculate the biases and variances of the eigenvalues under a deterministic signal as well. Using these results, we show t...
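For context, the MDL criterion analyzed above can be sketched as follows. This is the standard Wax-Kailath form applied to the eigenvalues of the sample correlation matrix, not the correspondence's refined analysis; the variable names (p sensors, N snapshots) are ours.

```python
import numpy as np

def mdl_enumerate(eigvals, N):
    """MDL estimate of the number of sources.

    eigvals : eigenvalues of the p x p sample correlation matrix, descending.
    N       : number of snapshots.
    """
    p = len(eigvals)
    costs = []
    for k in range(p):          # hypothesized number of sources
        tail = eigvals[k:]      # presumed noise eigenvalues
        # log of (geometric mean / arithmetic mean) of the noise eigenvalues
        log_ratio = np.mean(np.log(tail)) - np.log(np.mean(tail))
        costs.append(-N * (p - k) * log_ratio
                     + 0.5 * k * (2 * p - k) * np.log(N))
    return int(np.argmin(costs))

# Two sources impinging on a 6-element ULA, 500 snapshots
rng = np.random.default_rng(0)
p, N, K = 6, 500, 2
angles = np.deg2rad([10.0, 35.0])
A = np.exp(1j * np.pi * np.outer(np.arange(p), np.sin(angles)))  # steering matrix
S = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
X = A @ S + 0.1 * (rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N)))
R = X @ X.conj().T / N
lam = np.sort(np.linalg.eigvalsh(R))[::-1]
print(mdl_enumerate(lam, N))  # detects 2 sources
```

No threshold is set by hand: the penalty term 0.5 k (2p - k) log N plays that role, which is the property the introduction highlights.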
Abstract—Signal direction-of-arrival estimation using an array of sensors has been the subject of intensive research and development during the last two decades. Efforts have been directed both toward better solutions for the general data model and toward the development of more realistic models. So far, many authors have assumed the data to be iid samples of a multivariate statistical model. Although this assumption reduces the complexity of the model, it may not hold in situations where the signals show temporal correlation. Some results on the temporally correlated signal model are available in the literature: the temporally correlated stochastic Cramér-Rao bound (CRB) has been calculated, and an instrumental-variable-based method called IV-SSF has been introduced. It has also been shown that the temporally correlated CRB is lower bounded by the deterministic CRB. In this paper, we show that the temporally correlated CRB is also upper bounded by the stochastic iid CRB. We investigate the effect of the temporal correlation of the signals on the best achievable performance. We also show that the IV-SSF method is not efficient and, based on an analysis of the CRB, propose a variation of the method that boosts its performance. Simulation results show the improved performance of the proposed method in terms of lower bias and error variance.
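The sandwich relation stated in this abstract can be written compactly, using the positive semidefinite (Loewner) ordering (the subscripted notation is ours):

```latex
\mathrm{CRB}_{\mathrm{det}}(\boldsymbol{\theta})
\;\preceq\;
\mathrm{CRB}_{\mathrm{corr}}(\boldsymbol{\theta})
\;\preceq\;
\mathrm{CRB}_{\mathrm{iid}}(\boldsymbol{\theta}),
```

where the three bounds are, respectively, the deterministic CRB, the temporally correlated stochastic CRB, and the stochastic iid CRB for the parameter vector θ. The lower bound is from the prior literature; the upper bound is the contribution claimed here.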
Abstract—This letter presents sparse vector signal detection from one-bit compressed sensing measurements, in contrast to previous works, which deal with scalar signal detection. Available results are extended to the vector case, and the GLRT detector and the optimal quantizer design are obtained. Also, a double-detector scheme is introduced in which a sensor-level threshold detector is integrated into the network-level GLRT to improve performance. The detection criteria of the oracle and clairvoyant detectors are also derived. Simulation results show that, with careful design of the threshold detector, the overall detection performance of the double-detector scheme is better than that of the sign-GLRT proposed in [1] and close to that of the oracle and clairvoyant detectors. The proposed detector is also applied to spectrum sensing, where its performance is close to that of the well-known energy detector, which uses real-valued data, even though the proposed detector uses only the sign of the data.
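As a toy illustration (not the letter's GLRT or quantizer design), the following sketch shows why one-bit measurements b = sign(Ax + n) retain detectability of a sparse vector: a simple max-correlation statistic is noticeably larger when a sparse signal is present. All parameters and the statistic itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 400, 100, 3                     # measurements, dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)

def one_bit_stat(x, noise_std=0.1):
    """Max-correlation statistic computed from one-bit (sign) measurements."""
    b = np.sign(A @ x + noise_std * rng.standard_normal(m))
    return np.max(np.abs(A.T @ b))

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = 3.0  # sparse signal under H1

t_h0 = one_bit_stat(np.zeros(n))          # noise only
t_h1 = one_bit_stat(x)                    # sparse signal present
print(t_h0 < t_h1)                        # the statistic separates the hypotheses
```

Thresholding such a statistic at the sensor level, then fusing decisions at the network level, is the flavor of the double-detector scheme described above; the letter's actual detectors are derived from the likelihood ratio.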
In this paper, we address the problem of recovering point sources from two-dimensional low-pass measurements, known as the super-resolution problem. This is a fundamental concern in many applications, such as electronic imaging, optics, microscopy, and line spectral estimation. We assume that the point sources are located in the square [0, 1]² with unknown locations and complex amplitudes. The only available information is low-pass Fourier measurements band-limited to the integer square [−fc, fc]². The signal is estimated by minimizing the Total Variation (TV) norm, which leads to a convex optimization problem. It is shown that if the sources are separated by at least 1.68/fc, there exists a dual certificate that is sufficient for exact recovery.
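In the notation of this abstract, the convex program takes the standard TV-minimization form (the symbol y_n for the observed Fourier coefficients is ours):

```latex
\min_{\tilde{x}} \; \|\tilde{x}\|_{\mathrm{TV}}
\quad \text{subject to} \quad
\mathcal{F}_n \tilde{x} = y_n,
\qquad n \in \{-f_c,\dots,f_c\}^2,
```

where \(\mathcal{F}_n \tilde{x}\) denotes the n-th two-dimensional Fourier coefficient of the candidate measure \(\tilde{x}\). Exact recovery under the stated separation condition is certified by exhibiting a dual polynomial interpolating the sign pattern of the amplitudes on the support.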
A challenging issue in echocardiographic image interpretation is the accurate analysis of small transient motions of the myocardium and valves during real-time visualization. A higher frame rate may reduce this difficulty, and temporal super-resolution (TSR) is useful for illustrating fast-moving structures. In this paper, we introduce a novel framework that optimizes TSR enhancement of echocardiographic images by utilizing temporal information and sparse representation. The goal of this method is to increase the frame rate of echocardiographic videos and thereby enable more accurate analysis of moving structures. For the proposed method, we first derived temporal information by extracting intensity variation time curves (IVTCs) assessed for each pixel. We then designed both low-resolution and high-resolution overcomplete dictionaries based on prior knowledge of the temporal signals and a set of prespecified known functions. Each IVTC can then be described as a linear combination of a few prototype atoms in the low-resolution dictionary. We used the Bayesian compressive sensing (BCS) sparse recovery algorithm to find the sparse coefficients of the signals. We extracted the sparse coefficients and the corresponding active atoms in the low-resolution dictionary to construct new sparse coefficients corresponding to the high-resolution dictionary. Using the estimated atoms and the high-resolution dictionary, a new IVTC with more samples was constructed. Finally, by placing the new IVTC signals in the original IVTC positions, we were able to reconstruct the original echocardiography video with more frames. The proposed method requires neither training of the low-resolution and high-resolution dictionaries nor motion estimation; it does not blur fast-moving objects and does not produce blocking artifacts.
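The dictionary-pair idea above can be sketched minimally as follows. This is not the paper's BCS solver: cosine atoms stand in for the "prespecified known functions", and a simple orthogonal matching pursuit replaces Bayesian compressive sensing; all sizes are illustrative.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: k-sparse code of y in dictionary D."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

n_lo, n_hi, n_atoms = 32, 128, 16
t_lo = np.linspace(0, 1, n_lo)                 # coarse temporal grid (low frame rate)
t_hi = np.linspace(0, 1, n_hi)                 # fine temporal grid (high frame rate)
freqs = np.arange(n_atoms)
D_lo = np.cos(np.pi * np.outer(t_lo, freqs))   # low-resolution dictionary
D_hi = np.cos(np.pi * np.outer(t_hi, freqs))   # matched high-resolution dictionary

y_lo = 2.0 * D_lo[:, 3] - 1.0 * D_lo[:, 7]     # a 2-sparse temporal curve (IVTC stand-in)
coef = omp(D_lo, y_lo, 2)                       # sparse-code on the coarse grid
y_hi = D_hi @ coef                              # synthesize the curve at the higher rate
print(np.nonzero(coef)[0])                      # recovered active atoms
```

The key point mirrored here is that the coefficients are estimated once on the low-rate grid and reused with the matched high-resolution dictionary, so no dictionary training or motion estimation is involved.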