Abstract: This paper proposes a two-phase scheme for removing salt-and-pepper impulse noise. In the first phase, an adaptive median filter is used to identify pixels that are likely to be contaminated by noise (noise candidates). In the second phase, the image is restored using a specialized regularization method applied only to those selected noise candidates. In terms of edge preservation and noise suppression, our restored images show a significant improvement over those restored using nonlinear filters or regularization methods alone. Our scheme can remove salt-and-pepper noise with a noise level as high as 90%.
Index Terms: Adaptive median filter, edge-preserving regularization, impulse noise.
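The first phase described above can be sketched as follows. This is a minimal illustration of the standard adaptive median filter, not the paper's implementation: the window grows until its median is no longer an extreme value, the pixel is replaced only if it is itself extreme, and noise candidates are the pixels the filter changed. The maximum window size `smax` and the toy image are assumptions.

```python
import numpy as np

def adaptive_median(img, smax=7):
    """Adaptive median filter: grow the window until its median is not
    an extreme value, then replace the pixel only if it is extreme."""
    h, w = img.shape
    out = img.astype(float).copy()
    pad = smax // 2
    p = np.pad(img.astype(float), pad, mode='reflect')
    for i in range(h):
        for j in range(w):
            for s in range(3, smax + 1, 2):
                r = s // 2
                win = p[i + pad - r:i + pad + r + 1,
                        j + pad - r:j + pad + r + 1]
                mn, md, mx = win.min(), np.median(win), win.max()
                if mn < md < mx:                  # median is not an impulse
                    if not (mn < img[i, j] < mx):
                        out[i, j] = md            # pixel is extreme: replace it
                    break
            else:                                 # window reached smax
                out[i, j] = md
    return out

# Noise candidates = pixels the filter changed (toy example).
img = np.full((9, 9), 128.0)
img[4, 4], img[2, 6] = 255.0, 0.0                 # one salt and one pepper impulse
candidates = adaptive_median(img) != img
```

Only the flagged pixels would then enter the second-phase regularization; the unflagged pixels stay fixed.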
In this expository paper, we survey some of the latest developments on using preconditioned conjugate gradient methods for solving Toeplitz systems. One of the main results is that the complexity of solving a large class of n-by-n Toeplitz systems is reduced to O(n log n) operations, as compared to the O(n log^2 n) operations required by fast direct Toeplitz solvers. Different preconditioners proposed for Toeplitz systems are reviewed. Applications to Toeplitz-related systems arising from partial differential equations, queueing networks, signal and image processing, integral equations, and time series analysis are given.
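The O(n log n) count comes from the fact that a circulant preconditioner can be applied with FFTs. A minimal sketch with Strang's circulant preconditioner follows; the decaying generating sequence is an illustrative assumption, and for brevity the Toeplitz matrix itself is applied densely (a full O(n log n) solver would also apply it by FFT after embedding it in a circulant).

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import cg, LinearOperator

n = 64
t = 1.0 / (1.0 + np.arange(n)) ** 2      # assumed decaying generating sequence
T = toeplitz(t)                          # symmetric positive definite Toeplitz
b = np.ones(n)

# Strang's circulant preconditioner: keep the central diagonals of T and
# wrap them around; a circulant is diagonalized by the FFT, so applying
# its inverse costs two FFTs per iteration.
k = np.arange(n)
s = np.where(k <= n // 2, t[k], t[(n - k) % n])
eig = np.fft.fft(s).real                 # eigenvalues of the circulant
M = LinearOperator((n, n), dtype=float,
                   matvec=lambda v: np.fft.ifft(np.fft.fft(v) / eig).real)

x, info = cg(T, b, M=M)                  # preconditioned conjugate gradients
```

With a good preconditioner the spectrum of the preconditioned system clusters around 1, so CG converges in a number of iterations essentially independent of n.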
Image inpainting is a fundamental problem in image processing and has many applications. Motivated by recent tight frame based methods for image restoration in either the image or the transform domain, we propose an iterative tight frame algorithm for image inpainting. We establish the convergence of this framelet-based algorithm by interpreting it as an iteration for minimizing a special functional; the proof is carried out within the framework of convex analysis and optimization theory. We also discuss the relationship of our method to other wavelet-based methods. Numerical experiments are given to illustrate the performance of the proposed algorithm.
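The structure of such an iteration, alternating a threshold in the transform domain with reimposition of the known samples, can be sketched in one dimension. A single-level Haar transform stands in here for a framelet system, and the threshold `lam` and iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def haar(x):                        # one-level orthogonal Haar transform
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def ihaar(a, d):                    # inverse Haar transform
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(v, t):                     # soft-thresholding (proximal step)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inpaint(f, known, lam=0.05, iters=100):
    """Alternate: threshold detail coefficients in the transform domain,
    then put the known samples back -- the shape of the iteration."""
    u = np.where(known, f, 0.0)
    for _ in range(iters):
        a, d = haar(u)
        u = np.where(known, f, ihaar(a, soft(d, lam)))
    return u
```

The known samples are reproduced exactly at every step, while the missing ones are filled in by the smoothness that thresholding enforces.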
Blur removal is an important problem in signal and image processing. The blurring matrices obtained by using the zero boundary condition (corresponding to assuming a dark background outside the scene) are Toeplitz matrices for one-dimensional problems and block-Toeplitz-Toeplitz-block matrices for two-dimensional cases. They are computationally intensive to invert, especially in the block case. If the periodic boundary condition is used, the matrices become (block) circulant and can be diagonalized by discrete Fourier transform matrices. In this paper, we consider the use of the Neumann boundary condition (corresponding to a reflection of the original scene at the boundary). The resulting matrices are (block) Toeplitz-plus-Hankel matrices. We show that for symmetric blurring functions, these blurring matrices can always be diagonalized by discrete cosine transform matrices. Thus the cost of inversion is significantly lower than that of using the zero or periodic boundary conditions. We also show that the use of the Neumann boundary condition provides an easy way of estimating the regularization parameter when generalized cross-validation is used. When the blurring function is nonsymmetric, we show that the optimal cosine transform preconditioner of the blurring matrix is equal to the blurring matrix generated by the symmetric part of the blurring function. Numerical results are given to illustrate the efficiency of using the Neumann boundary condition.
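The diagonalization claim for symmetric blurring functions under the Neumann boundary condition can be checked numerically in one dimension. The kernel and problem size below are illustrative; `mode='reflect'` is the reflective (Neumann) boundary extension, and the resulting Toeplitz-plus-Hankel matrix should be diagonalized by the orthonormal DCT-II matrix.

```python
import numpy as np
from scipy.fft import dct
from scipy.ndimage import correlate1d

n = 8
h = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
h /= h.sum()                               # symmetric blurring function

# Build the blurring matrix column by column; 'reflect' imposes the
# Neumann boundary condition, giving a Toeplitz-plus-Hankel matrix A.
A = np.column_stack([correlate1d(e, h, mode='reflect') for e in np.eye(n)])

C = dct(np.eye(n), type=2, norm='ortho', axis=0)   # orthonormal DCT-II matrix
D = C @ A @ C.T                                    # should come out diagonal
```

Since the DCT can be applied in O(n log n) operations and the eigenvalues are read off from one transform, inverting such a matrix is as cheap as in the circulant case.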
Abstract. High-resolution image reconstruction refers to the reconstruction of high-resolution images from multiple low-resolution, shifted, degraded samples of a true image. In this paper, we analyze this problem from the wavelet point of view. By expressing the true image as a function in L^2(R^2), we derive iterative algorithms which recover the function completely, in the L^2 sense, from the given low-resolution functions. These algorithms decompose the function obtained from the previous iteration into different frequency components in the wavelet transform domain and add them into the new iterate to improve the approximation. We apply wavelet (packet) thresholding methods to denoise the function obtained in the previous step before adding it into the new iterate. Our numerical results show that the reconstructed images from our wavelet algorithms are better than those from the Tikhonov least-squares approach. Extension to super-resolution image reconstruction, where some of the low-resolution images are missing, is also considered.
Key words. wavelet, high-resolution image reconstruction, Tikhonov least squares method
AMS subject classifications. 42C40, 65T60, 68U10, 94A08
PII. S1064827500383123
1. Introduction. Many applications in image processing require deconvolving noisy data, for example, the deblurring of astronomical images [11]. The main objective in this paper is to develop algorithms for these applications using a wavelet approach. We will concentrate on one such application, namely, the high-resolution image reconstruction problem. High-resolution images are often desired in many situations but are made impossible by hardware limitations. Increasing the resolution by image processing techniques is therefore of great importance. The earliest formulation of the problem was proposed by Tsai and Huang [24] in 1984, motivated by the need for improved-resolution images from Landsat image data.
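The Tikhonov least-squares approach used above as the comparison baseline reduces, under a periodic-convolution assumption, to a single Fourier-domain filter. The kernel, signal, and regularization parameter below are illustrative assumptions for a one-dimensional sketch.

```python
import numpy as np

def tikhonov_deblur(g, h, alpha):
    """Minimize ||h * u - g||^2 + alpha * ||u||^2, where * is periodic
    convolution: the normal equations are diagonal in the Fourier domain."""
    H = np.fft.fft(h, len(g))
    return np.real(np.fft.ifft(np.conj(H) * np.fft.fft(g)
                               / (np.abs(H) ** 2 + alpha)))
```

The wavelet algorithms in the paper replace this single global filter with iterated decomposition, thresholding, and update steps, which is what yields the improved reconstructions reported.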
Kaltenbacher and Hardie [14] and Kim, Bose, and Valenzuela [15] applied the work of [24] to noisy and blurred images, using least-squares minimization. High-resolution images can also be obtained by mapping several low-resolution images onto a single high-resolution image plane and then interpolating between the nonuniformly spaced samples [3, 23]. The reconstruction can also be put into a Bayesian framework by using a Huber-Markov random field; see, for example, Schultz and Stevenson [22]. Here we follow the approach of Bose and Boo [1] and consider creating high-resolution images of a scene from low-resolution images of the same scene. When
Abstract. The Mumford-Shah model is one of the most important image segmentation models and has been studied extensively in the last twenty years. In this paper, we propose a two-stage segmentation method based on the Mumford-Shah model. The first stage of our method is to find a smooth solution g to a convex variant of the Mumford-Shah model. Once g is obtained, then in the second stage the segmentation is done by thresholding g into different phases. The thresholds can be given by the users or can be obtained automatically using any clustering methods. Because of the convexity of the model, g can be solved efficiently by techniques like the split-Bregman algorithm or the Chambolle-Pock method. We prove that our method is convergent and that the solution g is always unique. In our method, there is no need to specify the number of segments K (K ≥ 2) before finding g. We can obtain any K-phase segmentations by choosing (K − 1) thresholds after g is found in the first stage, and in the second stage there is no need to recompute g if the thresholds are changed to reveal different segmentation features in the image. Experimental results show that our two-stage method performs better than many standard two-phase or multiphase segmentation methods for very general images, including antimass, tubular, MRI, noisy, and blurry images.
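The second stage described above is a plain thresholding of the smooth solution g, which is why different K-phase segmentations can be produced without recomputing g. A minimal sketch (the array values are illustrative):

```python
import numpy as np

g = np.array([[0.1, 0.4, 0.9],
              [0.2, 0.6, 0.8]])            # smooth first-stage solution (toy values)

def threshold_phases(g, thresholds):
    # K-phase segmentation from K - 1 thresholds; g itself is untouched.
    return np.digitize(g, np.sort(np.asarray(thresholds)))

two_phase   = threshold_phases(g, [0.5])         # labels in {0, 1}
three_phase = threshold_phases(g, [0.3, 0.7])    # labels in {0, 1, 2}
```

Changing the thresholds, by hand or via a clustering method such as k-means on the values of g, reveals different segmentation features at negligible cost.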
This paper proposes an image statistic for detecting random-valued impulse noise. With this statistic, we can identify most of the noisy pixels in corrupted images. Combining it with an edge-preserving regularization, we obtain a powerful two-stage method for denoising random-valued impulse noise, even at noise levels as high as 60%. Simulation results show that our method is significantly better than a number of existing techniques in terms of image restoration and noise detection.
Index Terms: random-valued impulse noise, noise detector, edge-preserving regularization, image denoising.
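The paper's own statistic is not reproduced in this abstract. As an illustration of the same idea, a neighborhood rank statistic that scores impulse-likeness, here is the well-known ROAD (rank-ordered absolute differences) statistic of Garnett et al.: the sum of the m smallest absolute differences between a pixel and its 8 neighbors, which is small in smooth regions and on edges but large at isolated impulses.

```python
import numpy as np

def road(img, m=4):
    """Rank-ordered absolute differences: sum of the m smallest absolute
    differences between each pixel and its 8 neighbors."""
    p = np.pad(img.astype(float), 1, mode='reflect')
    diffs = [np.abs(img - p[1 + di:1 + di + img.shape[0],
                            1 + dj:1 + dj + img.shape[1]])
             for di in (-1, 0, 1) for dj in (-1, 0, 1)
             if (di, dj) != (0, 0)]
    return np.sort(np.stack(diffs), axis=0)[:m].sum(axis=0)
```

Thresholding such a statistic yields the candidate set that the second-stage edge-preserving regularization then restores.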