In this paper we present a novel iterative procedure for multichannel image and data reconstruction using Bregman distances. The motivation for our approach is that in many applications multiple channels share a common subgradient with respect to a suitable regularization. This implies desirable properties such as a common edge set (and a common direction of the normals to the level lines) in the case of the total variation (TV). Therefore, we propose to determine each iterate by regularizing each channel with a weighted linear combination of Bregman distances to all other image channels from the previous iteration. In this sense we generalize the Bregman iteration proposed by Osher et al. in [Multiscale Model. Simul., 4 (2005), pp. 460-489] to multichannel images. We prove the convergence of the proposed scheme, analyze stationary points, and present numerical experiments on color image denoising, which show the superior behavior of our approach in comparison to TV, TV with Bregman iterations on each channel separately, and vectorial TV. Further numerical experiments include image deblurring and image inpainting. Additionally, we propose using the infimal convolution of Bregman distances to different channels from the previous iteration to make the scheme independent of the sign, and hence of the direction, of the edge. While this work focuses on TV regularization, the proposed scheme can potentially improve any variational multichannel reconstruction method with a one-homogeneous regularization.
The development and tuning of denoising algorithms is usually based on readily processed test images that are artificially degraded with additive white Gaussian noise (AWGN). While AWGN allows us to easily generate test data in a repeatable manner, it does not reflect the noise characteristics in a real digital camera. Realistic camera noise is signal-dependent and spatially correlated due to the demosaicking step required to obtain full-color images. Hence, the noise characteristic is fundamentally different from AWGN. Using such unrealistic data to test, optimize, and compare denoising algorithms may lead to incorrect parameter tuning or suboptimal choices in research on denoising algorithms. In this paper, we therefore propose an approach to evaluate denoising algorithms with respect to realistic camera noise: we describe a new camera noise model that includes the full processing chain of a single sensor camera. We determine the visual quality of noisy and denoised test sequences using a subjective test with 18 participants. We show that the noise characteristics have a significant effect on visual quality. Quality metrics, which are required to compare denoising results, are applied, and we evaluate the performance of 10 full-reference metrics and one no-reference metric with our realistic test data. We conclude that a more realistic noise model should be used in future research to improve the quality estimation of digital images and videos and to improve the research on denoising algorithms.
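To illustrate the AWGN-versus-camera-noise distinction the abstract draws, here is a minimal sketch contrasting signal-independent AWGN with a heteroscedastic (Poissonian-Gaussian) model in which the per-pixel variance grows affinely with intensity, mimicking photon shot noise plus read noise. This is only the signal-dependence part of the story; the paper's full model additionally covers the camera processing chain, including the spatial correlation introduced by demosaicking, which this sketch omits. The parameters `a` and `b` are illustrative assumptions.

```python
import numpy as np

def add_awgn(img, sigma, rng):
    """Signal-independent additive white Gaussian noise."""
    return img + rng.normal(0.0, sigma, img.shape)

def add_signal_dependent_noise(img, a, b, rng):
    """Heteroscedastic noise: var(pixel) = a * img + b, so brighter
    pixels are noisier (shot noise) on top of a constant read-noise
    floor b.  Spatial correlation from demosaicking is NOT modeled."""
    sigma = np.sqrt(np.clip(a * img + b, 0.0, None))
    return img + sigma * rng.normal(0.0, 1.0, img.shape)
```

A denoiser tuned on AWGN assumes one global sigma, whereas under the second model the effective sigma varies across the image with scene brightness.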
This paper proposes a novel strategy for depth video denoising in RGBD camera systems. Depth map sequences obtained by state-of-the-art Time-of-Flight sensors suffer from high temporal noise. Hence, all high-level RGB video renderings based on the accompanying depth maps' 3D geometry, such as augmented reality applications, will have severe temporal flickering artifacts. The authors approached this limitation by decoupling depth map upscaling from the temporal denoising step. Denoising is thus performed on raw pixels with uncorrelated, pixel-wise noise distributions. The authors' denoising methodology utilizes joint sparse 3D transform-domain collaborative filtering. Therein, they extract RGB texture information to yield a more stable and accurate highly sparse 3D depth block representation for the consecutive shrinkage operation. They show the effectiveness of their method on real RGBD camera data and on a publicly available synthetic data set. The evaluation reveals that the authors' method is superior to state-of-the-art methods. Their method delivers flicker-free depth video streams for future applications.
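The core shrinkage step that collaborative filtering methods apply to a stack of mutually similar blocks can be sketched as follows. This is a generic stand-in, not the authors' method: it uses a 3D FFT in place of the separable 3D transform used in BM3D-style pipelines, and it omits block matching, the RGB-guided grouping, and aggregation entirely. The threshold choice is an assumption.

```python
import numpy as np

def collaborative_hard_threshold(block_stack, thresh):
    """Jointly transform a 3D stack of similar blocks, hard-threshold the
    small transform coefficients (where noise lives, since the signal is
    highly sparse for a stack of similar blocks), and invert.  For real
    input the thresholding preserves Hermitian symmetry, so the inverse
    transform is real up to rounding."""
    coeffs = np.fft.fftn(block_stack)
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return np.real(np.fft.ifftn(coeffs))
```

The better the grouping (in the paper, stabilized by RGB texture information), the sparser the stack's transform, and the more noise the thresholding removes without destroying depth structure.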