Most digital cameras use a color filter array to capture the colors of the scene. Downsampled versions of the red, green, and blue components are acquired, and an interpolation of the three colors is necessary to reconstruct a full representation of the image. This color interpolation is known as demosaicing. The most effective demosaicing techniques proposed in the literature are based on directional filtering and a posteriori decision. In this paper, we present a novel approach to this reconstruction method. A refining step is included to further improve the resulting reconstructed image. The proposed approach has a limited computational cost and performs well even when compared to more demanding techniques.
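The directional-filtering-with-a-posteriori-decision idea can be illustrated with a minimal sketch (not the paper's exact filters): at each non-green site of the Bayer pattern, horizontal and vertical green estimates are formed, and the estimate along the direction with the smaller gradient is kept. The GRBG-style layout, with green samples at positions whose coordinate sum is even, is an assumption of this sketch.

```python
import numpy as np

def interpolate_green(bayer):
    """Directional green interpolation with a posteriori decision.

    Hypothetical sketch: at each red/blue site two directional
    estimates are formed, and the one along the direction with the
    smaller local gradient is chosen. Assumes a GRBG-style Bayer
    layout where green samples sit at even coordinate sums.
    """
    b = np.asarray(bayer, dtype=float)
    green = b.copy()
    h, w = b.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if (i + j) % 2 == 1:  # red or blue site (layout assumption)
                gh = (b[i, j - 1] + b[i, j + 1]) / 2.0  # horizontal estimate
                gv = (b[i - 1, j] + b[i + 1, j]) / 2.0  # vertical estimate
                dh = abs(b[i, j - 1] - b[i, j + 1])     # horizontal gradient
                dv = abs(b[i - 1, j] - b[i + 1, j])     # vertical gradient
                green[i, j] = gh if dh <= dv else gv    # a posteriori decision
    return green
```

A full demosaicer would interpolate red and blue as well, and the paper's refining step would then post-process the result; this sketch covers only the directional decision on the green plane.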
In this paper, we present a novel technique that uses the optimal linear prediction theory to exploit all the existing redundancies in a color video sequence for lossless compression purposes. The main idea is to introduce the spatial, the spectral, and the temporal correlations in the autocorrelation matrix estimate. In this way, we calculate the cross-correlations between adjacent frames and adjacent color components to improve the prediction, i.e., reduce the prediction error energy. The residual image is then coded using a context-based Golomb-Rice coder, where the error modeling is provided by a quantized version of the local prediction error variance. Experimental results show that the proposed algorithm achieves good compression ratios and is robust against scene changes.
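The core of optimal linear prediction is solving the normal equations built from an autocorrelation estimate. The paper gathers spatial, spectral, and temporal neighbours into that estimate; in the hedged sketch below, a simple 1-D causal context stands in for all of them, and the least-squares coefficients minimise the prediction error energy.

```python
import numpy as np

def optimal_predictor(samples, order=3):
    """Least-squares (optimal linear) prediction coefficients.

    Hypothetical sketch: the coefficients minimising the prediction
    error energy solve the normal equations (X^T X) a = X^T y built
    from an autocorrelation estimate. A 1-D causal neighbourhood
    stands in for the paper's spatial/spectral/temporal context.
    """
    x = np.asarray(samples, dtype=float)
    n = len(x)
    # Design matrix of causal contexts and the target vector.
    X = np.array([x[i - order:i] for i in range(order, n)])
    y = x[order:]
    # Normal equations: autocorrelation matrix R and cross-correlation r.
    R = X.T @ X
    r = X.T @ y
    coeffs = np.linalg.solve(R, r)
    residual = y - X @ coeffs
    return coeffs, residual
```

On signals that obey an exact linear recurrence the residual energy drops to zero; on real video data the residual is what the context-based Golomb-Rice coder would then compress.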
In this paper we present a three-step crosstalk correction algorithm for single-sensor still or video cameras equipped with a Bayer color filter array. The first step is performed off-line, during the calibration or the development of the camera; it estimates the sensor response to different colors and analyzes the crosstalk. In the second step, the algorithm corrects "on-the-fly" the raw data of the sensor by using the crosstalk model estimated in the first step. The third, optional, step can be included to remove the residual crosstalk left by the second step. It consists of a low-pass filter and can be omitted if other image rescaling steps are included in the image processing chain. The resulting technique has proved to be effective in removing the crosstalk without introducing any other visual artifacts.
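If crosstalk is modelled as a linear mixing of the ideal colour channels, the on-the-fly correction of the second step amounts to applying the inverse of the mixing matrix estimated during calibration. The sketch below assumes such a per-pixel 3x3 linear model; the paper's actual model and estimation procedure may differ.

```python
import numpy as np

def correct_crosstalk(raw_rgb, crosstalk_matrix):
    """Hypothetical sketch of the on-the-fly correction step.

    Assumes crosstalk acts as a linear mixing of the ideal colour
    channels; the correction applies the inverse of the 3x3 mixing
    matrix estimated during the off-line calibration step.
    """
    inv = np.linalg.inv(crosstalk_matrix)
    # raw_rgb: (..., 3) array of sensor responses; unmix each pixel.
    corrected = raw_rgb @ inv.T
    # Sensor values cannot be negative after correction.
    return np.clip(corrected, 0.0, None)
```

Mixing a known colour with the matrix and then correcting it recovers the original values, which is the invariant the calibration step relies on.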
In this paper we present a lossless compression algorithm for colour video sequences that exploits the spatial, the spectral, and the temporal correlations of the sequence in the RGB colour space using the well-known optimal prediction theory. The main idea is to construct the optimal prediction coefficients by estimating an autocorrelation matrix that captures all these correlations. No colour transformation or motion compensation is applied, because reversible colour transformations are not able to fully decorrelate the three bands of each frame, and motion compensation remarkably increases the complexity of updating the autocorrelation matrix estimate. Furthermore, since our algorithm is not based on motion compensation, it is robust to scene changes. The prediction errors are then coded using a context-based Golomb-Rice coder, with bias cancellation but without run-length coding. To construct the contexts, the prediction errors are modeled using an estimate of their local variance. This estimate considers all the previous prediction errors, using a forgetting factor to improve the adaptability of the proposed algorithm. The quantized estimated variance values are used as contexts for the Golomb-Rice coder; among others, we considered the following solutions:
- 12 contexts, obtained by sampling the standard deviation with a quantization step ∆ = 1 and a saturation threshold equal to 12 [σ12];
- 128 contexts, obtained by sampling the standard deviation with ∆ = 1/3 and a saturation threshold equal to 128/3 [σ128].
The following table reports the coding results obtained by the proposed algorithm compared to JPEG-LS (without using any colour transformation) and JPEG2000 (in lossless mode, using the reversible YDbDr colour transform and the 5/3 DWT). The results show an improvement of about 1.5 bpp and 0.65 bpp over JPEG-LS and JPEG2000, respectively.
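The entropy-coding stage described above can be sketched as follows: signed prediction errors are mapped to non-negative integers, and each mapped value is Golomb-Rice coded as a unary quotient plus a k-bit binary remainder. In the paper's scheme k would be chosen per context from the quantised local error variance; here k is simply a parameter of the sketch.

```python
def map_error(e):
    """Standard zig-zag mapping of a signed prediction error
    to a non-negative integer: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return 2 * e if e >= 0 else -2 * e - 1

def golomb_rice_encode(value, k):
    """Golomb-Rice code of one mapped (non-negative) residual.

    Sketch of the coder named in the abstract: the value is split into
    a unary-coded quotient and a k-bit binary remainder. The per-context
    choice of k from the quantised variance is not reproduced here.
    """
    q = value >> k
    code = "1" * q + "0"  # unary quotient, terminated by a 0
    if k > 0:
        # k-bit binary remainder
        code += format(value & ((1 << k) - 1), f"0{k}b")
    return code
```

Small residuals, which dominate after good prediction, get short codes; the variance-driven contexts exist precisely to keep k matched to the local residual statistics.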