A novel approach to shape-preserving contrast enhancement is presented in this paper. Contrast enhancement is achieved by a local histogram equalization algorithm that preserves the level sets of the image. Common local schemes violate this basic property, thereby introducing spurious objects and modifying the image information. The scheme equalizes the histogram within each connected component of the image; these components are defined from both the grey values and the spatial relations between pixels and, following mathematical morphology, constitute the basic objects in the scene. We give examples for both grey-value and color images.
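For context, classical *global* histogram equalization already preserves level sets, because it applies a single monotone grey-level map to the whole image; it is the local, per-window variants that break this property. A minimal sketch of the global baseline (not the paper's per-connected-component scheme; the function name is illustrative):

```python
import numpy as np

def equalize_histogram(img):
    """Classical global histogram equalization of an 8-bit grey image:
    map each grey level through the normalized cumulative histogram.
    The map is monotone, so level sets are preserved."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                          # normalize to [0, 1]
    lut = np.round(255.0 * cdf).astype(np.uint8)
    return lut[img]
```

Because the look-up table is the same everywhere, local contrast gains are limited, which is what motivates equalizing separately inside each connected component.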
A novel image-sequence denoising algorithm is presented. The proposed approach exploits the self-similarity and redundancy of adjacent frames. The algorithm is inspired by fusion algorithms and, as the number of frames increases, tends to a pure temporal average. Motion compensation by regularized optical-flow methods permits robust patch comparison in a spatiotemporal volume, and principal component analysis ensures the correct preservation of fine texture and detail. An extensive comparison with state-of-the-art methods illustrates the superior performance of the proposed approach, with improved texture and detail reconstruction.
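The limit case mentioned above, the pure temporal average of a motion-compensated sequence, can be sketched as follows (an illustration of the limit only, not the full patch/PCA pipeline; the function name and array layout are assumptions):

```python
import numpy as np

def temporal_average(frames):
    """Pure temporal average of an aligned (motion-compensated) sequence.
    `frames` is a (T, H, W) array of registered frames; averaging T
    independent noisy observations divides the noise variance by T."""
    return np.mean(np.asarray(frames, dtype=np.float64), axis=0)
```

For i.i.d. zero-mean noise of standard deviation sigma, the averaged frame has residual noise sigma / sqrt(T), which is why the fusion scheme approaches this average as more frames become available.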
In this work, we propose a method to segment a 1-D histogram without a priori assumptions about the underlying density function. Our approach rests on a rigorous definition of an admissible segmentation, avoiding both over- and under-segmentation. A fast algorithm leading to such a segmentation is proposed, and the approach is tested on both synthetic and real data. An application to the segmentation of written documents is also presented; it requires the detection of very small histogram modes, which the proposed method detects accurately.
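To make the problem concrete, the naive baseline is to cut the histogram at every strict local minimum. This is only an illustration of what "segmenting a 1-D histogram" means; it over-segments noisy histograms, which is precisely what the paper's admissibility criterion is designed to avoid (the function name is illustrative):

```python
import numpy as np

def split_at_local_minima(hist):
    """Naive 1-D histogram segmentation: cut at every strict local
    minimum and return the resulting [start, end) bin intervals.
    Noise creates many spurious minima, hence over-segmentation."""
    h = np.asarray(hist, dtype=np.float64)
    cuts = [i for i in range(1, len(h) - 1)
            if h[i] < h[i - 1] and h[i] < h[i + 1]]
    bounds = [0] + cuts + [len(h)]
    return [(bounds[k], bounds[k + 1]) for k in range(len(bounds) - 1)]
```

On a clean bimodal histogram this recovers the two modes, but on real data each noise wiggle adds a cut, motivating a statistical test for which cuts are admissible.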
In this paper we present the simplest possible color balance algorithm. Its underlying assumption is that the highest R, G, B values observed in the image must correspond to white, and the lowest to black. The algorithm simply stretches the values of the three channels Red, Green, Blue (R, G, B) as much as it can, so that they occupy the maximal possible range [0, 255], by applying an affine transform ax + b to each channel. Since many images contain a few aberrant pixels that already occupy the values 0 and 255, the proposed method first saturates a small percentage of the pixels with the highest values to 255 and a small percentage of the pixels with the lowest values to 0, before applying the affine transform.
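The two steps just described, clip a small percentage at each end, then affinely stretch each channel to [0, 255], can be sketched as follows (the function name and the saturation parameters `s1`, `s2` are illustrative choices, not the paper's notation):

```python
import numpy as np

def simplest_color_balance(img, s1=1.0, s2=1.0):
    """Per-channel affine stretch of an (H, W, 3) uint8 image to [0, 255],
    after saturating the s1% darkest pixels to 0 and the s2% brightest
    pixels to 255."""
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        channel = img[..., c].astype(np.float64)
        lo = np.percentile(channel, s1)            # value below which s1% lie
        hi = np.percentile(channel, 100.0 - s2)    # value above which s2% lie
        # Affine transform a*x + b with a = 255/(hi-lo), b = -a*lo,
        # then clip: pixels below lo map to 0, above hi map to 255.
        stretched = (channel - lo) * (255.0 / max(hi - lo, 1e-12))
        out[..., c] = np.clip(stretched, 0.0, 255.0).astype(np.uint8)
    return out
```

Choosing `s1 = s2 = 0` reduces this to the plain min/max stretch, which a single aberrant pixel at 0 or 255 can render useless; small nonzero percentages make the stretch robust to such outliers.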
One of the aims of computer vision over the past 30 years has been to recognize shapes by numerical algorithms. What, then, are the geometric features on which shape recognition can be based? In this paper, we review the mathematical arguments leading to a unique definition of planar shape elements. This definition is derived from the requirement of invariance to no fewer than five classes of perturbations, namely noise, affine distortion, contrast changes, occlusion, and background. This leads to a single possibility: shape elements as the normalized, affine-smoothed pieces of the level lines of the image. As a main application, we show the existence of a generic image-comparison technique able to find all shape elements common to two images.