Many applications of histograms in image processing are well known. However, applying these techniques in the transform domain, by way of a transform coefficient histogram, has not yet been fully explored. This paper proposes three methods of image enhancement: a) logarithmic transform histogram matching, b) logarithmic transform histogram shifting, and c) logarithmic transform histogram shaping using Gaussian distributions. They are based on the properties of the logarithmic transform domain histogram and on histogram equalization. The presented algorithms exploit the fact that the relationship between stimulus and perception is logarithmic, and they combine enhancement quality with computational efficiency. A human visual system-based quantitative measure of image contrast improvement is also defined; it helps select the best parameters and transform for each enhancement. A number of experimental results are presented to illustrate the performance of the proposed algorithms.
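As a rough illustration of the idea behind these methods, the sketch below applies a constant shift to the histogram of log-magnitude transform coefficients and then inverts the transform. The choice of the DCT as the transform and the particular shift rule are assumptions made only for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def log_histogram_shift(image, shift=0.05):
    """Illustrative sketch: shift the histogram of log-magnitude
    transform coefficients by a constant, then invert the transform.
    Assumes an 8-bit grayscale image; DCT and the shift rule are
    stand-ins for the paper's formulation."""
    coeffs = dctn(image.astype(np.float64), norm='ortho')
    signs = np.sign(coeffs)
    log_mag = np.log1p(np.abs(coeffs))   # log-transform-domain representation
    log_mag += shift * log_mag.max()     # constant histogram shift
    enhanced = idctn(signs * np.expm1(log_mag), norm='ortho')
    return np.clip(enhanced, 0, 255)
```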
This paper presents a new class of "frequency domain"-based signal/image enhancement algorithms, including magnitude reduction, log-magnitude reduction, iterative magnitude reduction, and a log-reduction zonal magnitude technique. These algorithms are described and applied to the detection and visualization of objects within an image. The new technique is based on the so-called sequency-ordered orthogonal transforms, which include the well-known Fourier, Hartley, cosine, and Hadamard transforms, as well as new enhancement parametric operators. A wide range of image characteristics can be obtained from a single transform by varying the parameters of the operators. We also introduce a quantitative method for measuring signal/image enhancement, called EME, which helps select the best parameters and transform for each enhancement. A number of experimental results are presented to illustrate the performance of the proposed algorithms.
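To make the flavor of these operators concrete, here is a minimal sketch of one classical transform-domain magnitude-reduction operator (alpha-rooting) together with a plausible EME implementation, and how the measure can drive parameter selection. The block size, epsilon guard, and use of the DCT are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def eme(image, block=8, eps=1e-4):
    """EME (measure of enhancement): average over non-overlapping
    blocks of 20*log10(Imax/Imin). Assumes a 2D grayscale array;
    block size and eps guard are illustrative choices."""
    img = image.astype(np.float64)
    h, w = img.shape
    score, n = 0.0, 0
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blk = img[i:i + block, j:j + block]
            score += 20.0 * np.log10((blk.max() + eps) / (blk.min() + eps))
            n += 1
    return score / max(n, 1)

def alpha_rooting(image, alpha=0.9):
    """Magnitude-reduction ('alpha-rooting') sketch: keep each
    coefficient's sign, raise its magnitude to a power alpha < 1.
    The DCT stands in for any sequency-ordered transform."""
    X = dctn(image.astype(np.float64), norm='ortho')
    X_enh = np.sign(X) * np.abs(X) ** alpha  # magnitude reduction
    return np.clip(idctn(X_enh, norm='ortho'), 0, 255)

# Parameter selection: pick the alpha that maximizes EME, e.g.
# best = max(np.arange(0.7, 1.0, 0.02), key=lambda a: eme(alpha_rooting(img, a)))
```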
Varying scene illumination poses many challenging problems for machine vision systems. One such issue is developing global enhancement methods that work effectively across varying illumination. In this paper, we introduce two novel image enhancement algorithms: edge-preserving contrast enhancement, which better preserves edge details while enhancing contrast in images with varying illumination, and a novel multihistogram equalization method which utilizes the human visual system (HVS) to segment the image, allowing fast and efficient correction of nonuniform illumination. We then extend this HVS-based multihistogram equalization approach to create a general enhancement method that can utilize any combination of enhancement algorithms for improved performance. Additionally, we propose new quantitative measures of image enhancement, called the logarithmic Michelson contrast measure (AME) and the logarithmic AME by entropy. Many image enhancement methods require the selection of operating parameters, which are typically chosen subjectively; these new measures allow the selection to be automated. We present experimental results for these methods and compare them against other leading algorithms.
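A minimal sketch of the multihistogram-equalization idea follows. Fixed gray-level thresholds stand in for the paper's HVS-based segmentation; that substitution is an assumption made only for illustration.

```python
import numpy as np

def multi_histogram_equalize(image, thresholds=(64, 128, 192)):
    """Sketch of multihistogram equalization: split the gray-level
    range into segments (fixed thresholds stand in for HVS-based
    segmentation) and equalize each segment's histogram within its
    own range, limiting the brightness drift of global equalization."""
    img = image.astype(np.uint8)
    out = np.zeros_like(img)
    bounds = (0,) + tuple(thresholds) + (256,)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (img >= lo) & (img < hi)
        if not mask.any():
            continue
        hist, _ = np.histogram(img[mask], bins=hi - lo, range=(lo, hi))
        cdf = np.cumsum(hist).astype(np.float64)
        cdf /= cdf[-1]
        out[mask] = lo + np.round(cdf[img[mask] - lo] * (hi - lo - 1)).astype(np.uint8)
    return out
```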
No-reference (NR) image quality assessment is essential for evaluating the performance of image enhancement and retrieval algorithms. Much effort has been made in recent years to develop objective NR grayscale and color image quality metrics that correlate with perceived quality evaluations. Unfortunately, only limited success has been achieved, and most existing NR quality assessment methods are feasible only when prior knowledge about the types of image distortion is available. This paper presents: a) a new NR contrast-based grayscale image contrast measure, Root Mean Enhancement (RME); b) an NR color RME contrast measure, CRME, which explores the three-dimensional contrast relationships of the RGB color channels; c) an NR color quality measure, Color Quality Enhancement (CQE), which is based on a linear combination of colorfulness, sharpness, and contrast. Computer simulations demonstrate that each measure has its own advantages: the CRME measure is fast and suitable for real-time processing of low-contrast images, while the CQE measure can be used for a wider variety of distorted images. The effectiveness of the presented measures is demonstrated using the TID2008 database. Experimental results also show strong correlations between the presented measures and the Mean Opinion Score (MOS). Index Terms: no-reference (NR) measures, color contrast measure, color quality measure, Root Mean Enhancement (RME), Color Root Mean Enhancement (CRME), Color Quality Enhancement (CQE)
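The sketch below illustrates a CQE-style score as a weighted combination of colorfulness, sharpness, and contrast. The component formulas (Hasler-Suesstrunk-style colorfulness, mean gradient magnitude for sharpness, RMS contrast in place of RME) and the weights are illustrative stand-ins, not the paper's fitted definitions.

```python
import numpy as np

def cqe(rgb, weights=(0.4, 0.3, 0.3)):
    """Illustrative CQE-style score: weighted sum of colorfulness,
    sharpness, and contrast. Weights and component formulas are
    assumptions; the paper fits its own linear combination and
    uses an RME-based contrast term."""
    img = rgb.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]

    # Colorfulness from opponent-channel statistics (Hasler-Suesstrunk style).
    rg, yb = r - g, 0.5 * (r + g) - b
    colorfulness = np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())

    gray = 0.299 * r + 0.587 * g + 0.114 * b
    gy, gx = np.gradient(gray)
    sharpness = np.mean(np.hypot(gx, gy))  # mean gradient magnitude
    contrast = gray.std()                  # RMS contrast as a stand-in for RME

    w1, w2, w3 = weights
    return w1 * colorfulness + w2 * sharpness + w3 * contrast
```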
Cross-modality face recognition is an emerging topic due to the widespread use of different sensors in everyday applications. The development of face recognition systems relies heavily on existing databases for evaluation and for obtaining training examples for data-hungry machine learning algorithms. However, no currently available public face database includes more than two modalities for the same subject. In this work, we introduce the Tufts Face Database, which includes images acquired in various modalities: photograph images, thermal images, near-infrared images, a recorded video, a computerized facial sketch, and 3D images of each volunteer's face. An Institutional Review Board protocol was obtained, and images were collected from students, staff, faculty, and their family members at Tufts University. The database includes over 10,000 images from 113 individuals from more than 15 countries, with various gender identities, ages, and ethnic backgrounds. The contributions of this work are: 1) a detailed description of the content and acquisition procedure for the images in the Tufts Face Database; 2) the public availability of the Tufts Face Database to researchers worldwide, which will allow the assessment and creation of more robust, consistent, and adaptable recognition algorithms; 3) a comprehensive, up-to-date review of face recognition systems and face datasets.
This paper introduces a new unsharp masking (UM) scheme, called nonlinear UM (NLUM), for mammogram enhancement. The NLUM offers users the flexibility 1) to embed different types of filters into the nonlinear filtering operator; 2) to choose different linear or nonlinear operations for the fusion process that combines the enhanced filtered portion of the mammogram with the original mammogram; and 3) to perform the NLUM parameter selection manually or by using a quantitative enhancement measure to obtain the optimal enhancement parameters. We also introduce a new enhancement measure, called the second-derivative-like measure of enhancement, which is shown to outperform other measures in evaluating the visual quality of image enhancement. The comparison and evaluation of enhancement performance demonstrate that the NLUM can improve disease diagnosis by enhancing fine details in mammograms with no a priori knowledge of the image contents. Human-visual-system-based image decomposition is used for the analysis and visualization of mammogram enhancement.
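A compact sketch of the generalized unsharp-masking idea behind NLUM is given below. The Gaussian detail filter and the two fusion rules are illustrative choices only, since NLUM is precisely the framework that lets users embed their own filters and fusion operations.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nlum_sketch(image, gain=1.5, sigma=2.0, fuse='add'):
    """Generalized unsharp-masking sketch in the spirit of NLUM:
    an embeddable filter produces a detail signal, and a selectable
    fusion rule recombines it with the original image. Filter and
    fusion rules here are illustrative stand-ins."""
    img = image.astype(np.float64)
    detail = img - gaussian_filter(img, sigma)  # high-frequency portion
    if fuse == 'add':                           # linear fusion
        out = img + gain * detail
    elif fuse == 'mult':                        # a simple nonlinear fusion
        out = img * (1.0 + gain * detail / (np.abs(img) + 1.0))
    else:
        raise ValueError(f"unknown fusion rule: {fuse}")
    return np.clip(out, 0, 255)
```

In the spirit of the abstract's third point, the gain and sigma parameters could be swept and scored with a quantitative enhancement measure rather than chosen by hand.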
Image processing technologies such as image enhancement generally utilize linear arithmetic operations to manipulate images. Recently, Jourlin and Pinoli successfully used the logarithmic image processing (LIP) model for several image processing applications, such as image enhancement and segmentation. In this paper, we introduce a parameterized LIP (PLIP) model that spans both the linear arithmetic and LIP operations, and all scenarios in between, within a single unified model. We also introduce both frequency- and spatial-domain PLIP-based image enhancement methods, including the PLIP Lee's algorithm, PLIP bihistogram equalization, and PLIP alpha rooting. Computer simulations and comparisons demonstrate that the new PLIP model allows the user to obtain improved enhancement performance by changing only the PLIP parameters, to yield better image fusion results by utilizing PLIP addition or image multiplication, to represent a larger span of cases than the LIP and linear arithmetic cases by changing parameters, and to utilize the logarithmic exponential operation for image fusion and enhancement.
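For reference, the core PLIP operations on gray-tone functions g = M - I can be sketched as follows; setting the parameter gamma to M recovers the classical LIP model, while letting gamma grow large approaches ordinary linear arithmetic, which is the "spanning" property described above. The default gamma below is only an example setting.

```python
import numpy as np

def plip_add(g1, g2, gamma=256.0):
    """PLIP addition of two gray tones g = M - I.
    gamma = M recovers classical LIP; as gamma -> infinity this
    approaches ordinary addition."""
    return g1 + g2 - g1 * g2 / gamma

def plip_scalar_mult(c, g, gamma=256.0):
    """PLIP multiplication of a gray tone by a real scalar c.
    Requires gamma >= M so the base (1 - g/gamma) stays nonnegative.
    As gamma -> infinity this approaches ordinary scaling c*g."""
    return gamma - gamma * (1.0 - g / gamma) ** c

# Example: PLIP-average two 8-bit images I1, I2 with M = 256.
# g1, g2 = 256.0 - I1, 256.0 - I2
# g_avg = plip_scalar_mult(0.5, plip_add(g1, g2))
# I_avg = 256.0 - g_avg
```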