In this paper, a novel fractional-order fusion model (FFM) is presented for low-light image enhancement. Existing image enhancement methods do not adequately extract content from low-light areas, suppress noise, or preserve naturalness. To address these problems, the main contributions of this paper are the use of a fractional-order mask and a fusion framework to enhance low-light images. First, the fractional mask is used to extract illumination from the input image. Second, the image exposure is adjusted to make the dark regions visible. Finally, the fusion approach extracts more of the hidden content from dim areas. According to the experimental results, the fractional-order differential preserves the visual appearance much better than traditional integer-order methods. The FFM works well for images captured under complex as well as ordinary low-light conditions. It also achieves a trade-off among contrast improvement, detail enhancement, and preservation of the natural feel of the image. Experimental results reveal that the proposed model achieves promising results and extracts more invisible content from dark areas. Qualitative and quantitative comparisons with several recent, advanced state-of-the-art algorithms show that the proposed model is robust and efficient.
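The abstract does not specify the fractional-order mask itself. As a rough illustration only, most fractional-order masks in image processing are built from Grünwald–Letnikov differential coefficients; the sketch below shows that construction applied along one image row, with the order v and window length n as illustrative choices rather than the paper's actual parameters:

```python
import numpy as np

def gl_coefficients(v, n):
    """Grünwald–Letnikov coefficients c_k = (-1)^k * C(v, k),
    via the recurrence c_0 = 1, c_k = c_{k-1} * (k - 1 - v) / k."""
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (k - 1 - v) / k
    return c

def fractional_diff_1d(row, v=0.5, n=3):
    """Order-v fractional differential of one image row (a 1-D mask;
    a full 2-D mask applies this along several directions)."""
    c = gl_coefficients(v, n)
    out = np.zeros_like(row, dtype=float)
    for k in range(n):
        out[k:] += c[k] * row[:len(row) - k]
    return out
```

For v = 1 the coefficients reduce to [1, -1], i.e. the ordinary first difference, which is the sense in which fractional orders generalize the integer-order masks the abstract compares against.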
Mitigation measures and control strategies relating to the novel coronavirus disease 2019 (COVID-19) have been widely applied in many countries to reduce the transmission of this pandemic disease. China was the first country to implement a strict lockdown policy to control COVID-19, at a time when countries worldwide were struggling to manage their COVID-19 cases. However, lockdown causes numerous changes in air-quality patterns because of the reduced traffic and decreased human mobility it brings about. To study the impact of the strict control measures against the new COVID-19 epidemic on the air quality of Hubei in early 2020, the air-quality monitoring data of four of Hubei's cities, namely Huangshi, Yichang, Jingzhou, and Wuhan, from 2019 to 2021 (specifically, 1 January to 30 August of each year) were examined to analyze the characteristics of the temporal and spatial distribution. All air pollutants decreased during the active-COVID-19 period, with a maximum decrease of 26% observed in PM10, followed by 23% in PM2.5, and a minimum decrease of 5% observed in O3. Changes in air pollutants from 2017 to 2021 were also compared, and a decrease in all pollutants through to 2020 was found. The air-quality index (AQI) decreased by 22% during the active-COVID-19 period but increased by 2% post-COVID-19, which suggests that air quality may worsen in the future. A path-analysis model was developed to further understand the relationship between the AQI and air-quality patterns. The path analysis shows a strong correlation between the AQI and both PM10 and PM2.5; however, its correlation with the other air pollutants is weak. Regression analysis shows a similar pattern, with a strong relationship between the AQI and PM10 (r2 = 0.97) and PM2.5 (r2 = 0.93).
Although the COVID-19 pandemic had numerous negative effects on human health and the global economy, the reduction in air pollution and the significant improvement in ambient air quality due to lockdowns likely provided substantial short-term health benefits. Governments must implement policies to control the environmental issues that cause poor air quality in the post-COVID-19 period.
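The r2 values quoted above (0.97 for PM10, 0.93 for PM2.5) come from simple least-squares regression of the AQI on each pollutant. A minimal sketch of how such a coefficient of determination is computed is below; the sample data are made up for illustration and are not the Hubei measurements:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple least-squares fit y ~ a*x + b."""
    a, b = np.polyfit(x, y, 1)          # slope and intercept of the fitted line
    y_hat = a * x + b
    ss_res = np.sum((y - y_hat) ** 2)   # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical pollutant concentrations and AQI readings:
pm10 = np.array([40.0, 55.0, 70.0, 90.0, 120.0])
aqi = np.array([38.0, 52.0, 75.0, 88.0, 118.0])
print(r_squared(pm10, aqi))
```

An r2 near 1 indicates the pollutant explains almost all of the AQI's variance, which is what makes PM10 and PM2.5 the dominant drivers in the analysis above.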
Images are an important medium for representing meaningful information. It may be difficult for computer vision techniques and humans to extract valuable information from images with low illumination. The enhancement of low-quality images is currently a challenging task in image processing and computer graphics. Although there are many algorithms for image enhancement, existing techniques often produce defective results in regions of the image with intense or normal illumination, and they also inevitably introduce visual artifacts. A model used for image enhancement must perform the following tasks: detail preservation, contrast improvement, color correction, and noise suppression. In this paper, we propose a framework based on camera-response and weighted-least-squares strategies. First, the image exposure is adjusted using a brightness transformation to obtain the correct camera-response model, and an illumination estimation approach is used to extract a ratio map. Then, the proposed model adjusts every pixel according to the calculated exposure map and Retinex theory. Additionally, a dehazing algorithm is used to remove haze and improve the contrast of the image. Color-constancy parameters restore the true colors of low- to average-quality images. Finally, a detail-enhancement approach preserves naturalness and extracts more details to improve the visual quality of the image. Experimental evidence and a comparison with several recent state-of-the-art algorithms demonstrate that the designed framework is effective and can efficiently enhance low-light images.
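The abstract does not give the brightness transformation explicitly. A beta-gamma model is a common form for a camera-response-based exposure adjustment in this literature, and can be sketched as follows; the constants a and b are fitted values taken from published exposure-correction work, not from this paper, and k is the exposure ratio:

```python
import numpy as np

def adjust_exposure(img, k, a=-0.3293, b=1.1258):
    """Beta-gamma camera-response model: g(P, k) = beta * P**gamma,
    with gamma = k**a and beta = exp(b * (1 - k**a)).
    img is expected in [0, 1]; k > 1 brightens, k < 1 darkens."""
    gamma = k ** a
    beta = np.exp(b * (1.0 - gamma))
    return np.clip(beta * np.power(img, gamma), 0.0, 1.0)
```

Note that k = 1 gives gamma = 1 and beta = 1, i.e. the identity mapping, so the model behaves sensibly at the original exposure.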
Building detection in satellite images is an essential field of research in remote sensing and computer vision. Numerous techniques and algorithms are currently used to perform building detection, and different algorithms have been proposed to extract building objects from high-resolution satellite images with standard contrast. However, detecting buildings in low-contrast satellite images with results comparable to those of past studies on normal-contrast images is a challenging task and may play an integral role in a wide range of applications. As this topic has received significant attention in recent years, this manuscript proposes a methodology for detecting buildings in low-contrast satellite images. To enhance the visualization of satellite images, the contrast of an image is first optimized using singular value decomposition (SVD) based on the discrete wavelet transform (DWT). Second, a line-segment detection scheme is applied to accurately detect building line segments. Third, the detected line segments are grouped hierarchically to establish the relationships among them, and the complete contours of the buildings are obtained to form candidate rectangular buildings. The results of this method are compared with those of existing approaches based on high-resolution images with reasonable contrast. The proposed method achieves high performance and thus yields more diversified and insightful results than conventional techniques.
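The DWT-SVD contrast step mentioned above typically works by rescaling the singular values of the low-frequency (LL) subband so that the dominant singular value matches that of a histogram-equalized copy. The sketch below shows one plausible form of this idea using a one-level Haar wavelet; the patch structure and the use of plain histogram equalization are simplifying assumptions, not this paper's exact procedure:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform into LL, LH, HL, HH subbands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 2.0, (a - b + c - d) / 2.0,
            (a + b - c - d) / 2.0, (a - b - c + d) / 2.0)

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out

def equalize(img, bins=256):
    """Simple histogram equalization of a [0, 1] grayscale image."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]
    idx = np.clip((img * (bins - 1)).astype(int), 0, bins - 1)
    return cdf[idx]

def dwt_svd_enhance(img):
    """Scale the LL subband's singular values toward those of an
    equalized copy, then reconstruct."""
    ll, lh, hl, hh = haar_dwt2(img)
    ll_eq, _, _, _ = haar_dwt2(equalize(img))
    u, s, vt = np.linalg.svd(ll, full_matrices=False)
    s_eq = np.linalg.svd(ll_eq, compute_uv=False)
    xi = s_eq.max() / s.max()          # contrast-correction factor
    ll_new = u @ np.diag(xi * s) @ vt
    return np.clip(haar_idwt2(ll_new, lh, hl, hh), 0.0, 1.0)
```

Because only the LL subband is rescaled, the high-frequency detail subbands (edges, building outlines) pass through unchanged, which is what makes this attractive as a pre-processing step for line-segment detection.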
Images captured under varying light conditions suffer from deficient contrast, low brightness, latent colors, and high noise. Numerous methods have been developed for image enhancement. However, these methods are suitable only for enhancing specific types of images (e.g., overexposed or underexposed) and fail to restore artifact-free results for other types of images. Therefore, in this paper, we present an automatic image enhancement method capable of producing quality results for all types of images captured under uneven exposure conditions (e.g., backlit, non-uniformly lit, overexposed, one-sidedly illuminated, and night-time images). First, images are categorized using a convolutional neural network (CNN) to determine their class, and different weight-coefficient values are obtained for further processing. Then, images are converted into photonegative form to obtain an initial transmission map using a bright channel prior. Next, L1-norm regularization is adopted to refine the scene transmission. In addition, the environmental light is estimated using an effective filter. Finally, the image degradation model is applied to obtain the enhanced results. Post-processing of the images comprises two steps: denoising and detail enhancement. The denoising model is applied only when images are captured in extremely low-light conditions, whereas a smooth layer obtained using L1-norm regularization enhances details in partially over- and underexposed images. Extensive experiments reveal the effectiveness of the proposed approach compared with other state-of-the-art algorithms.

INDEX TERMS Exposure correction, low-light conditions, detail enhancement, image degradation model.
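The photonegative step above exploits the fact that the bright channel of an image corresponds to the dark channel of its negative, so the inverted image can be treated like a hazy one. A minimal sketch of that initial transmission estimate is below; the patch size, the omega weight, and the scalar environmental light are illustrative assumptions, and the L1-norm refinement and estimated light filter from the paper are omitted:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel min over RGB followed by a local min filter."""
    per_pixel = img.min(axis=2)
    r = patch // 2
    padded = np.pad(per_pixel, r, mode="edge")
    h, w = per_pixel.shape
    out = np.empty_like(per_pixel)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def initial_transmission(img, light=1.0, omega=0.95, patch=3):
    """Coarse transmission map from the photonegative:
    t = 1 - omega * dark_channel(negative / A), clipped away from zero.
    min over RGB of (1 - I) equals 1 - max over RGB of I, i.e. the
    bright channel prior applied to the original image."""
    negative = 1.0 - img
    t = 1.0 - omega * dark_channel(negative / light, patch)
    return np.clip(t, 0.05, 1.0)
```

Intuitively, dark regions of the original produce a bright negative, hence a small transmission value, which is exactly where the degradation model applies the strongest correction.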