Stripe noise severely degrades image quality in infrared imaging systems. Existing destriping algorithms still struggle to balance noise suppression, detail preservation, and real-time performance, which hinders their application in the spectral imaging and signal processing fields. To solve this problem, this paper presents an innovative wavelet deep neural network designed from the transform-domain perspective, which takes full account of the intrinsic characteristics of stripe noise and the complementary information between the coefficients of different wavelet sub-bands to accurately estimate the noise at a lower computational load. In addition, a special directional regularizer is defined to separate scene details from stripe noise more thoroughly and recover the details more accurately. Extensive experiments on simulated and real data demonstrate that the proposed method outperforms several classical destriping methods in both quantitative and qualitative assessments. INDEX TERMS Neural networks, image denoising, infrared image sensors, wavelet transforms.
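The transform-domain intuition behind such methods, that column stripes survive the vertical low-pass filter and concentrate in one detail sub-band, can be sketched with a plain one-level Haar DWT. This is a minimal numpy illustration of the sub-band behaviour, not the paper's network; all function names and parameters below are ours:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform. Sub-band naming: first letter is the
    filter along the vertical direction (rows), second along the horizontal."""
    lo_v = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical low-pass
    hi_v = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical high-pass
    LL = (lo_v[:, 0::2] + lo_v[:, 1::2]) / 2.0
    LH = (lo_v[:, 0::2] - lo_v[:, 1::2]) / 2.0   # vertical low, horizontal high
    HL = (hi_v[:, 0::2] + hi_v[:, 1::2]) / 2.0
    HH = (hi_v[:, 0::2] - hi_v[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    h, w = LL.shape
    lo_v = np.empty((h, 2 * w)); hi_v = np.empty((h, 2 * w))
    lo_v[:, 0::2] = LL + LH; lo_v[:, 1::2] = LL - LH
    hi_v[:, 0::2] = HL + HH; hi_v[:, 1::2] = HL - HH
    img = np.empty((2 * h, 2 * w))
    img[0::2, :] = lo_v + hi_v
    img[1::2, :] = lo_v - hi_v
    return img

rng = np.random.default_rng(0)
clean = rng.normal(100.0, 5.0, (64, 64))
# Column stripe noise: a per-column offset, constant down each column
stripes = np.tile(rng.normal(0.0, 10.0, (1, 64)), (64, 1))
noisy = clean + stripes

LL, LH, HL, HH = haar_dwt2(noisy)
# The stripes are constant vertically, so they pass the vertical low-pass
# untouched and their horizontal variation lands in LH. Zeroing LH removes
# most of the stripe energy (the pair-mean residue stays in LL, which is
# what recursive multi-scale processing would handle).
denoised = haar_idwt2(LL, np.zeros_like(LH), HL, HH)
```

A learned network replaces the crude "zero the sub-band" step with a per-coefficient estimate, which is why the complementary information across sub-bands matters.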
Existing fixed pattern noise reduction (FPNR) methods are easily affected by the motion state of the scene and the working condition of the image sensor, which leads to over-smoothing effects, ghosting artifacts, and a slow convergence rate. To address these issues, we design an innovative cascade convolutional neural network (CNN) model with residual skip connections to realize single-frame blind FPNR without any parameter tuning. Moreover, a coarse-fine convolution (CF-Conv) unit is introduced to extract complementary features at various scales and fuse them to capture more spatial information. Inspired by the success of the visual attention mechanism, we further propose a particular spatial-channel noise attention unit (SCNAU) to separate the scene details from fixed pattern noise more thoroughly and recover the real scene more accurately. Experimental results on test data demonstrate that the proposed cascade CNN-FPNR method outperforms existing FPNR methods in both visual effect and quantitative assessment.

affected by the fixed pattern noise (FPN), which is mainly caused by the spatially non-uniform response of individual detectors in the sensor [6][7]. More seriously, spatial FPN generally drifts with time, which makes the problem more challenging [8][9][10][11]. As a result, FPN causes a significant decline in imaging quality and decreases the precision of object detection and recognition. To meet this challenge, cost-effective fixed pattern noise reduction (FPNR) techniques based on signal processing are continually investigated and applied in nearly all infrared imaging systems. Existing FPNR algorithms are mainly divided into two primary categories: reference-based FPNR (RB-FPNR) and scene-based FPNR (SB-FPNR) [12][13][14]. The RB-FPNR methods remove the FPN according to fixed calibration parameters calculated from the response to blackbody radiation at different temperatures [15].
Unfortunately, such a calibration requires the camera to halt normal operation to update the calibration parameters, owing to the inherent temporal drift of detector characteristics [16]. Given this fact, most recent research has focused on developing SB-FPNR methods, such as neural networks (NN) [17], the temporal high-pass filter (THPF) [18,19], and the constant-statistics (CS) method [20][21]. In SB-FPNR algorithms, the calibration parameters are iteratively updated using information extracted from inter-frame motion. Consequently, ghosting artifacts and over-smoothing effects resulting from sudden deceleration of scene motion often seriously degrade the noise reduction performance; moreover, the relatively slow convergence that occurs during scene switching is unacceptable for most practical applications.

In recent years, convolutional neural network (CNN) [22] models have been explored in depth and applied to various image processing tasks [23], such as image super-resolution [24,25], image denoising [26], and sketch synthesis [27][28][29]. To the best of our knowledge, CNN based FPNR m...
To realize the multi-focus image fusion task, this paper presents an end-to-end deep convolutional neural network (DCNN) model that produces the final fused image directly from the source images. To improve fusion precision, the multi-focus fusion DCNN introduces a multi-scale feature extraction (MFE) unit to collect complementary features from different spatial scales and fuse them to exploit more spatial information. Moreover, a visual attention unit is designed to help the network locate the focused region more accurately and select more useful features for seamlessly splicing details in the fusion process. Experimental results illustrate that the proposed method is superior to several existing multi-focus image fusion methods in both subjective visual effects and objective quality metrics. INDEX TERMS Image fusion, multi-focus, convolution neural network, multi-scale.
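The trained network itself requires a deep learning framework, but the principle it learns, keeping for each region the source image with the higher local focus, can be illustrated with a classical Laplacian-energy choose-max baseline. This is an illustrative stand-in for the idea, not the paper's DCNN:

```python
import numpy as np

def focus_measure(img):
    """Local focus measure: absolute Laplacian response, 3x3 box-averaged.
    Sharp (in-focus) texture yields a large response; defocus blur suppresses it."""
    lap = (4 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0)
           - np.roll(img, 1, 1) - np.roll(img, -1, 1))
    e = np.abs(lap)
    # 3x3 box average to stabilise the per-pixel decision
    return sum(np.roll(np.roll(e, a, 0), b, 1) / 9.0
               for a in (-1, 0, 1) for b in (-1, 0, 1))

def fuse(img_a, img_b):
    """Per-pixel choose-max fusion: keep whichever source is better focused."""
    mask = focus_measure(img_a) >= focus_measure(img_b)
    return np.where(mask, img_a, img_b)

# Demo: two complementary partially-defocused views of one textured scene
rng = np.random.default_rng(3)
tex = rng.normal(0.0, 1.0, (32, 32))
blur = tex
for _ in range(2):                       # crude defocus: repeated 5-point mean
    blur = (blur + np.roll(blur, 1, 0) + np.roll(blur, -1, 0)
            + np.roll(blur, 1, 1) + np.roll(blur, -1, 1)) / 5.0
col = np.arange(32)[None, :]
a = np.where(col < 16, blur, tex)        # left half defocused
b = np.where(col < 16, tex, blur)        # right half defocused
fused = fuse(a, b)
```

The hand-crafted decision mask is exactly what the MFE and attention units learn to replace with a data-driven, softly-blended one.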
Based on the S-curve model of the detector response of infrared focal plane arrays (IRFPAs), an improved two-point correction algorithm is presented. The algorithm first transforms the nonlinear image data into linear data and then applies the standard two-point algorithm to correct the linear data. It effectively overcomes the influence of the nonlinearity of the detector's response, improving the correction precision and enlarging the dynamic range of the response. A real-time imaging-signal-processing system for IRFPAs based on a digital signal processor and field-programmable gate arrays is also presented. The nonuniformity correction capability of the presented solution is validated by experimental imaging with a 128 x 128 pixel IRFPA camera prototype.
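The linearize-then-correct idea can be sketched numerically. Here the S-curve is assumed to be a logistic response (the paper's exact model is not reproduced here), and all symbols (`g`, `o`, `phi`, `vmax`) are illustrative per-pixel gain, offset, incident flux, and saturation level:

```python
import numpy as np

def s_curve(phi, g, o, vmax=1.0):
    """Assumed logistic detector response: y = vmax / (1 + exp(-(g*phi + o)))."""
    return vmax / (1.0 + np.exp(-(g * phi + o)))

def linearize(y, vmax=1.0):
    """Invert the logistic shape; the result depends linearly on g*phi + o."""
    y = np.clip(y, 1e-6, vmax - 1e-6)
    return np.log(y / (vmax - y))

def two_point_correct(y, y_lo, y_hi, phi_lo, phi_hi, vmax=1.0):
    """Classic two-point NUC applied in the linearized domain.
    y_lo / y_hi: per-pixel responses to two blackbody flux levels."""
    z, z_lo, z_hi = linearize(y, vmax), linearize(y_lo, vmax), linearize(y_hi, vmax)
    gain = (phi_hi - phi_lo) / (z_hi - z_lo)   # per-pixel gain estimate
    return phi_lo + gain * (z - z_lo)          # corrected flux estimate

# Simulate a 4x4 array with a non-uniform logistic response
rng = np.random.default_rng(1)
g = rng.uniform(0.8, 1.2, (4, 4))
o = rng.uniform(-0.1, 0.1, (4, 4))
phi_lo, phi_hi = 0.5, 2.0                      # two blackbody calibration fluxes
scene = rng.uniform(0.8, 1.8, (4, 4))
corrected = two_point_correct(s_curve(scene, g, o),
                              s_curve(phi_lo, g, o), s_curve(phi_hi, g, o),
                              phi_lo, phi_hi)
```

Because the linearization makes the data exactly affine in the flux, two calibration points suffice to remove both gain and offset nonuniformity, which is why the method widens the usable dynamic range compared with applying two-point correction to raw nonlinear data.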
Residual nonuniformity response, ghosting artifacts, and over-smoothing effects are the main defects of existing nonuniformity correction (NUC) methods. In this paper, a spatiotemporal feature-based adaptive NUC algorithm with bilateral total variation (BTV) regularization is presented. The primary contributions of the method are as follows: a BTV regularizer is introduced to eliminate the nonuniformity response and suppress ghosting effects; a spatiotemporal adaptive learning rate is presented to further accelerate convergence, remove ghosting artifacts, and avoid over-smoothing; and a random-projection-based bilateral filter is proposed to estimate the desired target image more accurately, which preserves more details of the actual scene. Experimental results validate that the proposed algorithm achieves outstanding performance on both simulated data and real-world sequences. INDEX TERMS Infrared imaging, neural networks, image denoising, infrared image sensors.
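The BTV regularizer referred to here is usually written as a weighted sum of L1 differences over a window of shifts, BTV(x) = Σ_{l,m∈[-p,p]} α^(|l|+|m|) ‖x − S_x^l S_y^m x‖₁. A direct numpy sketch (the paper's exact p and α are not given here, so the defaults are illustrative):

```python
import numpy as np

def btv(x, p=2, alpha=0.7):
    """Bilateral total variation: L1 penalty on differences over a (2p+1)^2
    shift window, geometrically down-weighted by shift distance."""
    total = 0.0
    for l in range(-p, p + 1):
        for m in range(-p, p + 1):
            if l == 0 and m == 0:
                continue
            shifted = np.roll(np.roll(x, l, axis=0), m, axis=1)
            total += alpha ** (abs(l) + abs(m)) * np.abs(x - shifted).sum()
    return total
```

Compared with plain TV (p = 1, nearest neighbours only), the larger shift window penalises medium-scale fixed-pattern structure as well, which is what makes BTV effective against residual nonuniformity while the α decay keeps genuine edges from being over-penalised.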
Many existing scene-adaptive nonuniformity correction (NUC) methods suffer from a slow convergence rate together with ghosting effects. In this paper, an improved NUC algorithm based on total variation penalized neural network regression is presented. Our work mainly focuses on solving the overfitting problem in the least mean square (LMS) regression of traditional neural network NUC methods, which is realized by employing a total variation penalty in the cost function and redesigning the processing architecture. Moreover, an adaptive gated learning rate is presented to further reduce ghosting artifacts and guarantee fast convergence. The performance of the proposed algorithm is comprehensively investigated on artificially corrupted test sequences and real infrared image sequences. Experimental results show that the proposed algorithm effectively accelerates convergence, suppresses ghosting artifacts, and improves correction precision.
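The general shape of such an LMS-with-penalty update can be sketched as follows. This is a heavily simplified stand-in, not the paper's algorithm: the "desired" image is just a 4-neighbour mean of the corrected frame, the TV penalty is reduced to an L1 subgradient on the error, and the gate is a crude outlier test; all names and constants are ours:

```python
import numpy as np

def tv_penalized_nuc_step(y, gain, offset, lr=0.02, lam=0.01):
    """One LMS update of per-pixel gain/offset with an L1 (TV-style) penalty
    and a gated learning rate (illustrative sketch)."""
    x = gain * y + offset                      # corrected frame
    desired = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0
    err = x - desired                          # LMS error term
    grad = err + lam * np.sign(err)            # squared error + L1 penalty
    # Gate: freeze updates where the error is an outlier (likely a true edge),
    # so scene structure is not burned into the correction (anti-ghosting)
    gate = np.abs(err) < np.abs(err).mean() + 2 * np.abs(err).std()
    gain -= lr * gate * grad * y
    offset -= lr * gate * grad
    return gain, offset

# Demo: a periodic scene drifting under random shifts, with additive FPN
rng = np.random.default_rng(2)
n = 32
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
scene0 = np.sin(2 * np.pi * i / n) + np.sin(2 * np.pi * j / n)
fpn = rng.normal(0.0, 0.2, (n, n))             # additive fixed pattern noise
gain, offset = np.ones((n, n)), np.zeros((n, n))
for _ in range(200):
    s = np.roll(scene0, rng.integers(0, n), axis=0)
    s = np.roll(s, rng.integers(0, n), axis=1)
    y = s + fpn
    gain, offset = tv_penalized_nuc_step(y, gain, offset)
residual = np.std(gain * y + offset - s)       # residual nonuniformity
```

Scene motion is what makes the update identifiable: averaged over shifted frames, only the fixed pattern persists in the error term, so the per-pixel parameters converge toward cancelling it while the gate keeps stationary edges from ghosting.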