Multifocus image fusion merges images of the same scene captured with different focal settings into a single all-in-focus image. Most existing fusion algorithms extract high-frequency information with hand-designed local filters and then apply different fusion rules to obtain the fused image. In this paper, a wavelet transform is used for multiscale decomposition of the source and fused images into high-frequency and low-frequency subimages. To obtain clearer and more complete fused images, a deep convolutional neural network is used to learn the direct mapping between the high-frequency and low-frequency subimages of the source images and those of the fused image: two convolutional networks are trained, one to encode the high-frequency subimages and one to encode the low-frequency subimages. The experimental results show that the proposed method obtains satisfactory fused images, superior to those produced by several state-of-the-art image fusion algorithms in both visual and objective evaluations.
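The wavelet pipeline described above can be sketched in a classical, non-learned form: decompose each source image with a single-level Haar wavelet, fuse the low-frequency subband by averaging and the high-frequency subbands by keeping the coefficient with the larger magnitude, then invert the transform. The paper replaces these fixed fusion rules with learned CNN mappings; the rules below are a hedged stand-in for illustration only.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar decomposition of an even-sized image."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4   # low-frequency (approximation)
    LH = (a - b + c - d) / 4   # horizontal detail
    HL = (a + b - c - d) / 4   # vertical detail
    HH = (a - b - c + d) / 4   # diagonal detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (exact reconstruction)."""
    h, w = LL.shape
    out = np.zeros((2 * h, 2 * w))
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL - LH + HL - HH
    out[1::2, 0::2] = LL + LH - HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out

def fuse_wavelet(img1, img2):
    """Classical wavelet fusion: average low-frequency subbands,
    pick the larger-magnitude coefficient in high-frequency subbands."""
    c1, c2 = haar_dwt2(img1), haar_dwt2(img2)
    LL = (c1[0] + c2[0]) / 2
    highs = [np.where(np.abs(h1) >= np.abs(h2), h1, h2)
             for h1, h2 in zip(c1[1:], c2[1:])]
    return haar_idwt2(LL, *highs)
```

Fusing an image with itself reproduces the image exactly, which is a quick sanity check that the transform pair is lossless.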
Automatic crack detection is challenging because cracks have poor continuity and varying widths, and the contrast between cracks and the surrounding pavement is low. In this paper, a deep convolutional neural network called CurSeg is proposed that achieves pixelwise segmentation of cracks in an end-to-end manner. In this approach, features at different scales are fused to capture contextual information about the cracks. The carefully designed model effectively suppresses the propagation of noise and further refines crack features by aggregating multiscale and multilevel features from low level to high level. A residual detail attention (RDA) module is also introduced to better capture the linear structure of cracks and accurately locate crack positions in complex contexts, making the network more discriminative and robust. CurSeg is evaluated on four challenging datasets to validate its effectiveness, and the experimental results demonstrate that the method achieves state-of-the-art performance on all four.
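The residual-plus-attention pattern behind a "residual detail attention" module can be illustrated with a minimal fixed-filter analogue: extract a high-frequency detail residual, turn it into a sigmoid attention map, and add the gated detail back to the input. The paper's RDA module is learned end-to-end; in this sketch a box filter stands in for the learned convolutions, so it only shows the structure, not the actual module.

```python
import numpy as np

def box_filter(x, r=1):
    """Mean filter over a (2r+1) x (2r+1) window with edge padding."""
    pad = np.pad(x, r, mode='edge')
    out = np.zeros_like(x)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (2 * r + 1) ** 2

def residual_detail_attention(x):
    """Fixed-filter analogue of a residual attention block:
    y = x + detail * sigmoid(detail), with detail = x - smooth(x)."""
    detail = x - box_filter(x)              # high-frequency residual
    gate = 1.0 / (1.0 + np.exp(-detail))    # sigmoid attention map
    return x + detail * gate                # residual connection
```

On flat regions the detail residual is zero and the block passes the input through unchanged, while thin high-contrast structures (like cracks) are amplified by the gated detail term.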
Remote sensing satellites provide large numbers of multispectral images. However, due to the limitations of the optical sensors on board, the spatial resolution of multispectral images is relatively low. Pansharpening aims to combine a high-resolution panchromatic image with a low-resolution multispectral image to generate a high-resolution multispectral image. In this paper, we propose a pansharpening method based on the component substitution framework. We use fractional-order differential operators and a guided filter to balance the spectral distortion and spatial information loss that occur during remote sensing image fusion. Fractional-order differentiation yields a better-defined detail map, and the guided filter enhances the spectral consistency of the detail map. Experiments show that the method proposed in this paper better combines spectral and spatial information and obtains satisfactory results in both subjective visual perception and objective evaluation.
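The component substitution framework the method builds on can be sketched as: form an intensity component from the upsampled multispectral bands, take the difference between the panchromatic image and that intensity as the detail map, and inject the detail into every band. The fractional-order differentiation and guided-filter refinement of the detail map are the paper's contribution and are omitted here; this is a minimal baseline sketch with assumed array shapes.

```python
import numpy as np

def cs_pansharpen(ms_up, pan):
    """Basic component-substitution pansharpening.

    ms_up: (H, W, B) multispectral image upsampled to PAN resolution.
    pan:   (H, W) panchromatic image (ideally histogram-matched to
           the intensity component beforehand)."""
    intensity = ms_up.mean(axis=2)      # simple intensity component
    detail = pan - intensity            # spatial detail map
    return ms_up + detail[..., None]    # inject detail into every band
```

When the panchromatic image equals the intensity component, the detail map is zero and the multispectral image is returned unchanged, which shows that all injected spatial information comes from the PAN-minus-intensity difference.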