Make and model recognition (MMR) of vehicles plays an important role in automatic vision-based systems. This paper proposes a novel deep learning approach to MMR using the SqueezeNet architecture. The frontal views of vehicle images are first extracted and fed into a deep network for training and testing. A variant of the vanilla SqueezeNet with bypass connections between the Fire modules is employed in this study, which makes our MMR system more efficient. Experimental results on our collected large-scale vehicle datasets indicate that the proposed model achieves a 96.3% recognition rate at rank-1 with an economical inference time of 108.8 ms. For inference, the deployed deep model requires less than 5 MB of storage and is therefore highly viable for real-time applications.
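The abstract names the key building block — a Fire module (a 1×1 "squeeze" convolution followed by parallel 1×1 and 3×3 "expand" convolutions) wrapped with an identity bypass. The sketch below is a minimal numpy illustration of that structure, not the paper's trained network; all channel counts and weights are toy values, and the simple bypass requires the module's output channels to match its input channels.

```python
import numpy as np

def conv1x1(x, w):
    # x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)
    return np.einsum('oi,ihw->ohw', w, x)

def conv3x3(x, w):
    # x: (C_in, H, W), w: (C_out, C_in, 3, 3), zero padding -> (C_out, H, W)
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((w.shape[0], H, W))
    for dy in range(3):
        for dx in range(3):
            out += np.einsum('oi,ihw->ohw', w[:, :, dy, dx],
                             xp[:, dy:dy + H, dx:dx + W])
    return out

def relu(x):
    return np.maximum(x, 0.0)

def fire_bypass(x, w_sq, w_e1, w_e3):
    # Fire module: squeeze 1x1, then parallel expand 1x1 / 3x3, concatenated,
    # with a simple identity bypass added before the final activation.
    s = relu(conv1x1(x, w_sq))
    e = np.concatenate([conv1x1(s, w_e1), conv3x3(s, w_e3)], axis=0)
    return relu(e + x)  # bypass: expand channels must sum to input channels

rng = np.random.default_rng(0)
C, H, W = 8, 6, 6                                # toy sizes (hypothetical)
x = rng.standard_normal((C, H, W))
w_sq = rng.standard_normal((3, C)) * 0.1         # squeeze to 3 channels
w_e1 = rng.standard_normal((4, 3)) * 0.1         # expand 1x1 -> 4 channels
w_e3 = rng.standard_normal((4, 3, 3, 3)) * 0.1   # expand 3x3 -> 4 channels
y = fire_bypass(x, w_sq, w_e1, w_e3)
print(y.shape)  # (8, 6, 6): output shape matches input, enabling the bypass
```

The concatenated expand outputs (4 + 4 channels) match the 8 input channels, which is what makes the elementwise identity bypass well defined at this module.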
The purpose of remote sensing image fusion is to sharpen a low spatial resolution multispectral (MS) image by injecting the detail map extracted from a panchromatic (PAN) image. In this paper, a novel remote sensing image fusion method based on adaptive intensity-hue-saturation (IHS) and a multiscale guided filter is presented. In the proposed method, the intensity component is first obtained adaptively from the upsampled MS image. Unlike traditional IHS-based methods, we then propose a multiscale guided-filter strategy that filters the PAN image to extract richer detail information. Finally, the total detail map is injected into each band of the upsampled MS image by a model-based algorithm to obtain the fused image, in which an improved injection-gains approach is proposed to control the quantity of the injected detail. Experimental results demonstrate that, compared with several state-of-the-art fusion methods, the proposed method provides more spatial information and preserves more spectral information in both subjective and objective evaluations.
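The pipeline above — multiscale guided filtering of the PAN image, summing the per-scale details, and gain-controlled injection into each MS band — can be sketched as follows. This is a toy illustration, not the paper's method: the scale radii, the regularization `eps`, and the constant injection gain `g` are all assumed values (the paper's adaptive intensity and improved injection gains are not reproduced here), and the guided filter is self-guided by the PAN image.

```python
import numpy as np

def box_mean(a, r):
    # Mean over a (2r+1)x(2r+1) window with edge padding.
    H, W = a.shape
    ap = np.pad(a, r, mode='edge')
    out = np.zeros((H, W))
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += ap[dy:dy + H, dx:dx + W]
    return out / (2 * r + 1) ** 2

def guided_filter(guide, p, r, eps):
    # He et al.'s guided filter: q = mean(a) * guide + mean(b), where the
    # per-window linear coefficients a, b come from a ridge regression of p
    # on the guide (eps is the regularization term).
    m_g, m_p = box_mean(guide, r), box_mean(p, r)
    cov_gp = box_mean(guide * p, r) - m_g * m_p
    var_g = box_mean(guide * guide, r) - m_g * m_g
    a = cov_gp / (var_g + eps)
    b = m_p - a * m_g
    return box_mean(a, r) * guide + box_mean(b, r)

rng = np.random.default_rng(0)
pan = rng.random((16, 16))         # toy PAN image
ms_band_up = rng.random((16, 16))  # one upsampled MS band (stand-in)

# Multiscale decomposition: repeatedly smooth with growing radii and
# collect the detail removed at each scale.
details, prev = [], pan
for r in (1, 2, 4):                # hypothetical scale radii
    smoothed = guided_filter(pan, prev, r, eps=1e-3)
    details.append(prev - smoothed)
    prev = smoothed
total_detail = sum(details)        # telescopes to pan minus the coarsest smooth

g = 0.8                            # injection gain (hypothetical constant)
fused_band = ms_band_up + g * total_detail
```

Because the per-scale details telescope, `total_detail` equals the PAN image minus its coarsest smoothed version, so the injected map carries detail from all scales at once.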
INDEX TERMS: Image fusion, multispectral (MS) image, panchromatic (PAN) image, intensity-hue-saturation (IHS) transform, guided filter.
Pan-sharpening aims to sharpen a low spatial resolution multispectral (MS) image by combining it with the spatial detail information extracted from a panchromatic (PAN) image. An effective pan-sharpening method should produce a high spatial resolution MS image while preserving as much spectral information as possible. Unlike traditional intensity-hue-saturation (IHS)- and principal component analysis (PCA)-based multiscale transform methods, a novel pan-sharpening framework based on the matting model (MM) and multiscale transform is presented in this paper. First, we use the intensity component (I) of the MS image as the alpha channel to generate the spectral foreground and background. Then, an appropriate multiscale transform is utilized to fuse the PAN image and the upsampled I component into a fused high-resolution gray image. In the fusion, two dedicated fusion rules are proposed for the low- and high-frequency coefficients in the transform domain. Finally, the high-resolution sharpened MS image is obtained by linearly compositing the fused gray image with the upsampled foreground and background images. To the best of our knowledge, the proposed framework is the first to apply the matting model in the pan-sharpening field. Extensive experiments were conducted on various satellite datasets; the subjective visual and objective evaluation results indicate that the proposed method outperforms the IHS- and PCA-based frameworks, as well as other state-of-the-art pan-sharpening methods, in terms of both spatial quality and spectral preservation.
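The final step described above — linearly compositing the fused gray image with the foreground and background — follows the matting equation, in which each band is an alpha-weighted blend of foreground and background with the fused gray image playing the role of the alpha channel. The sketch below illustrates only that compositing step with random stand-in arrays; how the foreground/background layers and the fused gray image are actually computed in the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, K = 4, 4, 3                # toy image size, 3 spectral bands (hypothetical)
F = rng.random((K, H, W))        # upsampled spectral foreground (stand-in)
B = rng.random((K, H, W))        # upsampled spectral background (stand-in)
fused_gray = rng.random((H, W))  # high-resolution fused gray image (stand-in)

# Matting-model compositing: each sharpened band is
#   alpha * foreground + (1 - alpha) * background,
# with the fused high-resolution gray image acting as the alpha channel.
sharpened = fused_gray[None] * F + (1.0 - fused_gray[None]) * B
print(sharpened.shape)  # (3, 4, 4)
```

Since the alpha values lie in [0, 1], each sharpened pixel is a convex combination of the corresponding foreground and background values, which is how the composite inherits the spectral content of the two layers while taking its spatial detail from the gray image.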