Aiming at the problems of incomplete dehazing of a single image and the unnatural appearance of restored images, a multi-scale single-image dehazing network that fuses local and global features is proposed; the network is trained in a direct end-to-end manner on pairs of hazy and haze-free images. The network consists of a global feature extraction module, a multi-scale feature extraction module, and a deep fusion module. The global feature extraction module extracts global features that characterize image contours; the multi-scale feature extraction module extracts features at different scales to improve learning accuracy; and in the deep fusion module, convolutional layers extract local features that describe image content, which are then merged with the global features through skip connections. Comparative experiments were carried out on both artificially synthesized and real hazy images. The experimental results show that the proposed algorithm achieves the desired dehazing effect and outperforms the comparison algorithms in both subjective and objective evaluations.
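The skip-connection fusion described above can be illustrated with a toy sketch (not from the paper): the global feature vector is broadcast and concatenated onto every spatial position's local feature vector. The function name and list-based representation are illustrative assumptions, not the authors' implementation.

```python
def fuse_local_global(local_feats, global_feats):
    """Channel-wise concatenation of per-pixel local features with a
    broadcast global feature vector (the skip-connection fusion idea).

    local_feats: H x W grid of per-pixel feature lists.
    global_feats: one feature list shared by the whole image.
    Returns an H x W grid where each pixel's features are extended
    with the global features.
    """
    fused = []
    for row in local_feats:
        # list concatenation appends the global features to each pixel
        fused.append([pixel + global_feats for pixel in row])
    return fused
```

In a real network this concatenation would be followed by further convolutions that learn how to weight local detail against global contour information.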
Existing vehicle detectors suffer from an imbalance between detection accuracy and speed. To address this problem, this paper proposes a new real-time vehicle detection model named YOLOv3 Tiny Vehicle. The proposed network replaces the max-pooling layers of the original network with convolutional layers to preserve the vehicle's feature information to the greatest possible extent. On this basis, a dense connection structure is added to the original network, which greatly reduces or even eliminates overfitting during network training. Experimental results show that the model reaches a mean Average Precision (mAP) of 96.80% on the Beijing Institute of Technology vehicle (BIT-Vehicle) dataset at 188 frames per second (FPS). The results also show that the model has excellent generalization ability.
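The motivation for swapping max-pooling for convolution can be sketched as follows (a minimal illustration, not the paper's network): a 3x3 convolution with stride 2 and zero padding downsamples a feature map to the same size a 2x2 max-pool would, but its kernel weights are learnable, so less feature information need be discarded. The pure-Python implementation below is an assumption for illustration only.

```python
def conv2d_stride2(image, kernel):
    """Downsample a 2-D feature map with a 3x3 convolution, stride 2,
    zero padding 1 -- a learnable alternative to 2x2 max-pooling."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h, 2):          # stride 2 over rows
        row = []
        for j in range(0, w, 2):      # stride 2 over columns
            acc = 0.0
            for di in range(-1, 2):
                for dj in range(-1, 2):
                    y, x = i + di, j + dj
                    if 0 <= y < h and 0 <= x < w:  # zero padding outside
                        acc += image[y][x] * kernel[di + 1][dj + 1]
            row.append(acc)
        out.append(row)
    return out
```

A 4x4 input yields a 2x2 output, matching the spatial reduction of a 2x2 max-pool while keeping trainable parameters in the downsampling step.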
Color quantization is used to obtain an image with the same number of pixels as the original but represented using fewer colors. Most existing color quantization algorithms are based on the Red Green Blue (RGB) color space, and there are few color quantization algorithms for the Hue Saturation Intensity (HSI) color space beyond simple uniform quantization. In this paper, we propose a dichotomy color quantization algorithm for the HSI color space. The proposed algorithm can display images with fewer colors than other quantization methods in the RGB color space. It has three main steps: first, a single-valued monotonic function of the Hue (H) component in the RGB-to-HSI (RGB-HSI) color space conversion is constructed, which avoids the partition calculation of the H component in the conversion; second, an iterative quantization algorithm based on this single-valued monotonic function is proposed; and third, a dichotomy quantization algorithm is proposed to improve the iterative quantization algorithm. Both visual and numerical evaluations reveal that the proposed method produces promising quantization results.
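For background on the conversion the paper builds on, the standard RGB-to-HSI formulas can be sketched as below. Note the piecewise handling of H (the `b > g` branch) is exactly the partition calculation the paper's single-valued monotonic function is designed to avoid; this sketch shows the conventional conversion, not the authors' construction.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert RGB components in [0, 1] to (H in degrees, S, I)
    using the standard textbook formulas."""
    i = (r + g + b) / 3.0
    if i == 0:
        return 0.0, 0.0, 0.0          # black: hue and saturation undefined
    s = 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                        # achromatic: hue undefined, use 0
    else:
        # clamp guards against tiny floating-point overshoot of [-1, 1]
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:                      # partition: hue lies in (180, 360)
            h = 360.0 - h
    return h, s, i
```

For example, pure red maps to H = 0, pure green to H = 120, and pure blue to H = 240.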