In planetary science, recognizing and classifying topographic and geomorphological features in massive planetary remote-sensing datasets is fundamental work. This paper therefore proposes a lightweight model based on VGG-16 that selectively extracts salient features from remote-sensing images, removes redundant information, and classifies the images. The model reduces the number of parameters while preserving accuracy. In our experiments, the model substantially improved remote-sensing image classification, raising accuracy from a baseline of 85% to 98%, and also converged markedly faster. When remote-sensing images of very low resolution (64 × 64) with few feature points were fed into the model, it still achieved a high accuracy of 95%. The model therefore has good application prospects for fine-grained classification of remote-sensing images, including very-low-resolution imagery with sparse features.
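The abstract does not publish the slimmed architecture, so the following sketch only illustrates the kind of parameter saving a lightweight VGG-16 variant can offer: it counts the weights of stock VGG-16 and of a hypothetical slimmed configuration (halved channel widths, one small fully connected head). The "SLIM" configuration and its head sizes are assumptions, not the paper's design.

```python
# Parameter-count sketch: stock VGG-16 vs. a hypothetical slimmed variant.
# The paper does not disclose its exact architecture; the SLIM config below
# (halved channel widths, one small FC layer) is an illustrative assumption.

def count_params(conv_cfg, fc_sizes, in_channels=3, num_classes=1000,
                 final_spatial=7):
    """Count weights + biases for 3x3 convs followed by fully connected layers."""
    total, c_in = 0, in_channels
    for c_out in conv_cfg:
        if c_out == "M":            # max-pool layer: no parameters
            continue
        total += 3 * 3 * c_in * c_out + c_out
        c_in = c_out
    feat = c_in * final_spatial * final_spatial
    for width in fc_sizes + [num_classes]:
        total += feat * width + width
        feat = width
    return total

VGG16 = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
         512, 512, 512, "M", 512, 512, 512, "M"]
SLIM  = [32, 32, "M", 64, 64, "M", 128, 128, 128, "M",
         256, 256, 256, "M", 256, 256, 256, "M"]   # assumed variant

full = count_params(VGG16, [4096, 4096])
slim = count_params(SLIM, [512], num_classes=10)
print(f"VGG-16: {full:,} params, slim sketch: {slim:,} params")
```

Most of VGG-16's roughly 138 M parameters sit in its first fully connected layer, which is why lightweight variants typically shrink both the channel widths and the classifier head.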
In this study, tunable diode laser absorption spectroscopy (TDLAS) combined with wavelength modulation spectroscopy (WMS) was used to develop a trace C2H2 sensor based on the principle of gas absorption spectroscopy. The core of this sensor is an interband cascade laser whose wavelength is locked to the optimal absorption line of C2H2 at 3305 cm−1 (3026 nm) by controlling the driving current and operating temperature. Because the detected signal is affected by 1/f noise from the laser and external environmental factors, TDLAS-WMS was used to suppress the 1/f noise effectively and thereby improve the minimum detection limit (MDL). Experimental results using C2H2 gas at five different concentrations show a good linear relationship between the peak value of the second-harmonic signal and the gas concentration, with a linearity of 0.9987 and a detection accuracy of 0.4%. A 1 ppmv C2H2 gas sample was used for a 2 h observation experiment, and the data show that the MDL is as low as 1 ppbv at an integration time of 63 s. In addition, the sensor can be adapted to detect a variety of gases by changing the laser wavelength, demonstrating its flexibility and practicality.
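The calibration step described above can be sketched numerically: fit the second-harmonic (2f) peak amplitude against the known calibration concentrations, then invert the line to read out an unknown sample. The five (concentration, peak) pairs below are invented for illustration; only on its own data does the paper report a linearity of 0.9987.

```python
# Sketch of the WMS-2f calibration: a linear fit of second-harmonic peak
# amplitude vs. known C2H2 concentration, inverted to retrieve an unknown
# concentration. All numbers here are hypothetical.
import numpy as np

conc_ppmv = np.array([1.0, 2.0, 5.0, 10.0, 20.0])          # calibration gases
peak_2f   = np.array([0.021, 0.040, 0.101, 0.199, 0.402])  # made-up peaks

slope, intercept = np.polyfit(conc_ppmv, peak_2f, deg=1)

# Coefficient of determination (R^2) as the linearity measure.
pred = slope * conc_ppmv + intercept
ss_res = np.sum((peak_2f - pred) ** 2)
ss_tot = np.sum((peak_2f - peak_2f.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

def peak_to_concentration(peak):
    """Invert the calibration line to retrieve concentration (ppmv)."""
    return (peak - intercept) / slope

print(f"R^2 = {r_squared:.4f}")
```

In practice the MDL quoted in the abstract would come from an Allan-deviation analysis of a long time series like the 2 h observation, not from the calibration fit itself.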
The accurate and rapid acquisition of crop and weed information is an important prerequisite for automated weeding operations. This paper proposes the application of a network model based on Faster R-CNN for weed identification in images of cropping areas. The feature pyramid network (FPN) algorithm is integrated into the Faster R-CNN network to improve recognition accuracy. The Faster R-CNN deep learning network model is used to share convolution features, and the ResNeXt network is fused with FPN for feature extraction. Tests using >3000 images for training and >1000 images for testing demonstrate a recognition accuracy of >95%. The proposed method can effectively detect weeds in images with complex backgrounds taken in the field, thereby facilitating accurate automated weed control systems.
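The FPN fusion mentioned above combines each backbone stage with the coarser pyramid level above it via a 1×1 lateral projection and 2× upsampling. The following NumPy sketch shows one such top-down merge step; channel counts, map sizes, and the random weights are illustrative only, not taken from the paper.

```python
# Minimal sketch of one FPN top-down merge step: a 1x1 lateral projection of
# a backbone feature map plus 2x nearest-neighbour upsampling of the coarser
# pyramid level, implemented in NumPy. Shapes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def fpn_merge(c_lower, p_upper, w_lateral):
    """Merge a backbone map with the coarser pyramid level above it.

    c_lower:   (C_in, H, W) backbone feature map (e.g. a ResNeXt stage).
    p_upper:   (C_fpn, H//2, W//2) pyramid map from the level above.
    w_lateral: (C_fpn, C_in) weights of the 1x1 lateral convolution.
    """
    lateral = np.einsum("oc,chw->ohw", w_lateral, c_lower)   # 1x1 conv
    upsampled = p_upper.repeat(2, axis=1).repeat(2, axis=2)  # nearest x2
    return lateral + upsampled

c4 = rng.standard_normal((1024, 16, 16))   # backbone stage output
p5 = rng.standard_normal((256, 8, 8))      # coarser pyramid level
w = rng.standard_normal((256, 1024)) * 0.01

p4 = fpn_merge(c4, p5, w)
print(p4.shape)
```

Because every pyramid level carries the same channel depth after the lateral projection, the region proposal network can then be run on all levels with shared weights, which is what lets FPN help detect both small and large weeds.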
Individual cow identification is a prerequisite for intelligent dairy farming management, and is important for achieving accurate and informative dairy farming. Computer vision-based approaches are widely considered because of their non-contact and practical advantages. In this study, a method combining the Ghost module and an attention mechanism is proposed to improve ResNet50 and achieve non-contact individual recognition of cows. In the model, coarse-grained features of cows are extracted using the large receptive field of dilated (atrous) convolution, which also reduces the number of model parameters to some extent. ResNet50 consists of two Bottleneck blocks with different structures, and a plug-and-play Ghost module is inserted between them to reduce the number of parameters and the computational cost of the model using cheap linear operations, without shrinking the feature maps. In addition, the convolutional block attention module (CBAM) is introduced after each stage of the model to help it assign different weights to each part of the input and extract the more critical information. In our experiments, side-view images of 13 cows were collected to train the model. The final recognition accuracy was 98.58%, which is 4.8 percentage points higher than that of the original ResNet50; the number of model parameters was reduced by a factor of 24.85, and the model size was only 3.61 MB. To verify the validity of the model, it was compared with other networks, and the results show that our model has good robustness. This research overcomes the shortcoming of traditional recognition methods that require manual feature extraction, and provides a theoretical reference for further animal recognition research.
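The parameter saving of the Ghost module described above comes from generating only part of the output channels with an ordinary convolution and the rest with cheap depthwise operations. A back-of-the-envelope sketch, with illustrative channel and kernel sizes (not the paper's exact configuration):

```python
# Sketch of why a Ghost module shrinks a conv layer: the primary convolution
# produces c_out/s "intrinsic" channels, and (s-1) cheap d x d depthwise ops
# generate the remaining "ghost" feature maps. Sizes here are illustrative.

def conv_params(c_in, c_out, k=3):
    """Weights of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k=3, d=3, s=2):
    """Ghost module with ratio s: primary conv + cheap depthwise ops."""
    m = c_out // s                  # intrinsic channels
    primary = c_in * m * k * k      # ordinary convolution
    cheap = (s - 1) * m * d * d     # depthwise: one d x d filter per channel
    return primary + cheap

standard = conv_params(256, 256)
ghost = ghost_params(256, 256)
print(f"standard: {standard:,}  ghost: {ghost:,}  "
      f"ratio: {standard / ghost:.2f}x")
```

With the default ratio s = 2, the module roughly halves the weights of each replaced convolution; the paper's overall 24.85× reduction would additionally reflect the dilated convolutions and other architectural changes it describes.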