In traditional sensorless control of interior permanent magnet synchronous motors (IPMSMs) in the medium- and high-speed domains, a control strategy based on a sliding-mode observer (SMO) and a phase-locked loop (PLL) is widely applied. A new strategy for IPMSM sensorless control based on an adaptive super-twisting sliding-mode observer and an improved phase-locked loop is proposed in this paper. A super-twisting sliding-mode observer (STO) can eliminate the chattering problem without low-pass filters (LPFs), which is an effective method to obtain the estimated back electromotive forces (EMFs). However, constant sliding-mode gains in the STO may cause instability in the high-speed domain and chattering in the low-speed domain. Speed-related adaptive gains are proposed to achieve accurate estimation by the observer over a wide speed range, and the corresponding stability is proved. When the speed of the IPMSM is reversed, the traditional PLL loses accuracy, resulting in a position estimation error of 180°. An improved PLL based on a simple signal-reconstruction strategy for the back EMF is proposed to ensure that the motor can switch its speed direction stably. The proposed strategy is verified by experimental testing on a 60-kW IPMSM sensorless drive.
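The 180° ambiguity at speed reversal can be illustrated numerically. Under one common sign convention (an assumption here, not a detail from the paper), the back EMFs are e_α = −E sin θ and e_β = E cos θ with amplitude E = ω_e ψ_f, so an arctangent-style PLL recovers θ only while E > 0; multiplying both EMF signals by the sign of the estimated speed restores the correct angle. A minimal Python sketch:

```python
import math

def wrap(angle):
    # wrap an angle difference into (-pi, pi]
    return math.atan2(math.sin(angle), math.cos(angle))

def pll_angle(e_alpha, e_beta):
    # ideal arctangent position estimate from the back EMFs
    return math.atan2(-e_alpha, e_beta)

theta, psi_f = 1.0, 0.1          # true rotor angle (rad) and flux linkage, toy values
for omega_e in (200.0, -200.0):  # forward and reversed electrical speed
    E = omega_e * psi_f          # back-EMF amplitude changes sign with speed
    e_a, e_b = -E * math.sin(theta), E * math.cos(theta)
    naive = pll_angle(e_a, e_b)           # off by pi when E < 0
    s = 1.0 if omega_e >= 0.0 else -1.0   # sign taken from the speed estimate
    fixed = pll_angle(s * e_a, s * e_b)   # reconstructed EMF signals
```

After the reversed-speed iteration, `naive` differs from `theta` by exactly π while `fixed` matches it, which is the error mode the improved PLL removes.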
Rice bacterial leaf streak (BLS) is a serious disease of rice leaves that can severely reduce both the quality and the yield of rice. Automatic estimation of disease severity is a crucial requirement in agricultural production. To address this, a new method (termed BLSNet) is proposed for rice BLS leaf-lesion recognition and segmentation, based on the UNet semantic-segmentation network. An attention mechanism and multi-scale feature extraction were integrated into BLSNet to improve the accuracy of lesion segmentation. We compared the performance of the proposed network with that of DeepLabv3+ and UNet as benchmark semantic-segmentation models, and found that BLSNet achieved higher segmentation and class accuracy. A preliminary investigation of BLS disease-severity estimation based on our segmentation results suggests that BLSNet has strong potential as a reliable automatic estimator of BLS disease severity.
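The attention mechanism mentioned above reweights feature channels so the network emphasizes lesion-relevant responses. BLSNet's actual attention uses learned parameters; the toy sketch below (pure Python, squeeze-and-excitation style with an identity excitation instead of a learned layer) only illustrates the squeeze-gate-scale pattern:

```python
import math

def channel_attention(fmap):
    """Squeeze-and-excitation style channel attention (minimal sketch).

    fmap: list of C channels, each an HxW list of row lists.
    """
    # squeeze: global average pool per channel
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in fmap]
    # excite: gate each channel with a sigmoid of its pooled response
    # (a real block passes z through learned fully connected layers first)
    w = [1.0 / (1.0 + math.exp(-zi)) for zi in z]
    # scale: reweight every pixel of each channel by its gate
    return [[[v * wi for v in row] for row in ch] for ch, wi in zip(fmap, w)]
```

Channels with stronger pooled activation keep more of their signal, while weakly activated channels are suppressed.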
The conventional method for crop insect detection, based on visual inspection of the field, is time-consuming, laborious, subjective, and error-prone. Early detection and accurate localization of agricultural insect pests can significantly improve the effectiveness of pest control and reduce its costs, which has become an urgent demand in crop production. Spodoptera frugiperda (fall armyworm) is a migratory agricultural pest that has severely decreased the yield of maize, rice, and other crops worldwide. To monitor occurrences of S. frugiperda in maize in a timely manner, an end-to-end detection model termed the Pest Region-CNN (Pest R-CNN) is proposed based on the Faster Region-CNN (Faster R-CNN) model. Pest R-CNN detects the feeding traces left on maize leaves by S. frugiperda. The proposed model was trained and validated using high-spatial-resolution red–green–blue (RGB) ortho-images acquired by an unmanned aerial vehicle (UAV). On the basis of feeding severity, the degree of S. frugiperda infestation was classified into four classes: juvenile, minor, moderate, and severe. The severity and specific feeding locations of S. frugiperda infestation can be determined and depicted as bounding boxes by the proposed model. A mean average precision (mAP) of 43.6% was achieved on the test dataset, showing the great potential of deep-learning object detection in pest monitoring. Compared with the Faster R-CNN and YOLOv5 models, the detection accuracy of the proposed model increased by 12% and 19%, respectively. Further ablation studies showed the effectiveness of channel and spatial attention, group convolution, deformable convolution, and the multi-scale aggregation strategy in improving detection accuracy. These object-detection design methods could provide a reference for other research.
This is the first step in applying deep-learning object detection to S. frugiperda feeding traces, enabling high-spatial-resolution RGB images obtained by UAVs to be applied to the detection of S. frugiperda infestation. The proposed model will benefit S. frugiperda pest-stress monitoring and help realize precision pest control.
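The mAP figure reported above is built from intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal sketch of IoU and per-image precision at an IoU threshold (the `[x1, y1, x2, y2]` box format and 0.5 threshold are the usual conventions, not details taken from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_at_iou(preds, gts, thresh=0.5):
    """Fraction of predicted boxes that match a not-yet-used ground-truth box."""
    matched, used = 0, set()
    for p in preds:
        for i, g in enumerate(gts):
            if i not in used and iou(p, g) >= thresh:
                matched += 1
                used.add(i)
                break
    return matched / len(preds) if preds else 0.0
```

Full mAP additionally ranks predictions by confidence and averages precision over recall levels and classes, but the greedy matching above is the core operation.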
The leaf area index (LAI) is of great significance for crop growth monitoring. Recently, unmanned aerial systems (UASs) have developed rapidly and can provide critical data support for crop LAI monitoring. This study investigates the effect of combining spectral and texture features extracted from UAS multispectral imagery on maize LAI estimation. Multispectral images and in situ maize LAI were collected from test sites in Tongshan, Xuzhou, Jiangsu Province, China. The spectral and texture features of the UAS multispectral images were extracted using vegetation indices (VIs) and the gray-level co-occurrence matrix (GLCM), respectively. Normalized difference texture indices (NDTIs), ratio texture indices (RTIs), and difference texture indices (DTIs) were calculated from pairs of GLCM-based textures to capture the joint influence of two texture features on LAI monitoring. The remote sensing features were prescreened through correlation analysis. Different dimensionality reduction or feature selection methods, including stepwise selection (ST), principal component analysis (PCA), and ST combined with PCA (ST_PCA), were coupled with support vector regression (SVR), random forest (RF), and multiple linear regression (MLR) to build the maize LAI estimation models. The results reveal that ST_PCA coupled with SVR performs best, for both VIs + DTIs (R2 = 0.876, RMSE = 0.239) and VIs + NDTIs (R2 = 0.877, RMSE = 0.236). This study demonstrates the potential of different texture indices for maize LAI monitoring and shows that ST_PCA is a promising way to combine spectral and texture features for improving the estimation accuracy of maize LAI.
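A GLCM texture is a statistic over co-occurring gray-level pairs, and the two-texture indices follow the standard normalized-difference, ratio, and difference patterns their names imply (the study's exact definitions may differ). A minimal sketch of a horizontal-offset GLCM contrast and the three index forms:

```python
from collections import Counter

def glcm_contrast(img):
    """Contrast of a horizontal-offset (0, 1) GLCM for a small gray-level image."""
    pairs = Counter()
    for row in img:
        for a, b in zip(row, row[1:]):  # co-occurring horizontal neighbors
            pairs[(a, b)] += 1
    total = sum(pairs.values())
    # contrast = sum over pairs of P(i, j) * (i - j)^2
    return sum(n * (i - j) ** 2 for (i, j), n in pairs.items()) / total

# two-texture indices, formulas assumed from their names
def ndti(t1, t2):  # normalized difference texture index
    return (t1 - t2) / (t1 + t2)

def rti(t1, t2):   # ratio texture index
    return t1 / t2

def dti(t1, t2):   # difference texture index
    return t1 - t2
```

In the study, `t1` and `t2` would be two GLCM-derived textures (e.g., from different bands or measures), giving index families analogous to the spectral VIs.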
Accurate prediction of food crop yield is of great significance for global food security and regional trade stability. Since remote sensing data collected from unmanned aerial vehicle (UAV) platforms are flexible to acquire and high in resolution, they can be used as samples to develop regional regression models for accurate prediction of crop yield at the field scale. The primary objective of this study was to construct regional prediction models for winter wheat yield based on multi-spectral UAV data and machine learning methods. Six machine learning methods, including Gaussian process regression (GPR), support vector machine regression (SVR), and random forest regression (RFR), were used to construct the yield prediction models. Ten vegetation indices (VIs) extracted from canopy spectral images of winter wheat, acquired from a multi-spectral UAV at five key growth stages in Xuzhou City, Jiangsu Province, China in 2021, were selected as the model variables. In addition, in situ measurements of wheat yield were obtained by destructive sampling for model calibration and validation. For single growth stages, the optimal model was GPR built from extremely strongly correlated VIs (ESCVIs) at the filling stage (R2 = 0.87, RMSE = 49.22 g/m2, MAE = 42.74 g/m2). For multiple stages, GPR achieved the highest accuracy (R2 = 0.88, RMSE = 49.18 g/m2, MAE = 42.57 g/m2) when the ESCVIs of the flowering and filling stages were used. Larger sampling plots were adopted to verify the prediction accuracy; the results indicated that the GPR model adapts well across scales. These findings suggest that machine learning methods combined with multi-spectral UAV data can accurately predict crop yield at the field scale and provide a valuable reference for farm-scale crop management.
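Gaussian process regression, the best performer here, predicts via the posterior mean k_*^T (K + σ²I)^{-1} y for a chosen kernel. A minimal self-contained sketch of that mechanics (toy 1-D data standing in for the VI features; the study's kernel and hyperparameters are not specified in the abstract):

```python
import math

def rbf(x1, x2, length=1.0):
    # squared-exponential (RBF) kernel
    return math.exp(-((x1 - x2) ** 2) / (2.0 * length ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gpr_predict(X, y, x_star, noise=1e-6):
    # posterior mean: k_*^T (K + noise * I)^{-1} y
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(X)] for i, a in enumerate(X)]
    alpha = solve(K, y)
    return sum(rbf(x_star, xi) * ai for xi, ai in zip(X, alpha))

X = [0.0, 1.0, 2.0]
y = [math.sin(x) for x in X]  # stand-in for (VI, yield) training pairs
y_hat = gpr_predict(X, y, 1.5)
```

The prediction at 1.5 lands close to sin(1.5), illustrating how GPR interpolates smoothly between training samples; in the study the inputs are VI vectors and the targets are measured yields.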