Because bacterial blight (BB) disease seriously affects the yield and quality of rice, breeding BB-resistant rice is an important priority for plant breeders, but the process is time-consuming. The feasibility of using terahertz imaging and near-infrared hyperspectral imaging to identify BB-resistant seeds was therefore studied. The two-dimensional (2D) spectral images and one-dimensional (1D) spectra provided by both imaging methods were used to build discriminant models based on a deep learning method, the convolutional neural network (CNN), and on traditional machine learning methods: support vector machine (SVM), random forest (RF), and partial least squares discriminant analysis (PLS-DA). The highest classification accuracy was achieved by the CNN-based discriminant model using the terahertz absorption spectra. Confusion matrices were plotted to show the identification details, and t-distributed stochastic neighbor embedding (t-SNE) was used to visualize how the CNN transforms the data. Terahertz imaging combined with CNN has great potential to rapidly identify BB-resistant rice seeds and is more accurate than near-infrared hyperspectral imaging.
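The t-SNE visualization step described above can be sketched as follows. This is a minimal, hypothetical example: the 64-dimensional "CNN features" are synthetic stand-ins for activations extracted from a trained network layer, and the two clusters play the role of resistant and susceptible seed classes.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Two simulated classes of 64-dim "CNN features" (resistant vs. susceptible)
feats = np.vstack([rng.normal(0.0, 1.0, (50, 64)),
                   rng.normal(3.0, 1.0, (50, 64))])
labels = np.array([0] * 50 + [1] * 50)

# Embed the features into 2D for plotting; perplexity must be < n_samples
emb = TSNE(n_components=2, perplexity=20, random_state=0).fit_transform(feats)
print(emb.shape)  # (100, 2)
```

Coloring the 2D embedding by `labels` then shows how well the learned features separate the classes at a given network layer.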
The feasibility of using Fourier transform infrared (FTIR) spectroscopy with a stacked sparse auto-encoder (SSAE) to identify orchid varieties was studied. Spectral data of 13 orchid varieties covering the range 4000–550 cm−1 were acquired to establish discriminant models and to select optimal spectral variables. K-nearest neighbors (KNN), support vector machine (SVM), and SSAE models were built using the full spectra. The SSAE model performed better than the KNN and SVM models, achieving a classification accuracy of 99.4% in the calibration set and 97.9% in the prediction set. Three algorithms, principal component analysis loading (PCA-loading), competitive adaptive reweighted sampling (CARS), and stacked sparse auto-encoder guided backward (SSAE-GB), were then used to select 39, 300, and 38 optimal wavenumbers, respectively, and KNN and SVM models were built on those wavenumbers. Most of the models based on optimal wavenumbers performed slightly better than those based on all wavenumbers. SSAE-GB outperformed the other two algorithms both in the accuracy of the discriminant models and in the number of optimal wavenumbers required. These results show that FTIR spectroscopy combined with the SSAE algorithm can be adopted for identifying orchid varieties.
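Of the three variable-selection algorithms named above, PCA-loading is the simplest to illustrate: wavenumbers whose absolute loadings on the leading principal components are largest are kept as the informative variables. The sketch below uses synthetic spectra with a known informative region; it is not the paper's actual data or selection count.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Synthetic FTIR-like spectra: 60 samples x 200 wavenumber variables,
# with columns 40-45 sharing a strong common signal (the "informative" band)
X = rng.normal(0.0, 0.1, (60, 200))
X[:, 40:46] += rng.normal(0.0, 1.0, (60, 1))

pca = PCA(n_components=3).fit(X)
# Score each wavenumber by its largest absolute loading over the first PCs
importance = np.abs(pca.components_).max(axis=0)
top = np.argsort(importance)[::-1][:10]  # ten highest-scoring wavenumbers
print(sorted(top.tolist()))
```

The classifier (KNN or SVM) is then retrained on only the selected columns of `X`.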
Near-infrared (874–1734 nm) hyperspectral imaging combined with chemometrics was used to identify parental and hybrid okra seeds. A total of 1740 okra seeds of three varieties, the male parent xiaolusi, the female parent xianzhi, and the hybrid penzai, were collected, and all samples were randomly divided into a calibration set and a prediction set in a ratio of 2:1. Principal component analysis (PCA) was applied to explore the separability of the different seeds based on their spectral characteristics. Fourteen and 86 characteristic wavelengths were extracted using the successive projections algorithm (SPA) and competitive adaptive reweighted sampling (CARS), respectively; another 14 characteristic wavelengths were extracted using CARS combined with SPA. Partial least squares discriminant analysis (PLS-DA) and support vector machine (SVM) models were developed based on the characteristic wavelengths and on the full-band spectra. The experimental results showed that the SVM discriminant model worked well, with a correct recognition rate above 93.62% on full-band spectra. Among the models based on characteristic wavelengths, the SVM model using the CARS-selected wavelengths was better than the other two. Combining the CARS+SVM calibration model with image processing, a pseudo-color map of the sample predictions was generated, which intuitively identifies the variety of each okra seed. The whole process provides a new approach for the rapid screening and identification of hybrid okra seeds in agricultural breeding.
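The SPA step mentioned above greedily picks wavelengths with minimal collinearity: each new wavelength is the one whose column has the largest norm after projecting out the wavelengths already chosen. The following is a minimal sketch of that idea on synthetic spectra, not a reproduction of the paper's implementation or its 14 selected bands.

```python
import numpy as np

def spa_select(X, k, start=0):
    """Minimal successive projections algorithm (SPA) sketch:
    greedily selects k columns of X, each maximizing the norm of its
    component orthogonal to the columns already selected."""
    Xp = X.astype(float).copy()
    selected = [start]
    for _ in range(k - 1):
        ref = Xp[:, selected[-1]].copy()
        # Project every column onto the orthogonal complement of the last pick
        Xp -= np.outer(ref, (ref @ Xp) / (ref @ ref))
        norms = np.linalg.norm(Xp, axis=0)
        norms[selected] = -1.0  # never re-pick an already chosen column
        selected.append(int(np.argmax(norms)))
    return selected

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 50))  # 30 synthetic spectra x 50 wavelengths
print(spa_select(X, 5))
```

In practice SPA is run from several starting wavelengths and the subset giving the lowest validation error is kept.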
Background
Rice bacterial blight (BB) causes serious damage to rice yield and quality, leading to heavy economic losses and food security problems. Breeding disease-resistant cultivars is the most eco-friendly and effective way to control outbreaks, since it restrains the propagation of the pathogenic bacteria. However, BB-resistant cultivar selection suffers from high labor costs, low efficiency, and subjective human error, and dynamic rice BB phenotyping studies that explore the pattern of BB lesion growth across different genotypes are lacking.
Results
In this paper, with the aim of relieving the labor burden of plant breeding experts in resistant-cultivar screening and exploring how disease-resistance phenotypes vary, visible/near-infrared (VIS–NIR) hyperspectral images of rice leaves from three varieties were collected after inoculation and fed into a self-built deep learning model, LPnet, for disease severity assessment. The growth of BB lesions over time was fully revealed. Using the attention mechanism inside LPnet, the spectral features most informative about lesion proportion were further extracted and combined into a novel, refined leaf spectral index. The effectiveness and feasibility of the proposed wavelength combination were verified by identifying the resistant cultivar, assessing resistance ability, and visualizing the spectral images.
Conclusions
This study illustrates that informative VIS–NIR spectra coupled with attention-based deep learning have great potential not only to assess disease severity directly but also to extract spectral characteristics for rapidly screening disease-resistant cultivars in high-throughput phenotyping.
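A "leaf spectral index" built from a small wavelength combination is typically a simple arithmetic contrast between band reflectances. The sketch below shows one common form, a normalized-difference index; the band positions here are illustrative placeholders, not the actual wavelengths selected by LPnet's attention mechanism, which the abstract does not specify.

```python
import numpy as np

def normalized_difference_index(spectrum, i, j):
    """Generic normalized-difference index between two band reflectances.
    Bands (i, j) are hypothetical placeholders for a selected wavelength
    combination; the result always lies in [-1, 1] for positive reflectance."""
    a, b = spectrum[i], spectrum[j]
    return (a - b) / (a + b)

# Synthetic leaf reflectance spectrum over 100 bands
rng = np.random.default_rng(3)
spectrum = rng.uniform(0.1, 0.9, 100)
idx = normalized_difference_index(spectrum, 80, 20)
print(round(float(idx), 4))
```

Applying such an index pixel-wise to a hyperspectral image yields the kind of lesion-highlighting visualization the abstract describes.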
Object detection methods based on deep learning convolutional neural networks (CNNs) significantly improve the detection of wheat heads in images captured near the ground. However, for wheat head images at different growth stages, with high density and heavy overlap, captured at aerial scale by an unmanned aerial vehicle (UAV), existing deep learning-based object detection methods often perform poorly. Because the receptive field of a CNN is usually small, it is not well suited to capturing global features, whereas the vision Transformer can capture global information across an image; we therefore introduce the Transformer to improve detection and reduce network computation. Three Transformer-based object detection networks are designed and developed: the two-stage method FR-Transformer and the one-stage methods R-Transformer and Y-Transformer. Compared with various prevalent CNN-based object detection methods, our FR-Transformer achieves the best results, reaching 88.3% AP50 and 38.5% AP75. The experiments show that the FR-Transformer can, to a certain extent, satisfy the requirements of rapid and precise detection of wheat heads by UAV in the field, providing a reliable reference for further estimation of wheat yield.
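The AP50 and AP75 metrics quoted above count a predicted box as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5 or 0.75, respectively. A minimal IoU computation for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Overlap 8x10 = 80, union 100 + 100 - 80 = 120 -> IoU = 2/3
pred, gt = (0, 0, 10, 10), (2, 0, 12, 10)
print(iou(pred, gt))  # 0.666... -> counts for AP50 but not AP75
```

Average precision then averages precision over recall levels, with this IoU threshold deciding true versus false positives; the 0.5 and 0.75 thresholds give AP50 and AP75.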