Abstract. Current approaches to field phenotyping are laborious or permit the use of only a few sensors at a time. To overcome this, a fully automated robotic field phenotyping platform has been established, with a dedicated sensor array that can be accurately positioned in three dimensions and is mounted on fixed rails, facilitating continual, high-throughput monitoring of crop performance. The sensor array comprises high-resolution visible, chlorophyll fluorescence and thermal infrared cameras, two hyperspectral imagers and dual 3D laser scanners. It enables specific growth measurements and identification of key growth stages with dense temporal and spectral resolution. Together, this platform produces a detailed description of canopy development across the crop's entire lifecycle, with a high degree of accuracy and reproducibility.
Numerous agronomical applications of remote sensing have been proposed in recent years, including field-scale water stress assessment by thermal imagery. The miniaturization of thermal cameras allows them to be carried on board unmanned aerial vehicles (UAVs), but these systems have no temperature control, so drifts during data acquisition have to be carefully corrected. This manuscript presents a comprehensive methodology for radiometric correction of UAV remotely sensed thermal images to obtain (combined with visible and near-infrared data) multispectral ortho-mosaics, as a prior step for further image-based assessment of tree response to water stress. In summer 2013, UAV flights were performed over an apple tree orchard located in southern France, covering four dates and five times of day. The 6400 m2 field plot comprised 520 apple trees, half well irrigated and half subjected to progressive summer water stress. Temperatures of four stable on-ground reference targets were continuously measured by thermo-radiometers for radiometric calibration purposes. Using self-developed software, frames were automatically extracted from the thermal video files and then radiometrically calibrated against the thermal target data. Once ortho-mosaics were obtained, the root mean squared error (RMSE) was calculated, and the accuracy obtained allowed multi-temporal mosaic comparison. Results showed a good relationship between calibrated images and on-ground data. Significantly higher canopy temperatures were found in water-stressed trees compared with well-irrigated ones. As high-resolution field ortho-mosaics were obtained, comparison between trees opens the possibility of using multispectral data as phenotypic variables for the characterization of individual plant response to drought.
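As an illustration of the per-frame radiometric calibration step described above, the following sketch fits a linear gain/offset between image values sampled over the on-ground reference targets and their thermo-radiometer temperatures, applies it to a full frame, and reports an RMSE against the target measurements. The function names, example values and the choice of a simple linear model are assumptions for illustration, not the authors' software.

```python
import numpy as np

def fit_thermal_calibration(target_dn, target_temp_c):
    """Fit a linear gain/offset between image values sampled over the
    reference targets and their ground-measured temperatures."""
    gain, offset = np.polyfit(np.asarray(target_dn, float),
                              np.asarray(target_temp_c, float), deg=1)
    return gain, offset

def calibrate_frame(frame_dn, gain, offset):
    """Convert a raw thermal frame to calibrated temperature (deg C)."""
    return gain * frame_dn.astype(float) + offset

# Example: four reference targets sampled in one extracted video frame
dn = [7210, 7555, 7900, 8300]      # mean image values over each target (illustrative)
temp = [24.1, 31.6, 39.2, 48.0]    # simultaneous thermo-radiometer readings (illustrative)
g, o = fit_thermal_calibration(dn, temp)

frame = np.random.randint(7000, 8500, size=(480, 640))  # placeholder raw frame
temperature_map = calibrate_frame(frame, g, o)

# RMSE between predicted and measured target temperatures quantifies calibration accuracy
pred = g * np.asarray(dn, float) + o
rmse = float(np.sqrt(np.mean((pred - np.asarray(temp, float)) ** 2)))
```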
Recording growth stage information is an important aspect of precision agriculture, crop breeding and phenotyping. In practice, crop growth stage is still primarily monitored by eye, which is not only laborious and time-consuming, but also subjective and error-prone. Applying computer vision to digital images offers a high-throughput and non-invasive alternative to manual observations, and its use in agriculture and high-throughput phenotyping is increasing. This paper presents an automated, computer-vision-based method to detect the wheat heading and flowering stages in digital images. A bag-of-visual-words technique is used to identify the growth stage during heading and flowering. The scale-invariant feature transform (SIFT) is used for low-level feature extraction; subsequently, locality-constrained linear coding and spatial pyramid matching are applied in the mid-level representation stage. Finally, a support vector machine classifier is used to train and test the data samples. The method outperformed existing algorithms, yielding accuracies of 95.24%, 97.79% and 99.59% at early, medium and late stages of heading, respectively, and 85.45% for flowering detection. The results also illustrate that the proposed method is robust enough to handle complex environmental changes (illumination, occlusion). Although the proposed method is applied only to identifying growth stage in wheat, there is potential for application to other crops and categorization tasks, such as disease classification.
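A minimal sketch of a bag-of-visual-words pipeline of the kind described above, using OpenCV SIFT features, a k-means vocabulary and a linear SVM. It deliberately simplifies the mid-level stage (a plain assignment histogram instead of locality-constrained linear coding with spatial pyramid matching), and all function names, parameters and the training snippet are illustrative assumptions rather than the paper's implementation.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def sift_descriptors(image_bgr):
    """Low-level features: SIFT descriptors for one image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def build_codebook(all_descriptors, k=200):
    """Learn a visual vocabulary with k-means over all training descriptors."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.vstack(all_descriptors))

def encode(desc, codebook):
    """Mid-level representation: hard-assignment histogram over visual words
    (the paper uses LLC coding with spatial pyramid matching instead)."""
    words = codebook.predict(desc)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-9)

# Assumed usage: train_images and train_labels supplied by the caller
# descs = [sift_descriptors(img) for img in train_images]
# codebook = build_codebook(descs)
# X = np.array([encode(d, codebook) for d in descs])
# clf = LinearSVC(C=1.0).fit(X, train_labels)   # growth-stage classifier
```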
Crop yield is an essential measure for breeders, researchers and farmers and can be calculated from the number of ears per square meter, grains per ear, and thousand-grain weight. Manual wheat ear counting, required in breeding programs to evaluate crop yield potential, is labor-intensive and expensive; thus, the development of a real-time wheat head counting system would be a significant advancement. In this paper, we propose a computationally efficient system called DeepCount to automatically identify and count the number of wheat spikes in digital images taken under natural field conditions. The proposed method tackles wheat spike quantification by segmenting an image into superpixels using simple linear iterative clustering (SLIC), deriving canopy-relevant features, and then constructing a rational feature model fed into a deep convolutional neural network (CNN) for semantic segmentation of wheat spikes. As the method is based on a deep learning model, it replaces the hand-engineered features required by traditional machine learning methods with more efficient algorithms. The method is tested on digital images taken directly in the field at different stages of ear emergence/maturity (using visually different wheat varieties), with different canopy complexities (achieved through varying nitrogen inputs) and at different heights above the canopy under varying environmental conditions. In addition, the proposed technique is compared with a wheat ear counting method based on a previously developed edge detection technique and morphological analysis. The proposed approach is validated against image-based ear counting and ground-based measurements. The results demonstrate that the DeepCount technique is highly robust to variables such as growth stage and weather conditions, demonstrating the feasibility of the approach in real scenarios. The system is a leap toward a portable, smartphone-assisted wheat ear counting system, reduces the labor involved, and is suitable for high-throughput analysis. It may also be adapted to work on red-green-blue (RGB) images acquired from unmanned aerial vehicles (UAVs).
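The sketch below illustrates only the segmentation-and-counting skeleton implied by that description: SLIC superpixels from scikit-image, a placeholder for the per-superpixel CNN decision, and connected-component labelling to turn the resulting spike mask into an ear count. It is an assumed simplification for illustration, not the DeepCount implementation.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.measure import label

def superpixels(image_rgb, n_segments=2000):
    """Over-segment the canopy image with SLIC (first stage of the pipeline)."""
    return slic(image_rgb, n_segments=n_segments, compactness=10, start_label=1)

def count_spikes(segments, is_spike):
    """Given a per-superpixel decision (standing in for the CNN classifier
    described in the abstract; here any callable taking a superpixel id),
    build a binary spike mask and count connected regions as ear candidates."""
    spike_ids = [s for s in np.unique(segments) if is_spike(s)]
    mask = np.isin(segments, spike_ids)
    return int(label(mask).max())

# Assumed usage:
# segs = superpixels(image_rgb)
# n_ears = count_spikes(segs, lambda s: cnn_predicts_spike(s))  # cnn_predicts_spike is hypothetical
```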
Background: Accurately segmenting vegetation from the background within digital images is both a fundamental and a challenging task in phenotyping. The performance of traditional methods is satisfactory in homogeneous environments; however, performance decreases when they are applied to images acquired in dynamic field environments. Results: In this paper, a multi-feature learning method is proposed to quantify vegetation growth in outdoor field conditions. The introduced technique is compared with state-of-the-art and other learning methods on digital images. All methods are compared and evaluated under different environmental conditions using the following criteria: (1) comparison with ground-truth images, (2) variation over a day with changes in ambient illumination, (3) comparison with manual measurements and (4) an estimation of performance over the full life cycle of a wheat canopy. Conclusion: The method described is capable of coping with the environmental challenges faced in field conditions, with high levels of adaptiveness and without the need to adjust a threshold for each digital image. The proposed method is also an ideal candidate for processing a time series of phenotypic information acquired in the field throughout crop growth. Moreover, the introduced method is not limited to growth measurements and can be applied to other tasks such as identifying weeds, diseases and stress.
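As a rough sketch of a multi-feature, learning-based vegetation segmentation of this kind (the actual feature set and classifier used in the paper may differ), per-pixel features drawn from several colour spaces can be fed to a standard classifier trained on hand-labelled pixels:

```python
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pixel_features(image_bgr):
    """Stack per-pixel features from several colour spaces (BGR, HSV, Lab);
    one plausible multi-feature representation, assumed for illustration."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2Lab)
    feats = np.concatenate([image_bgr, hsv, lab], axis=2).astype(float)
    return feats.reshape(-1, feats.shape[2])

# Assumed usage with hand-labelled pixels (vegetation = 1, background = 0):
# X = pixel_features(train_image); y = train_mask.ravel()
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
# mask = clf.predict(pixel_features(test_image)).reshape(test_image.shape[:2])
```

Learning the decision from multiple colour-space features is what removes the need to hand-tune a threshold per image under changing illumination.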
Highlight: Thermal infrared imagery contributes to the phenotyping of crop response to water stress. Based on multispectral images, the Vegetation Index–Temperature (VIT) concept constitutes a relevant approach.
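A minimal sketch of the Vegetation Index–Temperature idea, assuming co-registered red, near-infrared and thermal bands: NDVI is used to isolate canopy pixels, whose temperatures are then summarised. The NDVI threshold and function names are illustrative assumptions, not the paper's method.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalised difference vegetation index per pixel."""
    return (nir - red) / (nir + red + eps)

def canopy_temperature_stats(nir, red, thermal_c, ndvi_threshold=0.5):
    """Pair the vegetation index with temperature: keep pixels whose NDVI
    marks them as canopy and summarise their temperature (threshold assumed)."""
    vi = ndvi(nir, red)
    canopy = thermal_c[vi > ndvi_threshold]
    return float(canopy.mean()), float(canopy.std())
```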
Genetic studies increasingly rely on high-throughput phenotyping, but the resulting longitudinal data pose analytical challenges. We used canopy height data from an automated field phenotyping platform to compare several approaches to scanning for quantitative trait loci (QTLs) and performing genomic prediction in a wheat recombinant inbred line mapping population based on up to 26 sampled time points (TPs). We detected four persistent QTLs (i.e. expressed for most of the growing season), with both empirical and simulation analyses demonstrating superior statistical power of detecting such QTLs through functional mapping approaches compared with conventional individual TP analyses. In contrast, even very simple individual TP approaches (e.g. interval mapping) had superior detection power for transient QTLs (i.e. expressed during very short periods). Using spline-smoothed phenotypic data resulted in improved genomic predictive abilities (5–8% higher than individual TP prediction), while the effect of including significant QTLs in prediction models was relatively minor (<1–4% improvement). Finally, although QTL detection power and predictive ability generally increased with the number of TPs analysed, gains beyond five or 10 TPs chosen based on phenological information had little practical significance. These results will inform the development of an integrated, semi-automated analytical pipeline, which will be more broadly applicable to similar data sets in wheat and other crops.
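As a sketch of the spline-smoothing step used ahead of QTL scanning and genomic prediction, a smoothing spline can be fitted to each line's canopy-height time series and then evaluated (or differentiated) at the time points of interest. The helper name and parameters are assumptions for illustration, not the authors' analytical pipeline.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth_height_series(days, height_cm, smoothing=None):
    """Fit a smoothing spline to one line's canopy-height time series; the
    smoothed values (or derived traits such as growth rate) can then feed
    QTL scans or genomic prediction models."""
    return UnivariateSpline(np.asarray(days, float),
                            np.asarray(height_cm, float), s=smoothing)

# Assumed usage with up to 26 sampled time points per line:
# days = np.array([...]); height = np.array([...])
# spl = smooth_height_series(days, height)
# smoothed = spl(days)                   # de-noised trait values per time point
# growth_rate = spl.derivative()(days)   # derived trait: growth rate over time
```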
Highlight: This research successfully used image-based spectral indices acquired in the field to assess the variability of drought response in a tree mapping population and to detect the underlying genetic determinants.