Abstract: Unmanned aerial systems (UASs) and photogrammetric structure-from-motion (SfM) algorithms can assist in biomass assessments in tropical countries and can be a useful tool in local greenhouse gas accounting. This study assessed the influence of image resolution, camera type and side overlap on the prediction accuracy of biomass models constructed from ground-based data and UAS data in miombo woodlands in Malawi. We compared the prediction accuracy of models reflecting two different image resolutions (10 and 15 cm groun…
“…Also, this study did not address the impact of changes in image spatial resolution on panicle detection. However, previous studies on UAS imaging performance have shown that lower-resolution data tend to reduce the accuracy of derived metrics (e.g., plant height and biomass) [69][70][71]. Other studies have also shown that lower-resolution images tend to lower the performance of deep learning models [72,73].…”
Small unmanned aerial systems (UAS) have emerged as high-throughput platforms for the collection of high-resolution image data over large crop fields to support precision agriculture and plant breeding research. At the same time, the improved efficiency in image capture is leading to massive datasets, which pose analysis challenges in providing needed phenotypic data. To complement these high-throughput platforms, there is an increasing need in crop improvement to develop robust image analysis methods to analyze large amounts of image data. Analysis approaches based on deep learning models are currently the most promising and show unparalleled performance in analyzing large image datasets. This study developed and applied an image analysis approach based on a SegNet deep learning semantic segmentation model to estimate sorghum panicle counts, which are critical phenotypic data in sorghum crop improvement, from UAS images over selected sorghum experimental plots. The SegNet model was trained to semantically segment UAS images into sorghum panicles, foliage and exposed ground using 462 labeled images of 250 × 250 pixels; the trained model was then applied to the field orthomosaic to generate a field-level semantic segmentation. Individual panicle locations were obtained after post-processing the segmentation output to remove small objects and split merged panicles. A comparison between model panicle count estimates and manually digitized panicle locations in 60 randomly selected plots showed an overall detection accuracy of 94%. A per-plot panicle count comparison also showed high agreement between estimated and reference panicle counts (Spearman correlation ρ = 0.88, mean bias = 0.65). Misclassifications of panicles during the semantic segmentation step and mosaicking errors in the field orthomosaic contributed mainly to panicle detection errors.
Overall, the approach based on deep learning semantic segmentation showed good promise and, with a larger labeled dataset and extensive hyper-parameter tuning, should provide even more robust and effective characterization of sorghum panicle counts.
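The post-processing step described in the abstract (removing small objects from the segmentation output before counting panicles) can be sketched roughly as below. This is an illustrative reconstruction, not the authors' implementation; the `count_panicles` function and the `min_size` threshold are assumptions.

```python
import numpy as np
from scipy import ndimage

def count_panicles(mask, min_size=20):
    """Count panicle candidates in a binary segmentation mask.

    mask: 2-D boolean array where True marks pixels the model labeled
    as 'panicle'. Connected components smaller than min_size pixels
    are treated as segmentation noise and discarded before counting.
    """
    labeled, n = ndimage.label(mask)                   # connected components
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))  # pixels per component
    keep = sizes >= min_size                           # drop speckle objects
    return int(keep.sum())

# Toy mask: two sizable blobs and one single-pixel speckle.
mask = np.zeros((50, 50), dtype=bool)
mask[5:12, 5:12] = True      # blob 1 (49 px)
mask[30:38, 30:38] = True    # blob 2 (64 px)
mask[45, 45] = True          # speckle (1 px)
print(count_panicles(mask))  # -> 2
```

The paper additionally splits merged panicles, which would require a step such as watershed segmentation on the distance transform of each component; that refinement is omitted here.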
“…Some studies used spectral information [2,11,18,24,26,43,58,[60][61][62] and some structural information [1,8,22,23,28,34,48,50,55,[63][64][65]. Others used both [3][4][5][6]9,12,16,20,21,25,30,33,49,57,[66][67][68], while a few studies used spectral and structural metrics plus another data type [13,27,69] (Table A1). Within these categories, a wide range of species, study areas and methods are examined, demonstrating the applicability of UAS data to AGB estimation in agricultural and non-agricultural environments.…”
Section: Input Data
“…We found 15 research papers [1,8,19,22,23,28,34,48,50,55,[62][63][64]74,75] that used structural measurements alone and 12 papers [4,5,9,12,15,16,20,21,25,30,49,57] that used structural metrics along with spectral data to estimate biomass of vegetation (Table A1). All structural variables used by studies in this review are listed in Table 1.…”
Section: How Well Can Structural Data Estimate Vegetation AGB?
“…Mean height [3,9,12,13,15,16,[19][20][21],23,25,28,30,34,[48][49][50],57,58,63,65,[67][68][69],[74][75][76];
Maximum height [1,3,4,13,28,30,34,48,57,63,65,69];
Minimum height [3,28,34,48,57,63,65,69];
Median height [12,21,27,48,63,65,…”
Section: Height
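The height metrics tabulated in the quoted passage (mean, maximum, minimum and median canopy height) are typically computed per plot from a UAS-derived canopy height model. A minimal sketch, assuming the CHM is available as a 2-D array of heights in metres with NaN marking no-data pixels:

```python
import numpy as np

def height_metrics(chm):
    """Plot-level height metrics from a canopy height model (CHM).

    chm: 2-D array of per-pixel canopy heights (m) for one plot;
    NaN marks pixels with no valid height. Returns the four metrics
    listed in the review's Table 1 excerpt.
    """
    h = chm[np.isfinite(chm)]          # keep valid pixels only
    return {
        "mean":   float(np.mean(h)),
        "max":    float(np.max(h)),
        "min":    float(np.min(h)),
        "median": float(np.median(h)),
    }

chm = np.array([[0.5, 1.0, np.nan],
                [1.5, 2.0, 2.5]])
print(height_metrics(chm))  # mean 1.5, max 2.5, min 0.5, median 1.5
```

In practice the CHM is the difference between an SfM digital surface model and a ground terrain model, but that preprocessing is outside the scope of this sketch.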
“…There is no standardized methodology for planning, collecting and analyzing these data to derive AGB information. Numerous factors related to data collection and analysis methods and the study species and area of interest have the potential to affect the accuracy and predictive capabilities of derived models [2,8,57,58]. Without careful consideration of these factors, AGB estimation may be biased or imprecise, resulting in decreased accuracy of AGB models with potentially negative consequences for inferences and management decisions made from this information.…”
Interest in the use of unmanned aerial systems (UAS) to estimate the aboveground biomass (AGB) of vegetation in agricultural and non-agricultural settings is growing rapidly, but there is no standardized methodology for planning, collecting and analyzing UAS data for this purpose. We synthesized 46 studies from the peer-reviewed literature to provide the first-ever review on the subject. Our analysis showed that spectral and structural data from UAS imagery can accurately estimate vegetation biomass in a variety of settings, especially when both data types are combined. Vegetation-height metrics are useful for trees, while metrics of variation in structure or volume are better for non-woody vegetation. Multispectral indices using NIR and red-edge wavelengths normally have strong relationships with AGB, but RGB-based indices often outperform them in models. Including measures of image texture can improve model accuracy for vegetation with heterogeneous canopies. Vegetation growth structure and phenological stage strongly influence model accuracy and the selection of useful metrics, and should be considered carefully. Additional factors related to the study environment, data collection and analytical approach also impact biomass estimation and need to be considered throughout the workflow. Our review shows that UASs provide a capable tool for fine-scale, spatially explicit estimates of vegetation AGB and are an ideal complement to existing ground- and satellite-based approaches. We recommend future studies aimed at emerging UAS technologies and at evaluating the effect of vegetation type and growth stages on AGB estimation.
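The review's observation that RGB-based indices often rival multispectral ones can be made concrete with the widely used Excess Green index, ExG = 2g − r − b, computed on brightness-normalized chromatic coordinates. This sketch is illustrative only; the function name and the normalization convention are assumptions, not taken from any reviewed study.

```python
import numpy as np

def excess_green(rgb):
    """Excess Green index (ExG = 2g - r - b) from an RGB image.

    rgb: H x W x 3 float array with values in [0, 1]. Channels are
    first normalized to chromatic coordinates (each divided by the
    per-pixel channel sum) so overall brightness cancels out; higher
    ExG indicates greener, more vegetated pixels.
    """
    total = rgb.sum(axis=-1)
    total[total == 0] = 1.0   # guard against division by zero on black pixels
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2 * g - r - b

# Pure green pixel -> ExG = 2.0; neutral gray pixel -> ExG = 0.0.
demo = np.array([[[0.0, 1.0, 0.0], [0.5, 0.5, 0.5]]])
print(excess_green(demo))
```

Per-plot summaries of such an index (mean, percentiles) would then enter a biomass regression alongside structural metrics, which is the combined spectral-plus-structural approach the review found most accurate.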