Abstract (Remote Sens. 2015, 7, 13844): The monitoring of crops is of vital importance for food and environmental security in a global and European context. The main goal of this study was to assess the crop mapping performance provided by the 100 m spatial resolution of PROBA-V compared to coarser-resolution data (e.g., PROBA-V at 300 m) for a 2250 km² test site in Bulgaria. The focus was on winter and summer crop mapping with three to five classes. For classification, single- and multi-date spectral data were used, as well as NDVI time series. Our results demonstrate that crop identification using 100 m PROBA-V data performed significantly better in all experiments than with the PROBA-V 300 m data. PROBA-V multispectral imagery acquired in spring (March) was the most appropriate for winter crop identification, while satellite data acquired in summer (July) were superior for summer crop identification. The classification accuracy from PROBA-V 100 m compared to PROBA-V 300 m improved by 5.8% to 14.8%, depending on crop type. Stacked multi-date satellite images with three to four acquisitions gave overall classification accuracies of 74%–77% (PROBA-V 100 m data) and 66%–70% (PROBA-V 300 m data) with four classes (wheat, rapeseed, maize, and sunflower). This demonstrates that three to four image acquisitions, well distributed over the growing season, capture most of the spectral and temporal variability in our test site. Regarding the PROBA-V NDVI time series, useful results were only obtained if crops were grouped into two broader crop type classes (summer and winter crops); mapping accuracies decreased significantly when mapping more classes. Again, a positive impact of the increased spatial resolution was noted. Together, the findings demonstrate the positive effect of the 100 m resolution PROBA-V data compared to the 300 m data for crop mapping.
This has important implications for future data provision and strengthens the arguments for a second generation of this mission, which was originally designed solely as a "gap-filler".
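The multi-date approach above stacks images from different acquisition dates into one per-pixel feature vector before classification. As a minimal illustrative sketch, the snippet below stacks two hypothetical dates and assigns each pixel to the nearest class centroid; the nearest-centroid rule, the array shapes, and all numeric values are assumptions for illustration, not the classifier or data used in the study.

```python
import numpy as np

def stack_dates(images):
    """Stack per-date band arrays (each H x W x B) into one H x W x (B*T) feature cube."""
    return np.concatenate(images, axis=-1)

def nearest_centroid_classify(cube, centroids):
    """Assign each pixel to the crop class whose spectral-temporal centroid is closest.

    cube:      H x W x F stacked features
    centroids: C x F array, one row per crop class
    Returns an H x W array of class indices.
    """
    h, w, f = cube.shape
    flat = cube.reshape(-1, f)                                    # pixels x features
    d = np.linalg.norm(flat[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1).reshape(h, w)

# Tiny hypothetical scene: two dates, one band, two pixels, two classes.
march = np.array([[[0.6], [0.2]]])   # winter crops are green in March
july = np.array([[[0.3], [0.7]]])    # summer crops peak in July
cube = stack_dates([march, july])    # shape (1, 2, 2)
centroids = np.array([[0.6, 0.3],    # class 0: winter-crop profile
                      [0.2, 0.7]])   # class 1: summer-crop profile
labels = nearest_centroid_classify(cube, centroids)
# labels -> [[0, 1]]: one winter-crop pixel, one summer-crop pixel
```

The point the abstract makes carries over directly: each additional well-timed date appends features that separate crop classes whose spectra coincide on any single date.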
The utility of unmanned aerial vehicle (UAV) imagery in retrieving phenotypic data to support plant breeding research has been a topic of increasing interest in recent years. The advantages of image-based phenotyping are related to the high spatial and temporal resolution of the retrieved data and the non-destructive and rapid method of data acquisition. This study trains parametric and nonparametric regression models to retrieve leaf area index (LAI), fraction of absorbed photosynthetically active radiation (fAPAR), fractional vegetation cover (fCover), leaf chlorophyll content (LCC), canopy chlorophyll content (CCC), and grain yield (GY) of a winter durum wheat breeding experiment from four-band UAV images. A ground dataset, collected during two field campaigns and complemented with data from a previous study, is used for model development. The dataset is split at random into two parts, one for training and one for testing the models. The tested parametric models use vegetation index formulas and parametric functions. The tested nonparametric models are partial least squares regression (PLSR), random forest regression (RFR), support vector regression (SVR), kernel ridge regression (KRR), and Gaussian process regression (GPR). The retrieved biophysical variables, along with traditional phenotypic traits (plant height, yield, and tillering), are analysed to detect genetic diversity, proximity, and similarity in the studied genotypes. Analysis of variance (ANOVA), Duncan's multiple range test, correlation analysis, and principal component analysis (PCA) are performed with the phenotypic traits. The parametric and nonparametric models show close results for GY retrieval, with the parametric models indicating slightly higher accuracy (R² = 0.49; RMSE = 0.58 kg/plot; rRMSE = 6.1%). However, the nonparametric model GPR computes a per-pixel uncertainty estimate, making it more appealing for operational use.
Furthermore, our results demonstrate that the grain-filling phenological stage was better suited than flowering for predicting GY. The nonparametric models show better results for biophysical variable retrieval, with GPR presenting the highest prediction performance. Nonetheless, robust models are found only for LAI (R² = 0.48; RMSE = 0.64; rRMSE = 13.5%) and LCC (R² = 0.49; RMSE = 31.57 mg m⁻²; rRMSE = 6.4%), and these are therefore the only remotely sensed phenotypic traits included in the statistical analysis for a preliminary assessment of wheat productivity. The results from ANOVA and PCA illustrate that the retrieved remotely sensed phenotypic traits are a valuable addition to the traditional phenotypic traits for plant breeding studies. We believe that these preliminary results could speed up crop improvement programs; however, stronger interdisciplinary research is still needed, as well as uncertainty estimation of the remotely sensed phenotypic traits.
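The per-pixel uncertainty that makes GPR attractive here comes from the Gaussian process posterior, which returns a predictive standard deviation alongside the mean. Below is a minimal numpy sketch of an exact GP posterior with an RBF kernel; the vegetation-index inputs, LAI targets, and hyperparameters are invented for illustration and are not the study's model or data.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential (RBF) kernel between two sets of 1-D inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2, length_scale=1.0):
    """Exact GP posterior mean and standard deviation (the per-pixel uncertainty)."""
    K = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train, length_scale)
    Kss = rbf_kernel(x_test, x_test, length_scale)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std

# Hypothetical example: predict a trait (e.g. LAI) from a vegetation index.
vi = np.array([0.2, 0.4, 0.6, 0.8])    # training vegetation-index values
lai = np.array([1.0, 2.1, 3.0, 3.9])   # matching ground-truth LAI
mean, std = gp_predict(vi, lai, np.array([0.5, 1.5]))
# The extrapolated query (1.5) gets a larger predictive std than the
# interpolated one (0.5), flagging pixels where the model is unreliable.
```

Applied per pixel of a UAV image, the `std` map is exactly the kind of uncertainty layer the abstract highlights as an advantage for operational use.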
This paper presents the results of a sub-pixel classification of crop types in Bulgaria from PROBA-V 100 m normalized difference vegetation index (NDVI) time series. Two sub-pixel classification methods, artificial neural network (ANN) and support vector regression (SVR), were used, where the output was a set of area fraction images (AFIs) at 100 m resolution with pixels containing estimated area fractions of each class. High-resolution maps of two test sites derived from Sentinel-2 classifications were used to obtain training data for the sub-pixel classifications. The estimated area fractions correspond well with the true area fractions when aggregated to regions of 10 × 10 km², especially when the SVR method was used. For the five dominant classes in the test sites, the R² obtained after the aggregation with the SVR method was 86% (winter cereals), 81% (sunflower), 92% (broad-leaved forest), 89% (maize), and 67% (grasslands).

Various sub-pixel classification methods have been applied in the literature, including the linear mixture model (LMM) [4], artificial neural networks (ANN) [9–11], regression trees [5], fuzzy classification [6,12], and support vector machines (SVM) [13]. Liu and Wu [11] argued that non-linear models, especially neural network-based models, outperform the traditional linear unmixing models. Support for this conclusion is given by Verbeiren et al. [1], who compared ANN and LMM in an attempt to generate a sub-pixel map from SPOT-VEGETATION 1 km data across Belgium. The authors showed that the ANN approach outperformed LMM and that, for the major classes, the acreage estimates obtained via ANN, when aggregated to the level of the administrative regions, were in good agreement with the true values. A multilayer perceptron (MLP) neural network regression algorithm has also been shown to outperform the regression tree algorithm [5]. Atkinson et al.
[9] also obtained better results with ANN than with the other tested methods but pointed out that its successful implementation depends on accurate co-registration and the availability of a training dataset. Liu et al. [14] compared a linear spectral unmixing model, a supervised fully fuzzy classification method, and an SVM to generate fraction maps, and achieved the most accurate fractions using the SVM. Six machine learning methods were compared in a recent study [15] based on multiple criteria, where the authors found that, in general, no method performs best for all evaluation criteria. However, when both the time available for preprocessing and the size of the training dataset are unconstrained, support vector regression (SVR) and least-squares SVM for regression clearly outperform the other methods.

Regarding the satellite imagery widely used for agricultural monitoring, the SPOT-VEGETATION (SPOT-VGT) sensors have provided one of the longest time series of multispectral reflectance, starting in 1998. The mission was succeeded in 2013 by PROBA-V (PRoject for On-Board Autonomy-Vegetation), a small satellite commissioned by the European Space Agency. The sensor on board PROBA-V generates products at three different ...
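Of the unmixing methods surveyed above, the linear mixture model (LMM) is the simplest baseline: a coarse pixel's signal is modeled as an area-weighted sum of pure-class ("endmember") signals, and the fractions are recovered by least squares. The sketch below shows this for a hypothetical NDVI time series; the endmember profiles and the two-class setup are invented for illustration, and this is the baseline the cited studies compare against, not the ANN/SVR approach of the paper itself.

```python
import numpy as np

def unmix_lmm(pixel, endmembers):
    """Estimate per-class area fractions of one coarse pixel via the linear
    mixture model: pixel ~= endmembers.T @ fractions.

    pixel:      length-T NDVI time series of the mixed pixel
    endmembers: C x T matrix of pure-class NDVI profiles
    Returns a length-C fraction vector, clipped to [0, 1] and renormalized to sum to 1.
    """
    f, *_ = np.linalg.lstsq(endmembers.T, pixel, rcond=None)
    f = np.clip(f, 0.0, 1.0)   # enforce physically meaningful fractions
    return f / f.sum()

# Hypothetical example: a pixel that is 70% winter cereal and 30% maize.
winter = np.array([0.7, 0.5, 0.2])   # pure winter-cereal NDVI profile (3 dates)
maize = np.array([0.2, 0.5, 0.8])    # pure maize NDVI profile
mixed = 0.7 * winter + 0.3 * maize
fractions = unmix_lmm(mixed, np.vstack([winter, maize]))
# fractions -> approximately [0.7, 0.3]
```

The non-linear methods favored in the cited comparisons (ANN, SVR) replace this fixed linear forward model with a learned regression from the time series to the fraction vector, which is what lets them capture mixing effects the LMM cannot.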