The study analyses the possible use of a limited number of Sentinel-2 and Sentinel-1 images to verify the crop declarations that EU farmers submit to receive subsidies. The declarations used in the research were randomly divided into two independent sets (training and test). Based on the training set, supervised classification of both single images and their combinations was performed using the random forest algorithm in SNAP (ESA) and our own Python scripts. A comparative accuracy analysis was performed on the basis of two forms of the confusion matrix (the full confusion matrix commonly used in remote sensing and the binary confusion matrix used in machine learning) and various accuracy metrics (overall accuracy, accuracy, specificity, sensitivity, etc.). The highest overall accuracy (81%) was obtained in the simultaneous classification of multitemporal images (three Sentinel-2 and one Sentinel-1). An unexpectedly high accuracy (79%) was achieved in the classification of a single Sentinel-2 image from the end of May 2018. Notably, the accuracy of the random forest method trained on the entire training set is 80%, whereas with the sampling method it is ca. 50%. The analysis of the various accuracy metrics shows that the metrics used in machine learning, such as specificity and accuracy, are always higher than the overall accuracy. These metrics should be used with caution because, unlike the overall accuracy, they count not only true positives but also true negatives as correct results, giving the impression of higher accuracy. Correct calculation of overall accuracy values is essential for comparative analyses. Reporting the mean per-class accuracy as overall accuracy gives a false impression of high accuracy. In our case, the difference was 10–16% for the validation data and 25–45% for the test data.
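The gap between the machine-learning metrics and the overall accuracy can be illustrated with a minimal sketch. The 3-class confusion matrix below is hypothetical, not data from the study; it only demonstrates how one-vs-rest (binary) accuracy and specificity come out higher than the remote-sensing overall accuracy:

```python
# Hypothetical 3-class confusion matrix (rows: true class, cols: predicted).
cm = [
    [50,  5,  5],
    [10, 30, 10],
    [ 5,  5, 40],
]

n_classes = len(cm)
total = sum(sum(row) for row in cm)

# Remote-sensing overall accuracy: correct pixels / all pixels.
overall_accuracy = sum(cm[i][i] for i in range(n_classes)) / total

def binary_metrics(cm, k):
    """One-vs-rest metrics for class k, as in a binary confusion matrix."""
    total = sum(sum(row) for row in cm)
    tp = cm[k][k]
    fn = sum(cm[k]) - tp
    fp = sum(cm[i][k] for i in range(len(cm))) - tp
    tn = total - tp - fn - fp          # true negatives inflate the scores
    accuracy = (tp + tn) / total
    specificity = tn / (tn + fp)
    sensitivity = tp / (tp + fn)
    return accuracy, specificity, sensitivity

print(f"overall accuracy: {overall_accuracy:.2f}")   # 0.75
for k in range(n_classes):
    acc, spec, sens = binary_metrics(cm, k)
    print(f"class {k}: accuracy={acc:.2f} "
          f"specificity={spec:.2f} sensitivity={sens:.2f}")
```

For this matrix, every per-class binary accuracy (0.81–0.84) and specificity (0.85–0.91) exceeds the overall accuracy of 0.75, because the many true negatives of the other classes are counted as correct results.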
The paper presents results of the project Cultural Heritage Through Time (CHT2, http://cht2-project.eu), carried out within the framework of the "Joint Programming Initiative on Cultural Heritage" JPI-CH (http://www.jpi-culturalheritage.eu) by an international consortium: Politecnico di Milano (IT), Newcastle University (UK), Salamanca University (ES), and the Stanislaw Staszic Scientific Association SSSA (a non-profit organization) (PL). The aim of the project was the integration of 3D models of buildings, cities, and landscapes for monitoring and preservation of cultural heritage. The research was conducted on three levels of detail according to the CityGML standard: LoD0 – regional, landscape scale; LoD1/LoD2 – urban scale; LoD3 – architectural scale (building exterior). Based on these assumptions, four test sites were selected: i. the city centre of Milan (IT) – urban scale; ii. the medieval walls and the historic centre of Avila (ES) – urban/architectural scale; iii. Hadrian's Wall and its landscape (UK) – landscape scale; iv. the Kraków Fortress (PL) – architectural scale. The final 4D models were published on the Internet. The paper presents the state of the art of the technology for sharing 4D models on the Internet. 4D models are understood here as 3D solid models and point clouds changing over time. Results of initial practical tests of different software packages (commercial: Hexagon and CityEngine from Esri; open-source: 3DHOP and Potree) are also shown. A website, https://cht2.eu, was created where the 4D models from all the project partners are published.