Abstract. Plastic is the world’s third most produced industrial material (after concrete and steel), yet only about 9% of used plastic is recycled. The remainder is either incinerated or accumulates in landfills and in the environment, with serious long-term consequences in the latter case. A significant part of this plastic waste is dispersed in aquatic environments, with a dramatic impact on aquatic flora and fauna. This has motivated several works aiming at developing methodologies and automatic or semi-automatic tools for plastic pollution detection, in order to enable and facilitate its recovery. This paper deals with the automatic detection of plastic waste in fluvial and aquatic environments. The goal is to exploit the well-recognized potential of machine learning tools in object detection applications. A machine learning tool based on random forest classifiers has been developed to detect plastic objects in multi-spectral imagery collected by an unmanned aerial vehicle (UAV). In the developed approach, the outcome is determined by combining two random forest classifiers with an area-based selection criterion. The approach is tested on 154 images collected by a multi-spectral proximity sensor, namely the MAIA-S2 camera, in a fluvial environment on the Arno river (Italy), where a controlled artificial scenario was created by anchoring plastic samples to the ground. The obtained results are quite satisfactory in terms of object detection accuracy and recall (both higher than 98%), while precision and quality are remarkably lower. The overall performance also depends on the UAV flight altitude, worsening at higher altitudes, as expected.
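A minimal sketch of such a fusion scheme is given below, assuming per-pixel classification with two already fitted scikit-learn random forests (`rf_a`, `rf_b`), agreement-based fusion, and a connected-component area threshold. The fusion rule, the class encoding (plastic = 1) and the `min_area` value are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical sketch: fuse two per-pixel random forest classifiers and
# keep only detections whose connected-component area exceeds a threshold.
import numpy as np
from scipy import ndimage

def detect_plastic(bands, rf_a, rf_b, min_area=25):
    """bands: (H, W, C) multi-spectral image; rf_a, rf_b: fitted
    sklearn RandomForestClassifier instances; min_area in pixels."""
    h, w, c = bands.shape
    pixels = bands.reshape(-1, c)
    # Assumed fusion rule: a pixel is flagged only when both classifiers
    # label it as plastic (class 1).
    mask = (rf_a.predict(pixels) == 1) & (rf_b.predict(pixels) == 1)
    mask = mask.reshape(h, w)
    # Area-based selection: drop connected components smaller than min_area.
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_area) + 1)
    return keep
```

The AND-combination trades recall for precision relative to either classifier alone, while the area filter suppresses isolated false-positive pixels; other fusion rules (e.g. probability averaging) would fit the same skeleton.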
Abstract. The development of remote sensing techniques has dramatically improved human knowledge of natural phenomena and the real-time monitoring and interpretation of events happening in the environment. Recently developed terrestrial, aerial and satellite remote sensors have made huge amounts of data available. The large size of such data is leading the research community to search for efficient methods for real-time information extraction and, more generally, for understanding the collected data. Nowadays, this is typically done by means of artificial intelligence-based methods, and more specifically by means of machine learning tools. Focusing on semantic segmentation, which is clearly related to a proper interpretation of the acquired remote sensing data, supervised machine learning is often used: it relies on the availability of a set of ground-truth labeled data, which are used to properly train a machine learning classifier. Although such a classifier, after a proper training phase, usually yields quite effective segmentation results, the production of ground-truth labeled data is usually a very laborious and time-consuming task performed by a human operator. Motivated by this consideration, this work introduces a graphical interface developed to support the semi-automatic semantic segmentation of images acquired by a UAS. Some of the potentialities of the proposed graphical interface are shown in the specific case of plastic litter detection in multi-spectral images.
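As an illustration of the semi-automatic idea (not the paper's actual interface), the sketch below propagates a few operator-drawn scribbles to a full per-pixel segmentation with a random forest, so the operator corrects a proposal instead of labeling every pixel. The function name and the scribble encoding (-1 = unlabeled) are hypothetical.

```python
# Illustrative sketch of scribble-based semi-automatic segmentation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def propose_labels(bands, scribbles):
    """bands: (H, W, C) multi-spectral image;
    scribbles: (H, W) int map of sparse user labels, -1 = unlabeled."""
    h, w, c = bands.shape
    x = bands.reshape(-1, c)
    y = scribbles.reshape(-1)
    labeled = y >= 0
    # Train on the few labeled pixels, then predict a label for every pixel;
    # the operator refines this proposal rather than annotating from scratch.
    clf = RandomForestClassifier(n_estimators=100).fit(x[labeled], y[labeled])
    return clf.predict(x).reshape(h, w)
```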
Abstract. Over the past decade, the use of machine learning and deep learning algorithms to support 3D semantic segmentation of point clouds has significantly increased, and their impressive results have led to the application of such algorithms to the semantic modeling of heritage buildings. Nevertheless, such applications still face several significant challenges, caused in particular by the large number of samples required during training, by the lack of specific data for heritage building scenarios, and by the time-consuming operations required for data collection and annotation. This paper aims to address these challenges by proposing a workflow for synthetic image data generation in heritage building scenarios. Specifically, the procedure generates multiple rendered images from various viewpoints based on a 3D model of a building, together with the per-pixel segmentation maps associated with these images. In the first part, the procedure is tested by generating a synthetic simulation of a real-world scenario using the case study of the Spedale del Ceppo. In the second part, several experiments are conducted to assess the impact of synthetic data during training: three neural network architectures are trained on the generated synthetic images, and their performance in predicting the corresponding real scenarios is evaluated.
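A possible rendering loop of this kind is sketched below using trimesh and pyrender; the paper's actual toolchain is not specified, so the model file name, the orbit viewpoints and the single-node class colouring are placeholders.

```python
# Hypothetical sketch: render views of a 3D building model from several
# viewpoints, plus a per-pixel segmentation map for each view.
import numpy as np
import trimesh
import pyrender

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """Camera-to-world pose looking from eye toward target
    (pyrender cameras look along their local -Z axis)."""
    z = eye - target; z = z / np.linalg.norm(z)
    x = np.cross(up, z); x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2], pose[:3, 3] = x, y, z, eye
    return pose

tm = trimesh.load("building.glb", force="mesh")   # placeholder model file
scene = pyrender.Scene(ambient_light=np.ones(3))
node = scene.add(pyrender.Mesh.from_trimesh(tm))
camera = scene.add(pyrender.PerspectiveCamera(yfov=np.pi / 3.0))
renderer = pyrender.OffscreenRenderer(800, 600)
center = tm.centroid

for a in np.linspace(0.0, 2.0 * np.pi, num=36, endpoint=False):
    # Orbit viewpoints around the model; a real pipeline would sample
    # poses matching plausible survey positions.
    eye = center + 10.0 * np.array([np.cos(a), np.sin(a), 0.3])
    scene.set_pose(camera, look_at(eye, center))
    color, _ = renderer.render(scene)             # rendered RGB view
    # Per-pixel segmentation: each scene node is painted one flat colour.
    seg, _ = renderer.render(scene, flags=pyrender.RenderFlags.SEG,
                             seg_node_map={node: (255, 0, 0)})
```

In a full pipeline, each building element would be a separate node with its own class colour in `seg_node_map`, yielding segmentation maps that align pixel-for-pixel with the rendered images.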