We develop a new algorithm for the computation of the geometrical shock dynamics (GSD) model. The method relies on the fast-marching paradigm and enables the discrete evaluation of the first arrival time of a shock wave and of its local velocity on a Cartesian grid. The proposed algorithm is based on a second-order upwind finite difference scheme and reduces to a local nonlinear system of two equations solved by an iterative procedure. Reference solutions are built for a smooth radial configuration and for the 2D Riemann problem. The link between the GSD model and p-systems is given. Numerical experiments demonstrate the accuracy of the scheme and its ability to handle singularities.
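The fast-marching paradigm on which the scheme builds can be illustrated on the simpler eikonal equation |∇T| = 1/F: grid points are accepted in increasing arrival-time order and each neighbour is updated by solving a small local problem. The sketch below is a minimal first-order fast-marching solver; it only illustrates the paradigm and is not the paper's second-order GSD scheme, which replaces the local quadratic with a nonlinear system of two equations coupling arrival time and shock velocity. The function name, source placement at the grid corner, and grid handling are assumptions.

```python
import heapq
import numpy as np

def fast_marching(speed, h=1.0):
    """First-order fast-marching solver for |grad T| = 1/speed on a
    Cartesian grid, with the source at (0, 0). Illustrative sketch only:
    the paper's GSD scheme is second order and solves a local nonlinear
    2x2 system instead of this local quadratic."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    accepted = np.zeros((ny, nx), dtype=bool)
    T[0, 0] = 0.0
    heap = [(0.0, 0, 0)]
    while heap:
        t, i, j = heapq.heappop(heap)
        if accepted[i, j]:
            continue
        accepted[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if not (0 <= ni < ny and 0 <= nj < nx) or accepted[ni, nj]:
                continue
            # Upwind (smallest) neighbour value along each axis.
            tx = min(T[ni, nj - 1] if nj > 0 else np.inf,
                     T[ni, nj + 1] if nj < nx - 1 else np.inf)
            ty = min(T[ni - 1, nj] if ni > 0 else np.inf,
                     T[ni + 1, nj] if ni < ny - 1 else np.inf)
            f = h / speed[ni, nj]
            a, b = sorted((tx, ty))
            # Solve (T - tx)^2 + (T - ty)^2 = f^2; fall back to a
            # one-sided update when causality forbids the two-sided one.
            if b - a >= f:
                t_new = a + f
            else:
                t_new = 0.5 * (a + b + np.sqrt(2.0 * f**2 - (a - b)**2))
            if t_new < T[ni, nj]:
                T[ni, nj] = t_new
                heapq.heappush(heap, (t_new, ni, nj))
    return T
```

For example, `fast_marching(np.ones((50, 50)))` recovers, up to discretization error, the distance field from the grid corner at unit speed.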
The increasing availability of large-scale Global Positioning System (GPS) data stemming from in-vehicle embedded terminal devices enables the design of methods that derive road network cartographic information from drivers' recorded traces. Machine learning approaches have been proposed in the past to automate road network map inference, and this approach has recently been extended successfully to infer road attributes as well, such as speed limits or the number of lanes. In this paper, we address the problem of detecting traffic signals from a set of vehicle speed profiles, from a classification perspective. Each data instance is a speed-versus-distance plot depicting over a hundred profiles on a 100-meter-long road span. We propose three different ways of deriving features: the first relies on the raw speed measurements; the second uses image recognition techniques; and the third is based on functional data analysis. We feed them into the most commonly used classification algorithms, and a comparative analysis shows that a functional description of speed profiles with wavelet transforms seems to outperform the other approaches with most of the tested classifiers.
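As a rough illustration of the third, functional approach, the sketch below (not the paper's code) decomposes each speed profile with a discrete wavelet transform and feeds the concatenated coefficients into a standard classifier. The wavelet family (db4), the decomposition level, the random-forest classifier, and the assumption that each instance has been summarized into a single fixed-length resampled profile are all illustrative choices.

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def wavelet_features(profile, wavelet="db4", level=3):
    """Flatten the discrete wavelet decomposition of one speed-vs-distance
    profile (resampled to a fixed length) into a feature vector."""
    coeffs = pywt.wavedec(profile, wavelet, level=level)
    return np.concatenate(coeffs)

def evaluate(profiles, labels):
    """profiles: (n_instances, n_samples) array of resampled speed curves;
    labels: 1 if the road span holds a traffic signal, 0 otherwise.
    Returns the mean 5-fold cross-validated accuracy."""
    X = np.vstack([wavelet_features(p) for p in profiles])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, X, labels, cv=5).mean()
```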
Abstract. Data quality assessment of OpenStreetMap (OSM) data can be carried out by comparing it with reference spatial data (e.g. authoritative data). However, when reference data are lacking, the spatial accuracy remains unknown. The aim of this work is therefore to propose a framework for inferring the relative spatial accuracy of OSM data using machine learning methods. Our approach is based on the hypothesis that there is a relationship between extrinsic and intrinsic quality measures. Starting from a multi-criteria data matching, the process seeks to establish a statistical relationship between measures of the extrinsic quality of OSM (i.e. obtained by comparison with reference spatial data) and measures of its intrinsic quality (i.e. computed from the OSM features themselves) in order to estimate extrinsic quality on an unevaluated OSM dataset. The approach was applied to OSM buildings. On our dataset, the resulting regression model predicts the values of the extrinsic quality indicators with 30% less variance than an uninformed predictor.
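A minimal sketch of such an extrinsic-from-intrinsic regression is given below, assuming a pandas table with one row per matched building; the intrinsic descriptors listed, the target column name, and the random-forest regressor are hypothetical stand-ins for the paper's actual features and model.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_predict

# Hypothetical intrinsic descriptors computed from the OSM buildings
# themselves; the paper's actual feature set may differ.
INTRINSIC = ["n_versions", "n_contributors", "n_tags", "vertex_count", "area"]

def fit_quality_model(df: pd.DataFrame):
    """Regress an extrinsic quality indicator (here a hypothetical
    'positional_error' column measured against reference data) on the
    intrinsic OSM features, and report out-of-sample variance explained."""
    X, y = df[INTRINSIC], df["positional_error"]
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    y_hat = cross_val_predict(model, X, y, cv=5)
    return model.fit(X, y), r2_score(y, y_hat)
```

An out-of-sample R² of roughly 0.3 would correspond to the reported 30% variance reduction over an uninformed predictor.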
Abstract. Importing open spatial data into the OpenStreetMap (OSM) project is a practice that has existed since the beginning of the project. The rapid development and multiplication of collaborative mapping tools and open data have led to the growth of massive data imports into OSM. The goal of this paper is to study the evolution of these massive imports over time. We propose an approach in three steps: classification of the sources used to edit features in the OSM platform, including those massively imported; classification of modifications; and identification of evolution patterns. The approach mixes global analysis (i.e. sources and modifications are classified) and feature-based analysis (i.e. imported features are analyzed with respect to their evolution over time). The approach is applied to three OSM datasets chosen for their heterogeneity in terms of complexity, imports, and spatial and temporal characteristics. The results show sustained editing activity on imported features, with the ratio of geometry edits to semantic edits depending on the feature type, roads being the features that concentrate the most activity.
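The feature-based part of the analysis can be pictured as in the sketch below, which labels each edit of an imported feature by comparing consecutive versions from its OSM history; the two-way geometry/semantic split and the dictionary-based version representation are simplifying assumptions, not the paper's exact classification.

```python
def classify_edit(prev, curr):
    """Label one edit by comparing two consecutive versions of a feature
    (each a dict with 'geometry' and 'tags'). Illustrative only."""
    geom_changed = prev["geometry"] != curr["geometry"]
    tags_changed = prev["tags"] != curr["tags"]
    if geom_changed and tags_changed:
        return "geometry+semantic"
    if geom_changed:
        return "geometry"
    if tags_changed:
        return "semantic"
    return "no-op"

def edit_ratio(history):
    """Ratio of geometry edits to semantic edits over a feature's versions."""
    labels = [classify_edit(a, b) for a, b in zip(history, history[1:])]
    geom = sum("geometry" in lab for lab in labels)
    sem = sum("semantic" in lab for lab in labels)
    return geom / sem if sem else float("inf")
```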
Background: The spatio-temporal analysis of cases is a good way to study an epidemic, and the recent COVID-19 pandemic unfortunately generated a huge amount of data. But analysing this raw data, with for instance the addresses of the people who contracted COVID-19, raises privacy issues, and geomasking is necessary to preserve both people's privacy and the spatial accuracy required for analysis. This paper proposes different geomasking techniques adapted to this COVID-19 data. Methods: Different techniques are adapted from the literature and tested on a synthetic dataset mimicking the COVID-19 spatio-temporal spread in Paris and in a more rural nearby region. These techniques are assessed in terms of k-anonymity and cluster preservation. Results: Three adapted geomasking techniques are proposed: aggregation, bimodal Gaussian perturbation, and simulated crowding. All three can be useful in different use cases, but the bimodal Gaussian perturbation is the best technique overall, and simulated crowding is the most promising one, provided some improvements are introduced to avoid points with a low k-anonymity. Conclusions: It is possible to use geomasking techniques on the addresses of people who caught COVID-19 while preserving the important spatial patterns.
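For illustration, a minimal version of the bimodal Gaussian perturbation could look like the sketch below: each case is displaced in a uniformly random direction by a distance drawn from a two-component Gaussian mixture, so that true locations are rarely left almost unmasked. The displacement radii, standard deviation, and function name are assumptions, not the paper's calibrated values.

```python
import numpy as np

def bimodal_gaussian_mask(points, r_min=100.0, r_max=300.0, sigma=30.0, rng=None):
    """Geomask case locations by moving each point a random distance in a
    random direction, the distance being drawn from a two-component Gaussian
    mixture centred on r_min and r_max (in metres). Parameter values are
    illustrative. `points` is an (n, 2) array of projected x/y coordinates."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(points)
    centres = rng.choice([r_min, r_max], size=n)   # pick a mixture component
    radii = np.abs(rng.normal(centres, sigma))     # displacement distances
    angles = rng.uniform(0.0, 2.0 * np.pi, size=n) # random directions
    offsets = np.column_stack((radii * np.cos(angles), radii * np.sin(angles)))
    return points + offsets
```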
Abstract. In this paper, we describe a framework for finding a good-quality waste collection tour after a flood, without having to solve a complicated optimization problem from scratch in limited time. We model the computation of a waste collection tour as a capacitated routing problem, on the vertices or on the edges of a graph, with uncertain waste quantities and uncertain road availability. Multiple models have been conceived to manage uncertainty in routing problems, and we build on the ideas of discretizing the uncertain parameters and computing master solutions that can be adapted, in order to propose an original method for computing efficient solutions. We first introduce our model for the progressive removal of uncertainty, then outline our method for computing solutions: it first considers a low-dimensional set of random variables that govern the behaviour of the problem parameters, discretizes these variables, and computes a solution for each discrete point before the flood; it then uses these solutions as a basis to build operational solutions once there is enough information about the parameters of the routing problem. We then give computational tools to implement this method. We give a framework to compute the basis of solutions efficiently, by computing all the solutions simultaneously and sharing information (which can lead to good-quality solutions) between the different problems based on how close their parameters are, and we also describe how real solutions can be derived from this basis. Our main contributions are our model for the progressive removal of uncertainty, our multi-step method to compute efficient solutions, and our intrusive framework to compute solutions on the discrete grid of parameters.
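The two-phase idea (offline basis, online adaptation) can be summarized by the sketch below, where `solve` stands in for the capacitated routing solver and a nearest-grid-point lookup stands in for the adaptation step; both are deliberately simple assumptions, and the intrusive, information-sharing computation of the basis is not shown.

```python
import itertools
import numpy as np

def build_solution_basis(grid_axes, solve):
    """Offline phase: pre-compute one routing solution per point of a
    discrete grid over the low-dimensional random variables (e.g. flood
    extent, debris factor). `solve(theta)` is a hypothetical stand-in for
    the capacitated routing solver; it returns a tour for parameters theta."""
    return {theta: solve(np.array(theta))
            for theta in itertools.product(*grid_axes)}

def operational_solution(basis, theta_observed):
    """Online phase: once enough information about the parameters is
    available, adapt the pre-computed solution whose grid point is closest
    to the observed parameter vector."""
    key = min(basis, key=lambda t: np.linalg.norm(np.array(t) - theta_observed))
    return basis[key]
```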