OpenStreetMap (OSM) is a collaborative project collecting geographical data of the entire world. The level of detail and the quality of OSM data vary greatly across different regions and domains. In order to analyse such variations it is often necessary to investigate the history and evolution of the OSM data. The OpenStreetMap History Database (OSHDB) is a new data analysis tool for spatio-temporal geographical vector data. It is specifically optimized for working with OSM history data on a global scale and allows one to investigate the data evolution and user contributions in a flexible way. Benefits of the OSHDB include, for example, facilitating access to OSM history data as a research subject and assessing the quality of OSM data by using intrinsic measures. This article describes the requirements of such a system and the resulting technical implementation of the OSHDB: the OSHDB data model and its application programming interface.
Reliable techniques to generate accurate data sets of human built-up areas at national, regional, and global scales are a key factor in monitoring the implementation progress of the Sustainable Development Goals as defined by the United Nations. However, the scarce availability of accurate and up-to-date human settlement data remains a major challenge, e.g., for humanitarian organizations. In this paper, we investigated the complementary value of crowdsourcing and deep learning to fill the data gaps of existing Earth observation-based (EO) products. To this end, we propose a novel workflow to combine deep learning (DeepVGI) and crowdsourcing (MapSwipe). Our strategy for allocating classification tasks to deep learning or crowdsourcing is based on the confidence of the derived binary classification. We conducted case studies at three different sites located in Guatemala, Laos, and Malawi to evaluate the proposed workflow. Our study reveals that crowdsourcing and deep learning outperform existing EO-based approaches and products such as the Global Urban Footprint. Compared to a crowdsourcing-only approach, the combination increased the quality (measured by the Matthews correlation coefficient) of the generated human settlement maps by 3 to 5 percentage points. At the same time, it reduced the volunteer effort needed by at least 80 percent for all study sites. The study suggests that for the efficient creation of human settlement maps, we should rely on human skills when needed and on automated approaches when possible.
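The confidence-based allocation strategy described above can be sketched in a few lines. This is an illustrative sketch only, not the paper's actual implementation: the threshold value, function name, and data representation are all assumptions made for the example. The idea is simply that tiles the classifier labels with high confidence keep the automated label, while ambiguous tiles are routed to volunteers.

```python
def allocate_tasks(tiles, model_confidence, threshold=0.9):
    """Split classification tasks between the model and the crowd.

    tiles            -- list of tile identifiers to classify
    model_confidence -- dict mapping tile id -> classifier confidence in [0, 1]
    threshold        -- minimum confidence to accept the model's label directly
                        (the 0.9 default is an assumption for illustration)
    """
    machine, crowd = [], []
    for tile in tiles:
        if model_confidence[tile] >= threshold:
            machine.append(tile)   # confident: keep the automated classification
        else:
            crowd.append(tile)     # ambiguous: send to volunteers for labeling
    return machine, crowd


# Example: only the low-confidence tile is routed to volunteers.
machine, crowd = allocate_tasks(
    ["a", "b", "c"],
    {"a": 0.97, "b": 0.55, "c": 0.92},
)
# machine == ["a", "c"], crowd == ["b"]
```

Because most tiles in typical imagery are unambiguous, even a simple threshold rule like this routes only a small fraction of tasks to volunteers, which is the mechanism behind the reported reduction in volunteer effort.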
Interface theories are employed in the component-based design of concurrent systems. They often emerge as combinations of Interface Automata (IA) and Modal Transition Systems (MTS), e.g., Nyman et al.'s IOMTS, Bauer et al.'s MIO, Raclet et al.'s MI or our MIA. In this paper, we generalise MI to nondeterministic interfaces, for which we resolve the longstanding conflict between unspecified inputs being allowed in IA but forbidden in MTS. With this solution we achieve, in contrast to related work, an associative parallel composition, a compositional preorder, a conjunction on interfaces with dissimilar alphabets supporting perspective-based specifications, and a quotienting operator for decomposing nondeterministic specifications in a single theory.