OpenStreetMap (OSM) is an emerging area in computational science, and several issues in the quality assessment of OSM remain unexplored. First, researchers typically apply established assessment methods that compare OSM with an authoritative dataset; these methods are unsuitable for assessing OSM data quality when no authoritative data are available. In such a scenario, intrinsic quality indicators can be used instead. Second, no framework for data assessment specific to different geographic information system (GIS) domains is available. In this light, the current study presents an extension of the Quantum GIS (QGIS) Processing Toolbox, reusing existing functionality and writing new scripts to handle spatial data. This enables researchers to assess the completeness of spatial data using intrinsic indicators. The study also proposes a heuristic approach to test the road navigability of OSM data. The developed models are applied to OSM data for Punjab (India). The results suggest that the OSM project in Punjab is progressing at a slow pace and that contributor motivation is needed to enhance the fitness of the data. It is concluded that the scripts, which provide an intuitive method for assessing OSM data against quality indicators, can easily be used to evaluate the fitness-for-use of data for any region.
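To make the idea of an intrinsic quality indicator concrete, the sketch below computes one simple example: attribute completeness, the share of road features that carry a name tag. This is a minimal pure-Python illustration under assumed data, not the paper's actual QGIS scripts; the feature dictionaries and the `attribute_completeness` helper are invented for demonstration.

```python
# Hypothetical illustration of an intrinsic quality indicator:
# "attribute completeness" = fraction of road features with a name tag.
# The feature dicts below are invented sample data, not real OSM output.

def attribute_completeness(features, tag="name"):
    """Return the fraction of features whose tag dict contains `tag`."""
    if not features:
        return 0.0
    named = sum(1 for f in features if tag in f.get("tags", {}))
    return named / len(features)

roads = [
    {"id": 1, "tags": {"highway": "primary", "name": "GT Road"}},
    {"id": 2, "tags": {"highway": "residential"}},            # unnamed
    {"id": 3, "tags": {"highway": "secondary", "name": "Mall Road"}},
    {"id": 4, "tags": {"highway": "track"}},                  # unnamed
]

print(attribute_completeness(roads))  # 0.5
```

An indicator like this needs no authoritative reference dataset, which is precisely what makes it "intrinsic" in the sense used above.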
OpenStreetMap (OSM), based on collaborative mapping, has become a subject of great interest to the academic community, resulting in a considerable body of literature produced by many researchers. In this paper, we use Latent Semantic Analysis (LSA) to help identify the emerging research trends in OSM. An extensive corpus of 485 academic abstracts of papers published during the period 2007-2016 was used. Five core research areas and fifty research trends were identified in this study. In addition, potential future research directions have been provided to aid geospatial information scientists, technologists and researchers in undertaking future OSM research.
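The study's pipeline begins by turning each abstract into a weighted term vector. The sketch below illustrates only that preprocessing stage (TF-IDF weighting and cosine similarity) in dependency-free Python; full LSA would additionally apply truncated SVD to the resulting term-document matrix. The three toy documents are invented for illustration.

```python
# Simplified sketch of the term-weighting stage that precedes LSA.
# Full LSA applies truncated SVD to this TF-IDF matrix; here we stop at
# TF-IDF and cosine similarity to keep the example dependency-free.
import math
from collections import Counter

docs = [
    "openstreetmap data quality assessment",
    "openstreetmap road navigability assessment",
    "software effort estimation model",
]

tokenised = [d.split() for d in docs]
vocab = sorted({w for doc in tokenised for w in doc})
df = Counter(w for doc in tokenised for w in set(doc))  # document frequency

def tfidf(doc):
    tf = Counter(doc)
    return [tf[w] * math.log(len(docs) / df[w]) for w in vocab]

vectors = [tfidf(d) for d in tokenised]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# The two OSM abstracts are more similar to each other than either is
# to the effort-estimation abstract.
print(cosine(vectors[0], vectors[1]) > cosine(vectors[0], vectors[2]))
```

Grouping abstracts by similarity in this (SVD-reduced) space is what lets the study surface core research areas and trends from the 485-paper corpus.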
The meaning and purpose of the web have been changing and evolving day by day. Web 2.0 encouraged more contribution by end users. This movement enabled revolutionary methods of sharing and computing data through crowdsourcing, such as OpenStreetMap, which some researchers have called "the wikification of maps". When crowdsourcing collects huge volumes of data from a general public with varying levels of mapping experience, the researcher's focus should be on analysing the data rather than collecting it. Researchers have assessed the quality of OpenStreetMap data by comparing it with proprietary data or data from governmental mapping agencies. This study reviews the research on the assessment of OpenStreetMap data and discusses future directions. General Terms: Assessment, OpenStreetMap
OpenStreetMap (OSM) produces a huge amount of labeled spatial data, but its quality has always been a deep concern. Numerous quality issues have been discussed in the vast literature, while the fitness of OSM for road navigability is only partly explored. Navigability depends on logical consistency, which focuses on the existence of logical contradictions within a data set. Researchers have discussed the insufficiency of established methods and the lack of a computational paradigm to assess the quality of OSM data. To address these research gaps, the current work extends the capabilities of the Quantum GIS Processing Toolbox for the assessment of spatial data. The models and scripts developed are able to assess logical consistency based on geographical topological consistency, semantic information, and morphological consistency. Established and proxy indicators are selected for measuring the logical consistency of OSM data for navigability. For empirical validation, OSM data for Punjab are compared with authoritative data from HERE (proprietary) and the Remote Sensing Centre (RSC), Punjab, India. The results conclude that even proprietary road data sets are not free from logical inconsistencies, and that data contributed by the masses are credible and navigable. OSM produced better results than the RSC, but needs more crowd contributions to improve its quality.

KEYWORDS: Logical consistency, OpenStreetMap, PyQGIS, QGIS

INTRODUCTION

The technological stack of Web 2.0 has enabled the voluminous production of crowdsourced data and, in particular, geospatial data. Volunteered geographic information (VGI), supported by the technological stack of Web 2.0 and contributed by the crowd, offers an alternative, intuitive method for collecting geographic information through volunteering at little cost. VGI produces highly diversified, elaborated, topical, and contextualized spatial data.
Probably one of the best-known examples of VGI is the OpenStreetMap (OSM) project, as OSM data are free to use and reduce information gaps in the availability of recent map data. During the last decade, OSM has gained maturity and numerous articles have been published. Furthermore, a research drift toward analysis and fitness for use of OSM in various application domains has been witnessed (Sehra, Singh, & Rai, 2013). The OSM project produces a huge amount of labeled spatial data contributed by users. Barrington-Leigh and Millard-Ball (2017) stated that more than 40% of countries have 83% complete OSM data (a fully mapped street network), among them several developed countries.
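One topological-consistency check relevant to the navigability discussed above is the detection of "dangling" road endpoints: segment ends that connect to no other segment and therefore break routing. The sketch below is a minimal pure-Python illustration on invented coordinates; the actual checks described in the text run over OSM ways inside QGIS via PyQGIS.

```python
# Hedged sketch of a topological-consistency check: find "dangling"
# road endpoints, i.e. segment ends that touch no other segment.
# Coordinates are invented sample data, not real OSM geometry.
from collections import Counter

# Each road segment is a pair of (x, y) endpoints.
segments = [
    ((0, 0), (1, 0)),
    ((1, 0), (1, 1)),
    ((1, 1), (0, 0)),   # closed loop: no dangles among these three
    ((1, 1), (2, 2)),   # spur: (2, 2) touches nothing else -> dangle
]

# Count how many segments meet at each node.
degree = Counter()
for a, b in segments:
    degree[a] += 1
    degree[b] += 1

# Nodes of degree 1 are candidate dangles (real dead ends would be
# whitelisted in a production check).
dangles = [node for node, d in degree.items() if d == 1]
print(dangles)  # [(2, 2)]
```

In practice a small snapping tolerance would be applied before counting degrees, since volunteer-digitized endpoints rarely coincide exactly.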
Software effort estimation requires high accuracy, but accurate estimations are difficult to achieve. Increasingly, data mining is used to improve an organization's software process quality, e.g. the accuracy of effort estimations. A large number of different method combinations exist for software effort estimation, and selecting the most suitable combination is the subject of this paper. In this study, three simple preprocessors are considered (none, norm, log), and effort is measured using the COCOMO model. The results obtained from the different preprocessors are then compared, and the norm preprocessor proves to be more accurate than the others.
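To illustrate how a preprocessor interacts with the COCOMO model (the study above found norm most accurate; the log case is simply the easiest to show), recall that basic COCOMO assumes effort = a * KLOC^b, so a log transform linearizes it to log(effort) = log(a) + b*log(KLOC), which ordinary least squares can fit directly. The sketch below uses invented, noise-free project data, so the fit recovers the textbook organic-mode coefficients exactly.

```python
# Sketch of why a log preprocessor suits COCOMO-style power-law models.
# Basic COCOMO: effort = a * KLOC**b; in log space this is linear, so
# simple least squares recovers a and b. Project data are invented.
import math

kloc   = [2.0, 8.0, 32.0, 128.0]
effort = [2.4 * k ** 1.05 for k in kloc]   # exact "organic mode" data

# Log preprocessor: transform both axes.
xs = [math.log(k) for k in kloc]
ys = [math.log(e) for e in effort]

# Closed-form simple linear regression on the transformed data.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
a = math.exp(my - b * mx)

print(round(a, 2), round(b, 2))  # 2.4 1.05
```

With noisy real-world data the three preprocessors transform the error structure differently, which is why their estimation accuracy can be compared empirically as the study does.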