All over the world, organizations are increasingly considering the adoption of open source software and open data. In the geospatial domain, this is no different, and the last few decades have seen significant advances in this regard. We review the current state of open source geospatial software, focusing on the Open Source Geospatial Foundation (OSGeo) software ecosystem and its communities, as well as three kinds of open geospatial data (collaboratively contributed, authoritative and scientific). The current state confirms that openness has changed the way in which geospatial data are collected, processed, analyzed, and visualized. A perspective on future developments, informed by responses from professionals in key organizations in the global geospatial community, suggests that open source geospatial software and open geospatial data are likely to have an even more profound impact in the future.
When searching for spatial data resources, users, whether experts or non-experts in the geoinformation field, are expected to know what type of spatial data resource they need and in which clearinghouse or geoportal to search. Even when the search succeeds, they are still left to judge fitness for use on the basis of complex metadata, in the few cases where such metadata exist. To aid this search, we propose GUESS, a system for guided search for spatial data resources that enhances current search engines with decision intelligence on fitness for use. GUESS works with profiles that contain data about users and about spatial data resources. From a free-form search request, GUESS identifies the spatial extent and the application domain and searches for spatial data resources that comply with the quality requirements of that domain. As a result, GUESS recommends the spatial data resource that best fits the user's needs. We illustrate the capabilities of the proposed system on a request by a fictional user for a spatial data resource that is frequently present in the geoinformation world.
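The profile-based matching described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of the general idea (matching a user profile against resource profiles by spatial extent, application domain and quality thresholds); all class names, fields and the scoring rule are assumptions for the sketch, not taken from the GUESS paper.

```python
from dataclasses import dataclass, field

# Hypothetical profiles: GUESS stores data about users and about
# spatial data resources; the fields below are illustrative only.
@dataclass
class UserProfile:
    domain: str        # application domain parsed from the free-form request
    bbox: tuple        # requested spatial extent (minx, miny, maxx, maxy)
    min_quality: dict = field(default_factory=dict)  # e.g. {"positional_accuracy_m": 1.0}

@dataclass
class ResourceProfile:
    name: str
    domain: str
    bbox: tuple
    quality: dict      # reported quality value per attribute (lower is better here)

def overlaps(a, b):
    """True if two (minx, miny, maxx, maxy) boxes intersect."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def fitness(user, res):
    """Score a resource against a user profile; None means unusable."""
    if res.domain != user.domain or not overlaps(user.bbox, res.bbox):
        return None
    score = 0.0
    for attr, threshold in user.min_quality.items():
        value = res.quality.get(attr)
        if value is None or value > threshold:   # worse than required
            return None
        score += threshold - value               # reward margin below the threshold
    return score

def recommend(user, resources):
    """Return the resource that best fits the user's needs, or None."""
    usable = [(fitness(user, r), r) for r in resources]
    usable = [(s, r) for s, r in usable if s is not None]
    return max(usable, key=lambda sr: sr[0])[1] if usable else None
```

A real system would of course derive the user profile from natural-language parsing and consult standardized metadata (e.g. ISO 19157 quality elements) rather than flat dictionaries, but the filter-then-rank structure would be similar.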
This document provides background for, and summarizes the main takeaways of, a workshop held virtually to kick off the development of community guidelines for consistently curating and representing dataset quality information in line with the FAIR principles.
FAIR, which stands for Findable, Accessible, Interoperable and Reusable, denotes the main principles adopted for sharing scientific data across communities. Implementing the FAIR principles in publishing increases the value of digital resources and their reuse by humans as well as machines. Introducing FAIR practices to the geospatial domain is especially relevant for foundational geospatial data, such as precise positioning data. Within the next five years, Global Navigation Satellite Systems (GNSS), with corrections delivered over the internet or via satellite communications, will permit national coverage of positioning services with real-time accuracy of several centimetres or better. However, implementing the FAIR principles is not yet common practice in the geospatial domain. Dozens of standards are available for defining and sharing geospatial data, including the ISO 19100 series of standards, OGC specifications, and several community profiles and best practices. In most cases, however, these standards fall short of ensuring the FAIR distribution of geospatial resources. As our preliminary findings show, current geodetic metadata and data are not yet fully FAIR, and data discovery and access remain very challenging. In this paper we discuss the concept of FAIR and its meaning for geodetic data, explore the needs of precise positioning users and their requirements for metadata, and present preliminary results on the FAIRness of current geodetic standards.
Photogrammetric documentation can provide a sound database for the needs of architectural heritage preservation. However, most photogrammetric documentation is not used in subsequent architectural heritage projects, owing to a lack of knowledge of its accuracy. In addition, only a few studies rigorously analyze the requirements for photogrammetric documentation of architectural heritage; in particular, requirements focusing on the geometry of the models generated by fully digital photogrammetric processes are missing. Considering these needs, this paper presents a procedure for architectural heritage documentation with photogrammetric techniques, based on a review of existing standards for architectural heritage documentation. The proposed data product specification was elaborated in conformance with ISO 19131 recommendations. We demonstrate the procedure with two case studies in the context of Brazilian architectural heritage documentation. Quality analysis of the produced models was performed using ISO 19157 elements such as positional accuracy, logical consistency and completeness, and the models met the requirements. Our results confirm that the proposed requirements for photogrammetric documentation are viable.
Open Access. Remote Sens. 2015, 7, 13338.
Knowledge about the quality of data and metadata is important to support informed decisions on the (re)use of individual datasets and is an essential part of the ecosystem that supports open science. Quality assessments reflect the reliability and usability of data. They need to be consistently curated, fully traceable, and adequately documented, as these properties are crucial for sound decision- and policy-making efforts that rely on data. Quality assessments also need to be consistently represented and readily integrated across systems and tools to allow improved sharing of quality information at the dataset level for individual quality attributes or dimensions. Although the need for assessing the quality of data and associated information is well recognized, methodologies for an evaluation framework and for presenting the resulting quality information to end users may not have been comprehensively addressed within and across disciplines. Global interdisciplinary domain experts have come together to systematically explore the needs, challenges and impacts of consistently curating and representing quality information through the entire lifecycle of a dataset.