2017
DOI: 10.3390/data2040035
Earth Observation for Citizen Science Validation, or Citizen Science for Earth Observation Validation? The Role of Quality Assurance of Volunteered Observations

Abstract: Environmental policy involving citizen science (CS) is of growing interest. In support of this open data stream of information, validation or quality assessment of the CS geo-located data relative to their appropriate usage for evidence-based policy making needs a flexible and easily adaptable data curation process ensuring transparency. Addressing these needs, this paper describes an approach for automatic quality assurance as proposed by the Citizen OBservatory WEB (COBWEB) FP7 project. This approach is based upon a w…

Cited by 8 publications (6 citation statements)
References 35 publications
“…All 530 observations elevated to research grade through the iNaturalist crowdsourcing approach were accurate, which supports other studies that concluded that crowdsourcing is the best method to identify errors in biodiversity data (Goodchild and Li 2012). Other studies have assessed the accuracy of CS-verified photographic data and found similar but slightly lower accuracy for species identification (Swanson et al. 2015; Leibovici et al. 2017). For example, Swanson et al. (2015) evaluated the Snapshot Serengeti volunteers' identifications of 48 different species and species groups (n = 4,149) of African wildlife, which resulted in 97% accuracy for species identifications, with an accuracy rate that varied by species (Swanson et al. 2015, 2016).…”
Section: Verification
supporting
confidence: 78%
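The crowdsourced verification described above, where an observation is elevated once enough volunteers agree, and accuracy is then measured per species against expert identifications, can be illustrated with a minimal sketch. This is not the iNaturalist or Snapshot Serengeti pipeline; the labels and observations below are hypothetical toy data:

```python
from collections import Counter, defaultdict

# Hypothetical volunteer classifications: observation id -> labels submitted
# by different volunteers (toy data for illustration only).
volunteer_labels = {
    "obs-001": ["zebra", "zebra", "zebra", "wildebeest"],
    "obs-002": ["impala", "gazelle", "impala"],
    "obs-003": ["buffalo", "buffalo"],
}

# Hypothetical expert ("gold standard") identifications for the same observations.
expert_labels = {"obs-001": "zebra", "obs-002": "impala", "obs-003": "wildebeest"}

def consensus(labels):
    """Majority-vote consensus over volunteer labels."""
    return Counter(labels).most_common(1)[0][0]

# Overall accuracy of the consensus, plus a per-species breakdown, which is
# how an accuracy rate that "varies by species" would be observed.
per_species = defaultdict(lambda: [0, 0])  # species -> [correct, total]
correct = 0
for obs_id, labels in volunteer_labels.items():
    guess, truth = consensus(labels), expert_labels[obs_id]
    per_species[truth][1] += 1
    if guess == truth:
        per_species[truth][0] += 1
        correct += 1

print(f"overall accuracy: {correct / len(volunteer_labels):.0%}")
for species, (ok, total) in sorted(per_species.items()):
    print(f"  {species}: {ok}/{total}")
```

Real projects weight voters and require a minimum number of agreeing classifications before accepting a consensus; the majority vote here is the simplest version of that idea.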
“…The validation process is an important step that should be properly handled. Citizen science approaches, complemented with aerial and/or on-the-ground imagery, can be used to validate the identification of green areas as well as to identify private and public space from LUCAS database pictures [77,78]. This could help to build a tool able to efficiently and accurately classify and differentiate private green space from public green space.…”
mentioning
confidence: 99%
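A minimal sketch of the kind of spatial cross-check this passage describes, assuming geopandas and shapely are available; the report coordinates and the reference polygons standing in for an authoritative layer (e.g., parcels interpreted from LUCAS pictures) are placeholders, not data from the cited studies:

```python
import geopandas as gpd
from shapely.geometry import Point, box

# Hypothetical citizen-science reports of green areas (toy coordinates only).
reports = gpd.GeoDataFrame(
    {"report_id": [1, 2, 3]},
    geometry=[Point(4.350, 50.850), Point(4.360, 50.860), Point(4.500, 50.900)],
    crs="EPSG:4326",
)

# Toy reference polygons with a public/private attribute, standing in for an
# authoritative green-space layer.
reference = gpd.GeoDataFrame(
    {"access": ["public", "private"]},
    geometry=[box(4.34, 50.84, 4.37, 50.87), box(4.39, 50.79, 4.41, 50.81)],
    crs="EPSG:4326",
)

# Spatial join: each report inherits the attributes of the polygon it falls in;
# reports matching no polygon are flagged for manual review.
validated = gpd.sjoin(reports, reference, how="left", predicate="within")
print(validated[["report_id", "access"]])
unmatched = validated[validated["index_right"].isna()]
print(f"{len(unmatched)} of {len(reports)} reports need manual review")
```

The join direction matters for the stated goal: validating CS reports against the reference layer checks the reports, while the reverse join would instead measure which reference polygons have been confirmed by volunteers.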
“…As OSM data still lack completeness for many of their object attributes in many places, there is an urgent need to find alternatives for filling this information gap. Among potential alternatives, citizen science campaigns can be a promising approach to identifying public/private areas using aerial and/or on-the-ground images available in public databases (e.g., Flickr) or the Land Use and Coverage Area frame Survey (LUCAS) [34,35]. In addition, patterns of mobile phone data (e.g., the number and pattern over time of mobile phone users in a dedicated space) can also help detect whether a given green area is publicly accessible or not.…”
confidence: 99%
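The attribute-completeness gap noted in this statement can be quantified directly against OSM. A minimal sketch, assuming osmnx ≥ 1.3 (for features_from_place) and using an illustrative place name, counts how many park features carry an access tag at all:

```python
import osmnx as ox

# Fetch OSM green-space features for a sample area (place name is illustrative).
parks = ox.features_from_place("Ghent, Belgium", tags={"leisure": "park"})

# How often is the 'access' attribute (public/private/...) actually tagged?
# A missing column means no feature in the area carries the tag at all.
tagged = int(parks["access"].notna().sum()) if "access" in parks.columns else 0
print(f"'access' tagged on {tagged} of {len(parks)} park features "
      f"({tagged / len(parks):.0%})")
```

The untagged share is exactly the population of features that a citizen science campaign of the kind proposed here would need to resolve.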