2019
DOI: 10.1002/eap.1849

Making inference with messy (citizen science) data: when are data accurate enough and how can they be improved?

Abstract: Measurement or observation error is common in ecological data: as citizen scientists and automated algorithms play larger roles processing growing volumes of data to address problems at large scales, concerns about data quality and strategies for improving it have received greater focus. However, practical guidance pertaining to fundamental data quality questions for data users or managers—how accurate do data need to be and what is the best or most efficient way to improve it?—remains limited. We present a ge…

Cited by 49 publications (45 citation statements)
References 54 publications (133 reference statements)
“…Overall, local colonization appears to be slightly more likely than long distance colonization, and local colonization may lead to more stable growth because persistence appears to be more likely with occupied neighboring cells. Based upon previous work (Clare et al 2019, C. Anhalt-Depies unpublished data), the classification accuracy of black bears within images is believed to be very high (> 99% accuracy). Although the data used for fitting may include some false positives, we ignore such error here given previous evidence that its incidence is not likely to substantially skew model predictions and estimates.…”
Section: Appendix S2 Model Fitting Details For Bear Forecasting (Cas…
confidence: 87%
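The statement above argues that, with image-classification accuracy above 99%, residual false positives can safely be ignored when fitting the model. One quick way to gauge that claim for a given dataset is to simulate detection histories with and without a small false-positive rate and compare the resulting naive occupancy estimates. The sketch below does this in Python; all parameter values (occupancy, detection, and false-positive probabilities) are illustrative assumptions, not values from the cited work.

```python
# Minimal sketch: how much does a small false-positive rate inflate a
# naive occupancy estimate? All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

n_sites, n_surveys = 500, 8
psi = 0.4      # assumed true occupancy probability
p_det = 0.3    # assumed per-survey detection probability at occupied sites
p_fp = 0.001   # assumed per-survey false-positive probability from misclassified images

occupied = rng.random(n_sites) < psi
true_det = (rng.random((n_sites, n_surveys)) < p_det) & occupied[:, None]
false_det = rng.random((n_sites, n_surveys)) < p_fp
observed = true_det | false_det

# Naive occupancy: fraction of sites with at least one detection.
naive_clean = true_det.any(axis=1).mean()
naive_noisy = observed.any(axis=1).mean()
print(f"naive occupancy without false positives: {naive_clean:.3f}")
print(f"naive occupancy with false positives:    {naive_noisy:.3f}")
```

Comparing the two printed values for realistic survey lengths gives a rough sense of whether a given error rate is small enough to ignore, which is the judgment the statement above is making.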
“…For example, recent work (C. Anhalt-Depies, unpublished data) has suggested that camera host classifications of striped skunks are sufficiently accurate (> 97%) that there is little benefit associated with subjecting these images to crowdsourcing, while gray foxes are inaccurately classified by both camera hosts and via crowdsourcing, and may generally require expert review. Similarly, while Clare et al (2019) found that coyotes were relatively commonly misclassified via crowdsourcing, follow up work suggests that both camera hosts and crowdsourcing more accurately identify coyotes than previously recognized (C. Anhalt-Depies, unpublished data).…”
Section: Acknowledgments
confidence: 90%
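The statement above implies a triage workflow: accept camera-host classifications for species they identify reliably, fall back on crowdsourcing where consensus is accurate enough, and reserve expert review for species that both approaches misclassify. A minimal sketch of such a routing rule follows; the species names come from the statement, but the accuracy values and the acceptance threshold are hypothetical placeholders, not estimates from the cited studies.

```python
# Illustrative triage rule for image review, with made-up accuracy values.
HOST_ACCURACY = {          # assumed accuracy of camera-host classifications
    "striped skunk": 0.97,
    "black bear": 0.99,
    "gray fox": 0.70,
    "coyote": 0.90,
}
CROWD_ACCURACY = {         # assumed accuracy of crowdsourced consensus
    "striped skunk": 0.96,
    "black bear": 0.99,
    "gray fox": 0.75,
    "coyote": 0.92,
}
ACCEPT_THRESHOLD = 0.95    # assumed minimum accuracy to accept without further review

def review_route(species: str) -> str:
    """Return the cheapest review step judged sufficient for this species."""
    if HOST_ACCURACY.get(species, 0.0) >= ACCEPT_THRESHOLD:
        return "accept host classification"
    if CROWD_ACCURACY.get(species, 0.0) >= ACCEPT_THRESHOLD:
        return "send to crowdsourcing"
    return "send to expert review"

for sp in HOST_ACCURACY:
    print(f"{sp:14s} -> {review_route(sp)}")
```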
“…These platforms connect millions of collaborators all over the world. However, one of the main concerns when making statistical inferences using data obtained via crowdsourcing is the inherent presence of misclassification or measurement errors resulting from participants’ variable skill levels and abilities (Bachrach et al., 2012; Bird et al., 2014; Clare et al., 2019; Mengersen et al., 2017; Venanzi et al., 2014). A second concern relates to spatial dependence in the data, which has been found to produce incorrect estimates in species abundance models when it is not accounted for (e.g.…”
Section: Introduction
confidence: 99%
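The first concern raised above, misclassification driven by variable participant skill, can be made concrete with a small simulation: each annotator labels every image correctly with a probability equal to their individual accuracy, and labels are aggregated by majority vote. The sketch below uses assumed prevalence and accuracy values purely for illustration; it does not touch the second concern, spatial dependence.

```python
# Minimal sketch: crowdsourced binary labels from annotators of varying skill,
# aggregated by majority vote. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(1)

n_images = 10_000
truth = rng.random(n_images) < 0.3            # assumed prevalence of the species
skill = np.array([0.95, 0.85, 0.65])          # assumed per-annotator accuracy

# Each annotator reports the truth with probability equal to their skill,
# and the opposite label otherwise.
correct = rng.random((skill.size, n_images)) < skill[:, None]
labels = np.where(correct, truth, ~truth)

majority = labels.sum(axis=0) > skill.size / 2
print(f"majority-vote error rate: {(majority != truth).mean():.3f}")
for s, lab in zip(skill, labels):
    print(f"annotator accuracy {s:.2f}: observed label error {(lab != truth).mean():.3f}")
```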
“…Within the occupancy modelling framework, several authors have approached the issue of bias correction by means of performance measures, especially the false‐positive rates (Chambert et al., 2015; Clare et al., 2019). A recent extension suggested by Pacifici et al.…”
Section: Introduction
confidence: 99%
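The bias corrections referenced above build on occupancy models that allow detections to arise at unoccupied sites with a small false-positive probability. In a common formulation, detections occur with probability p11 at occupied sites and p10 at unoccupied sites, and the site likelihood mixes the two over the latent occupancy state. The sketch below simulates data under that model and recovers the parameters by maximum likelihood; the simulation settings and starting values are assumptions chosen for illustration, not values from the cited papers.

```python
# Minimal sketch of a false-positive occupancy likelihood.
# Each site i has J binary detections y_ij; detections occur with probability
# p11 at occupied sites and p10 (false positives) at unoccupied sites.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, logit

rng = np.random.default_rng(0)
n_sites, n_surveys = 400, 6
psi_true, p11_true, p10_true = 0.5, 0.4, 0.03   # assumed true parameter values

z = rng.random(n_sites) < psi_true
p = np.where(z[:, None], p11_true, p10_true)
y = rng.random((n_sites, n_surveys)) < p         # simulated detection histories

def negloglik(theta):
    """Negative log-likelihood with parameters on the logit scale."""
    psi, p11, p10 = expit(theta)
    site_occ = psi * np.prod(np.where(y, p11, 1.0 - p11), axis=1)
    site_unocc = (1.0 - psi) * np.prod(np.where(y, p10, 1.0 - p10), axis=1)
    return -np.sum(np.log(site_occ + site_unocc))

# The model is only identifiable up to the constraint p11 > p10; starting
# values on that side of the boundary steer the optimizer toward that mode.
start = logit(np.array([0.5, 0.6, 0.1]))
fit = minimize(negloglik, x0=start, method="Nelder-Mead")
print("estimated (psi, p11, p10):", np.round(expit(fit.x), 3))
```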