Measurement or observation error is common in ecological data: as citizen scientists and automated algorithms play larger roles processing growing volumes of data to address problems at large scales, concerns about data quality and strategies for improving it have received greater focus. However, practical guidance on fundamental data quality questions for data users or managers—how accurate do data need to be, and what is the best or most efficient way to improve them?—remains limited. We present a generalizable framework for evaluating data quality and identifying remediation practices, and demonstrate the framework using trail camera images classified through crowdsourcing to determine acceptable rates of misclassification and identify optimal remediation strategies for analysis with occupancy models. We used expert validation to estimate baseline classification accuracy and simulation to determine the sensitivity of two occupancy estimators (standard and false‐positive extensions) to different empirical misclassification rates. We used regression techniques to identify important predictors of misclassification and prioritize remediation strategies. More than 93% of images were accurately classified, but simulation results suggested that most species were not identified accurately enough to permit distribution estimation at our predefined threshold for accuracy (<5% absolute bias). A model developed to screen incorrect classifications predicted misclassified images with >97% accuracy: enough to meet our accuracy threshold. Occupancy models that accounted for false‐positive error provided still more accurate inference, even at high rates of misclassification (30%). As simulation suggested occupancy models were less sensitive to additional false‐negative error, screening models or fitting occupancy models that account for false‐positive error emerged as efficient data remediation solutions.
Combining simulation‐based sensitivity analysis with empirical estimation of baseline error and its variability allows users and managers of potentially error‐prone data to identify and fix problematic data more efficiently. It may be particularly helpful for “big data” efforts dependent upon citizen scientists or automated classification algorithms with many downstream users, but given the ubiquity of observation or measurement error, even conventional studies may benefit from focusing more attention upon data quality.
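The sensitivity analysis described above can be illustrated with a minimal simulation. This sketch is hypothetical and not the authors' code: the site/visit counts, detection probabilities, and the naive estimator are all illustrative assumptions chosen only to show how false-positive misclassification biases occupancy inference past a 5% absolute-bias threshold.

```python
import random

def simulate_detections(n_sites, n_visits, psi, p, fp_rate, rng):
    """Simulate site-by-visit detection histories.
    psi: true occupancy probability; p: per-visit detection probability
    at occupied sites; fp_rate: probability an unoccupied site yields a
    misclassified detection (false positive) on a given visit."""
    history = []
    for _ in range(n_sites):
        occupied = rng.random() < psi
        detect_prob = p if occupied else fp_rate
        history.append([rng.random() < detect_prob for _ in range(n_visits)])
    return history

def naive_occupancy(history):
    """Proportion of sites with at least one detection: a naive estimator
    that ignores both false-negative and false-positive error."""
    return sum(any(visits) for visits in history) / len(history)

rng = random.Random(42)
true_psi = 0.4  # assumed true occupancy probability
clean = simulate_detections(5000, 4, true_psi, p=0.5, fp_rate=0.00, rng=rng)
noisy = simulate_detections(5000, 4, true_psi, p=0.5, fp_rate=0.05, rng=rng)

# False negatives alone bias the naive estimate slightly downward;
# a 5% false-positive rate inflates it well past a 5% absolute-bias threshold.
print("true psi:", true_psi)
print("naive estimate, no false positives:", round(naive_occupancy(clean), 3))
print("naive estimate, 5% false positives:", round(naive_occupancy(noisy), 3))
```

In practice the naive estimator would be replaced by a likelihood-based occupancy model (e.g., a standard or false-positive-extension estimator), but even this toy comparison shows why baseline misclassification rates must be estimated empirically before deciding whether remediation is needed.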
Declining participation in hunting, especially among young adult hunters, affects the ability of state and federal agencies to achieve goals for wildlife management and decreases revenue for conservation. For wildlife agencies hoping to engage diverse audiences in hunter recruitment, retention, and reactivation (R3) efforts, university settings provide unique advantages: they contain millions of young adults who are developmentally primed to explore new activities, and they cultivate a social atmosphere where new identities can flourish. From 2018 to 2020, we surveyed 17,203 undergraduate students at public universities across 22 states in the United States to explore R3 potential on college campuses and assess key demographic, social, and cognitive correlates of past and intended future hunting behavior. After weighting to account for demographic differences between our sample and the larger student population, 29% of students across all states had hunted in the past. Students with previous hunting experience were likely to be white, male, from rural areas or hunting families, and pursuing degrees related to natural resources. When we grouped students into 1 of 4 categories with respect to hunting (i.e., non‐hunters [50%], potential hunters [22%], active hunters [26%], and lapsed hunters [3%]), comparisons revealed differences based on demographic attributes, beliefs, attitudes, and behaviors. Compared to active hunters, potential hunters were more likely to be females or racial and ethnic minorities, and less likely to experience social support for hunting. Potential hunters valued game meat and altruistic reasons for hunting, but they faced unique constraints due to lack of hunting knowledge and skills. Findings provide insights for marketing and programming designed to achieve R3 objectives with a focus on university students. © 2021 The Wildlife Society.
With the accelerating pace of global change, it is imperative that we obtain rapid inventories of the status and distribution of wildlife for ecological inferences and conservation planning. To address this challenge, we launched the SNAPSHOT USA project, a collaborative survey of terrestrial wildlife populations using camera traps across the United States. For our first annual survey, we compiled data across all 50 states during a 14‐week period (17 August–24 November of 2019). We sampled wildlife at 1,509 camera trap sites from 110 camera trap arrays covering 12 different ecoregions across four development zones. This effort resulted in 166,036 unique detections of 83 species of mammals and 17 species of birds. All images were processed through the Smithsonian’s eMammal camera trap data repository and included an expert review phase to ensure taxonomic accuracy, resulting in each picture being reviewed at least twice. The results represent a timely and standardized camera trap survey of the United States. All of the 2019 survey data are made available herein. We are currently repeating surveys in fall 2020, opening the opportunity for other institutions and cooperators to expand coverage of the urban–wild gradients and ecophysiographic regions of the country. Future data will be available as the database is updated at eMammal.si.edu/snapshot‐usa, as will future data paper submissions. These data will be useful for local and macroecological research, including the examination of community assembly, effects of environmental and anthropogenic landscape variables, effects of fragmentation and extinction debt dynamics, as well as species‐specific population dynamics and conservation action plans. There are no copyright restrictions; please cite this paper when using the data for publication.
We provide program managers insight into considerations for launching and running a large-scale, long-term citizen science project, using the Snapshot Wisconsin trail-camera project as a case study. Many citizen science projects are undertaken with a "learn as you go" approach, so there is room to better prepare program managers from the outset. We provide a comprehensive list of components making up citizen science projects, and discuss capacity needs for each component. We then quantify staff time needed throughout the project, based on our own experiences managing a long-term citizen science project with >1,000 participants. We show that total staff time and staff time devoted to certain project components vary markedly among 3 project phases: planning, growth, and maintenance. We recommend planning for 5.5 staff positions to maintain a long-term project serving a few hundred volunteers or more. The illustrated concepts can be applied by any person or group developing a volunteer-based project to prepare for logistic and funding needs across a project's lifespan. Program managers must remember that people form the backbone of any citizen science project, and the success or failure of such projects depends in large part on the user experience of volunteers. © 2019 The Wildlife Society.