Citizen science has expanded rapidly over the past decades. Yet defining citizen science and its boundaries has remained a challenge, and this is reflected in the literature, for example in the proliferation of typologies and definitions. There is a need to identify areas of agreement and disagreement within the citizen science practitioner community on what should be considered a citizen science activity. This paper describes the development and results of a survey that examined this issue through the use of vignettes: short case descriptions of an activity, with respondents asked to rate the activity on a scale from ‘not citizen science’ (0%) to ‘citizen science’ (100%). The survey included 50 vignettes, of which five were developed as clear cases of non-citizen-science activities, five as widely accepted citizen science activities, and the remainder addressing 10 factors and 61 sub-factors that can make an activity contested. The survey attracted 333 respondents, who provided over 5,100 ratings. The analysis demonstrates the plurality of understandings of what citizen science is and calls for an open view of which activities are included in the field.
Evaluation is a core management instrument and part of many scientific projects. It can be approached from several angles, with distinct objectives in mind. In any project, we can evaluate the project process and the scientific outcomes, but with citizen science this does not go far enough: we must additionally evaluate the effects of projects on the participants themselves and on society at large. While citizen science itself is still evolving, we should aim to capture and understand the multiple traces it leaves in its direct and broader environment. Considering that projects often have limited resources for evaluation, we need to bundle existing knowledge and experience on how best to assess citizen science initiatives and continually learn from this assessment. What should we concentrate on when we evaluate citizen science projects and programmes? What are current practices, and what are we lacking? Are we really targeting the most relevant aspects of citizen science with our current evaluation approaches?
In recent years, citizen science has gained popularity not only in the scientific community but also with the general public. Its potential for fostering an open and participatory approach to science, narrowing the distance between science and society, and contributing to the wider goal of an inclusive society is being explored by scientists, science communicators, educators, policy makers and related stakeholders. The public's participation in citizen science projects is still often reduced to data gathering and data manipulation, such as the classification of data. However, the citizen science landscape is much broader and more diverse, not least because of the participation opportunities offered by the latest ICT. New forms of collaboration and grassroots initiatives are currently emerging. In the open consultation process that led to the "White Paper on Citizen Science for Europe", support for a wide range of project types and innovative forms of participation in science was requested. In this paper we argue for mechanisms that encourage a variety of approaches, promote emerging and creative concepts, and widen the perspectives for social innovation.
Air pollution is a serious problem that is causing increasing concern among European citizens. It is responsible for more than 400,000 premature deaths in Europe each year and considerably damages human health, agriculture, and the natural environment. Despite these facts, the readiness and power of citizens to take action remain limited. To address this challenge, the citizen science project CAPTOR was launched in 2016. Using low-cost measurement devices, citizens in three European testbeds supported the monitoring of tropospheric ozone. This paper presents the results of 53 interviews with involved residents and shows that the active involvement of individuals in a complex process such as measuring tropospheric ozone can have important effects on their knowledge and attitudes. To expand the benefits of low-cost air quality sensors from the individual to the regional level, certain preconditions are key: strong support in assuring data quality, visibility of the collected data in online and offline media, broad dissemination of results, and intensified communication with political decision-makers.
In today's knowledge-based society we are experiencing a rise in citizen science activities. Citizen science goals include enhancing scientific knowledge generation, contributing to societally relevant questions, fostering scientific literacy in society and transforming science communication. These aims, however, are rarely evaluated, and project managers as well as prospective funders are often at a loss when it comes to assessing and reviewing the quality and impact of citizen science activities. To ensure and improve the quality of citizen science outcomes, evaluation methods are required for planning, self-evaluation and training development, as well as for informing funding reviews and impact assessments. Here, based on an in-depth review of the characteristics and diversity of citizen science activities and current evaluation practices, we develop an open framework for evaluating diverse citizen science activities, ranging from projects initiated by grassroots initiatives to those led by academic scientists. The framework incorporates the social, scientific and socioecological/economic perspectives of citizen science and thus offers a comprehensive collection of indicators at a glance. Indicators at the process and impact levels can be selected and prioritized from all three perspectives, according to the specific contexts and targets. The framework guides and fosters the critical assessment and enhancement of citizen science projects against these goals, both for external funding reviews and for internal project development.