Evaluation is a core management instrument and part of many scientific projects. It can be approached from several different angles, with distinct objectives in mind. In any project, we can evaluate the project process and the scientific outcomes, but with citizen science this does not go far enough: we additionally need to evaluate the effects of projects on the participants themselves and on society at large. While citizen science itself is still evolving, we should aim to capture and understand the multiple traces it leaves in its immediate and broader environment. Considering that projects often have limited resources for evaluation, we need to bundle existing knowledge and experience on how best to assess citizen science initiatives and continually learn from this assessment. What should we concentrate on when we evaluate citizen science projects and programmes? What are current practices, and what are we lacking? Are we really targeting the most relevant aspects of citizen science with our current evaluation approaches?
Citizen science has expanded rapidly over the past decades. Yet defining citizen science and its boundaries has remained a challenge, and this is reflected in the literature—for example, in the proliferation of typologies and definitions. There is a need to identify areas of agreement and disagreement within the community of citizen science practitioners on what should be considered a citizen science activity. This paper describes the development and results of a survey that examined this issue through the use of vignettes—short case descriptions of an activity—asking respondents to rate each activity on a scale from ‘not citizen science’ (0%) to ‘citizen science’ (100%). The survey included 50 vignettes, of which five were developed as clear cases of non–citizen science activities, five as widely accepted citizen science activities, and the remainder addressing 10 factors and 61 sub-factors that can lead to controversy about an activity. The survey attracted 333 respondents, who provided over 5,100 ratings. The analysis demonstrates the plurality of understanding of what citizen science is and calls for an open understanding of which activities are included in the field.
Air pollution is a serious problem that is causing increasing concern among European citizens. It is responsible for more than 400,000 premature deaths in Europe each year and considerably damages human health, agriculture, and the natural environment. Despite these facts, the readiness and power of citizens to take action are limited. To address this challenge, the citizen science project CAPTOR was launched in 2016. Using low-cost measurement devices, citizens in three European testbeds supported the monitoring of tropospheric ozone. This paper presents the results from 53 interviews with involved residents and shows that the active involvement of individuals in a complex process such as measuring tropospheric ozone can have important impacts on their knowledge and attitudes. To expand the benefits of low-cost air quality sensors from the individual to the regional level, certain preconditions are key: strong support in assuring data quality, visibility of the collected data in online and offline media, broad dissemination of results, and intensified communication with political decision-makers.
Research has shown that providing participants with high-quality learning material is not sufficient to help them profit most from online education. The level of interaction among participants is another key determinant of learning outcomes. However, merely proposing interaction does not automatically lead to fruitful discussion and collaboration. Specifically, social presence and facilitation activities add value to online discussions. In Murphy's collaboration framework, social presence represents the basis of successful online collaboration, from which more reflective discussions and co-construction can evolve. In this paper, an adjusted version of this framework was applied in a workplace learning context. The content analysis of 1,170 comments in an online course for careers practitioners of a public employment service showed that the extended framework generated deeper insights into the dynamics of online discussions. The results show that involvement in collaborative learning at the workplace was supported by a high social presence and influenced by course topic and tasks. Facilitation played an important role in creating a sympathetic sense of community and stimulating co-creation processes.