2019
DOI: 10.1007/978-3-030-34770-3_5
An Innovative Online Tool to Self-evaluate and Compare Participatory Research Projects Labelled as Science Shops or Citizen Science

Cited by 3 publications (9 citation statements: 0 supporting, 9 mentioning, 0 contrasting)
References 16 publications
“…Similar to current efforts to build in interoperability across data systems and platforms of citizen science projects (Bowser 2017; Masó and Fritz 2019; Masó and Wehn 2020), cross-comparison of impacts and data impacts would be a beneficial development for citizen science. A comprehensive CSIAF can enable comparability of impact assessment results that are based on different methods and information sources using consistent overarching categories of definitions (Phillips et al 2012; Reed et al 2018; Gresle et al 2019). This could be done, for example, by capturing impact assessment results from different projects via a single online tool (e.g., questionnaire) (Gresle et al 2019) based on the CSIAF and, during the visualisation of individual and compared results, by distinguishing validity levels (e.g., via a color scheme) according to the range of underlying data sources.…”
Section: Principle 5: Fostering Comparison Of Impact Assessment Results Across Citizen Science Projects (mentioning)
confidence: 99%
“…A comprehensive CSIAF can enable comparability of impact assessment results that are based on different methods and information sources using consistent overarching categories of definitions (Phillips et al 2012; Reed et al 2018; Gresle et al 2019). This could be done, for example, by capturing impact assessment results from different projects via a single online tool (e.g., questionnaire) (Gresle et al 2019) based on the CSIAF and, during the visualisation of individual and compared results, by distinguishing validity levels (e.g., via a color scheme) according to the range of underlying data sources. This can serve to generate both project-specific as well as aggregated results.…”
Section: Principle 5: Fostering Comparison Of Impact Assessment Results Across Citizen Science Projects (mentioning)
confidence: 99%
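
The capture-and-compare mechanism described in the statement above (one shared questionnaire schema, validity levels colour-coded by the breadth of underlying data sources, and both project-specific and aggregated views) can be illustrated with a minimal Python sketch. Every name here (ImpactResult, validity_color, aggregate) and the three-level colour mapping are illustrative assumptions, not the actual tool described by Gresle et al (2019).

from dataclasses import dataclass

@dataclass
class ImpactResult:
    project: str    # project identifier
    dimension: str  # shared impact category (CSIAF-style)
    score: float    # normalised self-evaluation score in [0, 1]
    sources: int    # number of independent data sources behind the score

def validity_color(sources: int) -> str:
    # Hypothetical mapping: more independent sources -> higher validity.
    if sources >= 3:
        return "green"   # triangulated across several sources
    if sources == 2:
        return "amber"
    return "red"         # single-source, lowest validity level

def aggregate(results: list[ImpactResult]) -> dict[str, float]:
    # Mean score per shared dimension, enabling cross-project comparison.
    by_dim: dict[str, list[float]] = {}
    for r in results:
        by_dim.setdefault(r.dimension, []).append(r.score)
    return {dim: sum(s) / len(s) for dim, s in by_dim.items()}

results = [
    ImpactResult("Project A", "scientific_impact", 0.8, 3),
    ImpactResult("Project B", "scientific_impact", 0.6, 1),
    ImpactResult("Project A", "societal_impact", 0.7, 2),
]
for r in results:  # project-specific view, with colour-coded validity
    print(r.project, r.dimension, r.score, validity_color(r.sources))
print(aggregate(results))  # aggregated view across projects

Keeping every project's answers in one schema is what makes the aggregated view comparable; the colour only qualifies how much trust each individual score deserves.
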
“…Prior work has been undertaken defining criteria, indicators, and methods for impact evaluation in CS and SS (Schlierf and Meyer 2013; Kieslinger et al 2018; Phillips et al 2018; Schaefer et al 2021). Yet, online evaluation tools that are capable of demonstrating the value of CS and SS remain limited and untested as regards their content validity (Gresle et al 2019). An additional drawback of these evaluations is that they often rely solely on scientific investigators' feedback and fail to include that of the other stakeholders involved in the project, which ultimately leads to a bias in the evaluation studies (Gresle et al 2019).…”
Section: Introduction (mentioning)
confidence: 99%
“…Yet, online evaluation tools that are capable of demonstrating the value of CS and SS remain limited and untested as regards their content validity (Gresle et al 2019). An additional drawback of these evaluations is that they often rely solely on scientific investigators' feedback and fail to include that of the other stakeholders involved in the project, which ultimately leads to a bias in the evaluation studies (Gresle et al 2019). It is clear that evaluations of participatory research need to be framed from a multidimensional and multi-stage perspective where the process itself is worth evaluating (Schaefer et al 2021).…”
Section: Introduction (mentioning)
confidence: 99%