2016
DOI: 10.1186/s12302-016-0073-x
Criteria for Reporting and Evaluating ecotoxicity Data (CRED): comparison and perception of the Klimisch and CRED methods for evaluating reliability and relevance of ecotoxicity studies

Abstract: Background: The regulatory evaluation of ecotoxicity studies for environmental risk and/or hazard assessment of chemicals is often performed using the method established by Klimisch and colleagues in 1997. The method was, at that time, an important step toward improved evaluation of study reliability, but it has lately been criticized for its lack of detail and guidance, and for not ensuring sufficient consistency among risk assessors. Results: A new evaluation method was thus developed: Criteria for Reporting and Eva…



Cited by 62 publications (81 citation statements)
References 28 publications (25 reference statements)
“…Quality assurance criteria are common in analytical chemistry or ecotoxicology [113,114] but are less self-evident for monitoring of plastic debris which is a relatively young field of science [115].…”
Section: Data and Knowledge Gaps with Respect to Further Model Development
Confidence: 99%
“…It is clear that experts' evaluations of study reliability and relevance may vary, even when predefined evaluation criteria are used (Beronius & Ågerstrand, ). However, studies also show that the application of evaluation tools can contribute to reducing variability in evaluations (Kase, Korkaric, Werner, & Agerstrand, ). One reason for the variability observed in the expert assessment of the initial SciRAP method, and the apparent lack of correlation between evaluations and categorization into different reliability categories, may be that no specific guidance was provided to participants.…”
Section: Discussion
Confidence: 99%
“…Several methods have been tested on case studies, either internally (Durda and Preziosi ; Breton et al ; Van Der Kraak et al ; Beasley et al ) or using an external “round‐robin” or ring test assessment. For example, the method presented by Hobbs et al () was tested using 2 studies and 23 participants, whereas a round‐robin test of the CRED method by Moermond et al () used 8 studies and 75 participants (Kase et al, ). A method that has been validated and tested for clarity of the guidance using ring tests with multiple users will likely provide more consistent results.…”
Section: Evaluation Methods
Confidence: 99%