2017
DOI: 10.1007/s11569-017-0307-4
Making Nanomaterials Safer by Design?

Cited by 46 publications (31 citation statements) · References 10 publications
“…In the context of regulation, it is difficult to implement this approach as current testing paradigms identify risk late into the production process, slowing down innovation and increasing costs. To address this, one of the proposed approaches, safe(r)-by-design (SbD), aims to incorporate hazard assessment into the design process of novel MNMs (Schwarz-Plaschg et al., 2017). The application of SbD to MNMs was developed in the European FP7 projects NANoREG and Prosafe and is being expanded on in the European Horizon 2020 project NanoReg2.…”
Section: Introduction
confidence: 99%
“…If two regression models have similar RMSE, F-values (the ratio between explained and unexplained variance) and P-values (the probability of finding the observed or more extreme results) can help determine the model of choice [22,129]. Robustness metrics such as the squared cross-validated correlation coefficient (Q²), the leave-one-out cross-validation coefficient (Q²_LOO), and the leave-many-out cross-validation coefficients (Q²_LMO-10% and Q²_LMO-25%) are popular robustness indicators [46,47]. To avoid the possibility of overestimation by using only leave-one-out cross-validation, a bootstrap procedure (Q²_Boot) is suggested [23] and is mainly suitable for a limited number of training cases [50].…”
Section: Robustness
confidence: 99%
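The Q²_LOO metric quoted in the excerpt above can be computed directly as Q² = 1 − PRESS/TSS, where PRESS sums the squared leave-one-out prediction errors. The following is a minimal sketch only, assuming scikit-learn and a synthetic descriptor matrix; the data, model, and variable names are illustrative placeholders, not taken from the cited studies.

```python
# Minimal sketch: leave-one-out cross-validated Q^2 (Q2_LOO) for a linear QSAR-style model.
# X (descriptors), y (activities), and the model are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))                       # 30 "compounds", 4 descriptors (illustrative)
y = X @ np.array([1.5, -0.8, 0.3, 0.0]) + rng.normal(scale=0.2, size=30)

model = LinearRegression()
y_loo = cross_val_predict(model, X, y, cv=LeaveOneOut())  # leave-one-out predictions

press = np.sum((y - y_loo) ** 2)    # predictive residual sum of squares (PRESS)
tss = np.sum((y - y.mean()) ** 2)   # total sum of squares
q2_loo = 1.0 - press / tss          # Q^2_LOO = 1 - PRESS/TSS
print(f"Q2_LOO = {q2_loo:.3f}")
```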
“…The leave-many-out approach removes a different number of values from the data set (10%, 20%, 25%, or 50%) depending on the size of the dataset, even though there is no rule of thumb as to the percentages one should apply for cross-validation or data splitting. Besides Q²_LOO, the cross-validated correlation coefficient (R²_CV) can be calculated [38,94]. The minimum criteria for a successful QSAR model are R² ≥ 0.6 and Q²_LMO ≥ 0.5 [84], whereas the difference between training- and test-set R² values should not exceed 0.3 [56].…”
Section: Robustness
confidence: 99%
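Building on the excerpt above, the sketch below illustrates leave-many-out cross-validation at 10% and 25% left-out fractions and checks the result against the quoted acceptance threshold Q²_LMO ≥ 0.5. The use of scikit-learn's ShuffleSplit, the synthetic data, and the q2_lmo helper are assumptions for illustration, not a prescribed procedure from the cited works.

```python
# Minimal sketch of leave-many-out (LMO) cross-validation at different left-out
# fractions, compared with the quoted threshold Q^2_LMO >= 0.5.
# X, y, and q2_lmo are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import ShuffleSplit

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 4))                       # 40 "compounds", 4 descriptors (illustrative)
y = X @ np.array([1.2, -0.5, 0.4, 0.0]) + rng.normal(scale=0.2, size=40)

def q2_lmo(X, y, leave_out_fraction, n_repeats=100, seed=0):
    """Average Q^2 over repeated random leave-many-out splits."""
    splitter = ShuffleSplit(n_splits=n_repeats, test_size=leave_out_fraction,
                            random_state=seed)
    scores = []
    for train_idx, test_idx in splitter.split(X):
        model = LinearRegression().fit(X[train_idx], y[train_idx])
        y_pred = model.predict(X[test_idx])
        press = np.sum((y[test_idx] - y_pred) ** 2)          # error on left-out compounds
        tss = np.sum((y[test_idx] - y[train_idx].mean()) ** 2)
        scores.append(1.0 - press / tss)
    return float(np.mean(scores))

for frac in (0.10, 0.25):
    q2 = q2_lmo(X, y, frac)
    verdict = "meets" if q2 >= 0.5 else "fails"
    print(f"Q2_LMO-{int(frac * 100)}% = {q2:.3f} ({verdict} the 0.5 threshold)")
```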
“…They aim at reducing adverse effects on human health and the environment by altering nanoproduct design (Soeteman-Hernandez et al., 2019) and by ensuring safety along its lifecycle (Bottero et al., 2017; Kraegeloh et al., 2018). The SbD concept is therefore different from conventional risk assessment approaches, which only consider safety when the product is already fully developed (Schwarz-Plaschg et al., 2017).…”
Section: Introduction
confidence: 99%