2022
DOI: 10.1016/j.comtox.2022.100244
Ensemble super learner based genotoxicity prediction of multi-walled carbon nanotubes

Cited by 4 publications (5 citation statements)
References 17 publications
“…The aggregation of learners evolved from the stacked generalization model [16]. Further experimentation demonstrates the capability of stacking predictors for meta-learning [6], [14], [15], [17]- [24], with variations extending eSL functions for a specific set of tasks. eSL solves some of the bottlenecks common with individual models, such as an expectation space that is overly large for the quantity of available training data, an analytical challenge which guarantees a global optimum, and an individual model that lacks a well-defined approximation for model distribution outcomes.…”
Section: E(y|X)
Confidence: 99%
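The stacked generalization ("super learner") scheme the statement above describes can be sketched with scikit-learn's `StackingClassifier`: cross-validated predictions from several base learners become the inputs of a meta-learner, which is how stacking combines models whose individual hypothesis spaces may be inadequate on their own. The base learners, meta-learner, and synthetic data below are illustrative assumptions, not the configuration used in the cited paper.

```python
# Minimal stacked-generalization sketch, assuming scikit-learn is available.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic binary-classification data standing in for a genotoxicity endpoint.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Level-0 base learners; their out-of-fold predictions are stacked
# and fed to the level-1 meta-learner.
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]
ensemble = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),  # level-1 meta-learner
    cv=5,  # cross-validation used to generate the meta-learner's inputs
)
ensemble.fit(X_tr, y_tr)
print(round(ensemble.score(X_te, y_te), 2))
```

In the full super learner formulation, the meta-learner's weights are themselves chosen by cross-validated risk minimization; `StackingClassifier` approximates this with a single level-1 estimator fit on out-of-fold predictions.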
“…eSL solves some of the bottlenecks common with individual models, such as an expectation space that is overly large for the quantity of available training data, an analytical challenge which guarantees a global optimum, and an individual model that lacks a well-defined approximation for model distribution outcomes. This study focuses on stacked eSL for load-shedding task [6], [17]. Details of the schema are established in section II.…”
Section: E(y|X)
Confidence: 99%
“…As shown in Table 15 (performance metrics of the efficient DenseNet model), the proposed efficient DenseNet model performs well in unique feature extraction for accurate classification of the severity levels of DR, and it has enhanced the efficacy of DR screening. Moreover, the computational complexity [24]-[29] has been reduced compared with the baseline models. Metrics such as precision, recall, and F1 score are used to monitor the grading of DR by the efficient DenseNet model, as depicted in Table 16, along with the trainable parameters in Table 17.…”
Confidence: 99%