2018
DOI: 10.1016/j.neucom.2017.04.081

B2FSE framework for high dimensional imbalanced data: A case study for drug toxicity prediction

Cited by 23 publications (13 citation statements)
References 36 publications
“…Note that the dataset is common in both the criteria, giving us a total of 11 datasets. We choose these two categories because they are of special interest in research related to imbalanced datasets and have received extensive attention in this research area (Anand et al. 2010; Hooda et al. 2018; Jing et al. 2019; Blagus and Lusa 2013).…”
Section: Datasets Used For Validation
confidence: 99%
“…The main function of class balancing is to balance the class symmetry of instances. There are several conventional approaches to handle the class imbalance problem, which are undersampling, oversampling, and the synthetic minority oversampling technique (SMOTE) [17,18]. Here, the class imbalance problem is resolved by the ensemble learning method, as ensemble learning is more effective than data sampling techniques to enhance the classification performance of imbalanced data.…” [A table of molecular descriptor names (AATSC1i, nwHBa, ETA_Eta_R, GGI4, MATS1v, …) spilled into this excerpt and is omitted.]
Section: S No Name Description
confidence: 99%
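The SMOTE technique named in the excerpt above synthesizes new minority-class samples by interpolating between a minority point and one of its nearest minority neighbours. A minimal NumPy sketch of that idea (not the B2FSE authors' implementation; function and parameter names here are illustrative):

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Minimal SMOTE-style oversampling sketch: create n_new synthetic
    minority samples by interpolating each randomly chosen minority
    point toward one of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # a point is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]           # indices of k nearest neighbours
    base = rng.integers(0, n, size=n_new)       # which minority point to start from
    neigh = nn[base, rng.integers(0, k, size=n_new)]
    gap = rng.random((n_new, 1))                # interpolation factor in [0, 1)
    return X_min[base] + gap * (X_min[neigh] - X_min[base])

# toy minority class: five points in 2-D
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
X_new = smote_oversample(X_min, n_new=10, rng=0)
print(X_new.shape)  # (10, 2)
```

Because each synthetic point lies on a segment between two real minority points, oversampling stays inside the minority region rather than duplicating records, which is what distinguishes SMOTE from plain random oversampling.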
“…If n is the number of records and d is the depth of the tree, then the time complexity of the random forest algorithm is O(ntree × mtry × d × n) and the space complexity of the random forest algorithm is O(n × d). Therefore, we can say that the random forest model depends on the depth and size of the decision tree [17].…”
Section: Random Forest Model
confidence: 99%
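The quoted bounds can be turned into a back-of-envelope calculator. The names `ntree` and `mtry` come from the excerpt itself (they follow R's randomForest convention: number of trees, and features tried per split); the log₂ depth estimate is an added assumption for roughly balanced trees:

```python
import math

def rf_train_cost(ntree, mtry, depth, n):
    """Rough operation count under the quoted O(ntree * mtry * d * n)
    training-time bound for a random forest."""
    return ntree * mtry * depth * n

def rf_space_cost(n, depth):
    """Rough memory bound under the quoted O(n * d) space complexity."""
    return n * depth

# depth of a roughly balanced tree grows like log2(n) -- an assumption,
# not part of the quoted analysis
n = 100_000
depth = math.ceil(math.log2(n))  # 17
print(rf_train_cost(ntree=500, mtry=28, depth=depth, n=n))  # 23_800_000_000
print(rf_space_cost(n, depth))                              # 1_700_000
```

One practical reading: doubling `ntree` and doubling `n` each double the nominal training cost, whereas depth only grows logarithmically with `n` for balanced trees, so dataset size and forest size dominate in practice.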
“…These metrics were evaluated on different classifiers like Bayes Net (BN), Naive Bayes (NB), Logistic Regression (LR), SVM/SMO, Random Forest (RF), AdaBoost, Adabag, and J48 [2]. Across all these classifiers, it can be observed that Random Forest gives the highest accuracy and AdaBoost the lowest, at 71%.…”
Section: Performance Evaluation
confidence: 99%
“…Artificial intelligence and novel machine learning techniques have helped many researchers find cost-effective solutions in diverse domains like drug discovery, audits, etc. [2][3][4]. Using artificial intelligence in drug discovery is rapidly expanding the drug market.…”
Section: Introduction
confidence: 99%