2017
DOI: 10.1016/j.toxlet.2017.07.175
DeepTox: Toxicity prediction using deep learning

Cited by 228 publications (279 citation statements)
References 0 publications
“…To improve the generalization of the model and avoid overfitting, we applied an L2 regularization (regularization parameter = 1 × 10⁻³) that penalized high values in the network's weights and facilitated diffuse weight vectors as solutions. To mitigate the network's internal covariate shift, the h₁, z, and h₂ layers were formed using scaled exponential linear units (SELUs; Klambauer, Unterthiner, Mayr, & Hochreiter, 2017). The activation function of these units allows for faster and more robust training, that is, fewer training epochs to reach convergence, and a strong regularization scheme (Klambauer et al., 2017).…”
Section: Methods
confidence: 99%
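The setup described in this excerpt, SELU activations in the h₁, z, and h₂ layers plus an L2 weight penalty of 1 × 10⁻³, can be sketched in a few lines. The layer widths, input size, output head, and the use of tf.keras below are illustrative assumptions, not details taken from the cited work.

```python
import tensorflow as tf

# Minimal sketch of the described setup: SELU-activated dense layers with
# an L2 weight penalty of 1e-3. Layer sizes and the sigmoid output head
# are placeholders, not values from the cited paper.
l2 = tf.keras.regularizers.l2(1e-3)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128,)),                                          # hypothetical input width
    tf.keras.layers.Dense(64, activation="selu", kernel_regularizer=l2),   # h1
    tf.keras.layers.Dense(32, activation="selu", kernel_regularizer=l2),   # z
    tf.keras.layers.Dense(64, activation="selu", kernel_regularizer=l2),   # h2
    tf.keras.layers.Dense(1, activation="sigmoid"),                        # task-dependent output
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```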
“…To mitigate the network's internal covariate shift, the h₁, z, and h₂ layers were formed using scaled exponential linear units (SELUs; Klambauer, Unterthiner, Mayr, & Hochreiter, 2017). The activation function of these units allows for faster and more robust training, that is, fewer training epochs to reach convergence, and a strong regularization scheme (Klambauer et al., 2017). We initialized the SELU units using the appropriate initializer (Klambauer et al., 2017).…”
Section: Methods
confidence: 99%
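For the "appropriate initializer" mentioned here, Klambauer et al. recommend drawing weights with variance 1/fan_in (LeCun normal) so that the self-normalizing property of SELU holds. A one-layer sketch, again assuming tf.keras and an arbitrary layer width:

```python
import tensorflow as tf

# Sketch of a SELU layer initialized as recommended for self-normalizing
# networks: LeCun normal, i.e. zero-mean Gaussian weights with variance
# 1/fan_in. The width of 64 units is an arbitrary placeholder.
selu_layer = tf.keras.layers.Dense(
    64,
    activation="selu",
    kernel_initializer="lecun_normal",
)
```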
“…The activation operations should provide different types of nonlinearities in the neural networks to solve Multiclass Classification problems. In general, there are two types of activation functions, including smooth nonlinear functions, such as Sigmoid, Tanh, Exponential Linear Units (ELU) [13], Scaled Exponential Linear Units (SELU) [14], etc., and non-smooth continuous functions, such as Rectified Linear Unit (ReLU) [15], Concatenated Rectified Linear Units (CReLU) [16], etc. We find that for classifying synthesis flows, the activation functions with nonlinearities perform better, such as SELU and Tanh.…”
Section: CNN Architecture and Training
confidence: 99%
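The two activation families named in the excerpt can be compared side by side in a short snippet; the input values below are arbitrary, and the use of tf.nn is purely illustrative.

```python
import tensorflow as tf

# Arbitrary sample inputs to compare the two activation families.
x = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])

# Smooth nonlinear activations cited in the excerpt.
smooth = {
    "sigmoid": tf.nn.sigmoid(x),
    "tanh": tf.nn.tanh(x),
    "elu": tf.nn.elu(x),
    "selu": tf.nn.selu(x),
}

# Piecewise-linear (non-smooth) activations cited in the excerpt.
piecewise = {
    "relu": tf.nn.relu(x),
    "crelu": tf.nn.crelu(x),  # concatenates relu(x) and relu(-x)
}

for name, value in {**smooth, **piecewise}.items():
    print(name, value.numpy())
```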
“…Figure 7 includes the comparison of eight different activation functions, including ReLU, ReLU6, ELU [13], SELU [14], Softplus, Softsign, Sigmoid and Tanh. We can see that the ELU, SELU, Softsign and Tanh functions outperform the others, and SELU offers the best accuracy for generating delay-driven flows for the 128-bit AES core.…”
Section: Evaluation of Activation Functions
confidence: 99%
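A sweep like the one summarized in Figure 7 of that work could be scripted roughly as below; the toy model, random data, and training budget are placeholders rather than the cited 128-bit AES flow-classification setup.

```python
import tensorflow as tf

# Hypothetical sweep over the eight activations compared in the excerpt.
# Model, data, and epoch count are placeholders for illustration only.
activations = ["relu", tf.nn.relu6, "elu", "selu",
               "softplus", "softsign", "sigmoid", "tanh"]

x = tf.random.normal((256, 32))
y = tf.random.uniform((256,), maxval=4, dtype=tf.int32)  # 4 dummy classes

for act in activations:
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(32,)),
        tf.keras.layers.Dense(64, activation=act),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(x, y, epochs=3, validation_split=0.2, verbose=0)
    name = act if isinstance(act, str) else act.__name__
    print(name, history.history["val_accuracy"][-1])
```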