2015 Fourth International Conference on Information Science and Industrial Applications (ISI)
DOI: 10.1109/isi.2015.9

A New Multi-layer Perceptrons Trainer Based on Ant Lion Optimization Algorithm

Cited by 56 publications (23 citation statements)
References 13 publications
“…In this section, four experiments were conducted. The first experiment (in Section 4.1) has four goals: (1) testing the proposed NRCS model for predicting the toxicity of biotransformed drugs; (2) testing the power of this system to deal with uncertain data without using any feature selection or pre-processing method; (3) comparing NRCS with conventional classifiers such as Multi-Layer Perceptron (MLP) (Yamany et al., 2015), k-Nearest Neighbors (k-NN) (Tharwat, Mahdi, Elhoseny, & Hassanien, 2018), and Linear Discriminant Analysis (LDA) (Tharwat, 2016); and (4) comparing the proposed models (NRCS and GNRCS). Different runs were conducted to find the optimal or near-optimal parameters for the k-NN and MLP classifiers.…”
Section: Results
confidence: 99%
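The comparison described in this excerpt, a proposed model benchmarked against MLP, k-NN, and LDA with repeated runs to tune the k-NN and MLP parameters, follows a standard pattern. Below is a minimal Python sketch of that pattern using scikit-learn; the dataset, parameter grids, and cross-validation setup are illustrative assumptions, not details taken from the cited study.

```python
# Hedged sketch of the baseline comparison the excerpt describes: MLP, k-NN,
# and LDA evaluated on the same data, with a small grid search standing in
# for the "different runs" used to tune k-NN and MLP. All values are assumed.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

models = {
    "MLP": GridSearchCV(MLPClassifier(max_iter=2000, random_state=0),
                        {"hidden_layer_sizes": [(5,), (10,), (20,)]}),
    "k-NN": GridSearchCV(KNeighborsClassifier(),
                         {"n_neighbors": [1, 3, 5, 7]}),
    "LDA": LinearDiscriminantAnalysis(),
}

for name, model in models.items():
    # Nested cross-validation: the grid search tunes parameters inside
    # each outer fold, mirroring the repeated tuning runs in the excerpt.
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```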
“…To determine the weights and biases of Multi-Layer Perceptrons (MLPs), Yamany et al. [171] used ALO to train an MLP, aiming for the highest classification rate and the lowest error value. Heidari et al. [172] introduced a hybrid technique called ALOMLP, which combines ALO-based training with Multi-Layer Perceptrons (MLPs).…”
Section: Neural Network
confidence: 99%
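Both cited approaches cast MLP training as a continuous optimization problem: all of the network's weights and biases are flattened into one real-valued vector, and the optimizer searches for the vector that minimizes classification error. The Python sketch below shows that formulation; the `train` loop is a deliberately simplified stand-in for ALO's roulette-wheel selection and random walks (only the contracting search radius around an elite solution is kept), and the layer sizes, population size, and bounds are assumptions for illustration.

```python
import numpy as np

def n_params(n_in, n_hidden, n_out):
    # Weights and biases of a single-hidden-layer MLP, flattened into one vector.
    return n_in * n_hidden + n_hidden + n_hidden * n_out + n_out

def unpack(theta, n_in, n_hidden, n_out):
    # Slice the flat parameter vector back into weight matrices and bias vectors.
    i = n_in * n_hidden
    W1 = theta[:i].reshape(n_in, n_hidden)
    b1 = theta[i:i + n_hidden]
    j = i + n_hidden
    W2 = theta[j:j + n_hidden * n_out].reshape(n_hidden, n_out)
    b2 = theta[j + n_hidden * n_out:]
    return W1, b1, W2, b2

def forward(theta, X, dims):
    W1, b1, W2, b2 = unpack(theta, *dims)
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))      # sigmoid hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output layer

def error_rate(theta, X, y, dims):
    # Fitness the metaheuristic minimizes: fraction of misclassified samples.
    return np.mean(forward(theta, X, dims).argmax(axis=1) != y)

def train(X, y, dims, pop=30, iters=200, bound=5.0, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.uniform(-bound, bound, (pop, n_params(*dims)))  # candidate "ants"
    fit = np.array([error_rate(p, X, y, dims) for p in P])
    elite, best = P[fit.argmin()].copy(), fit.min()         # elite "antlion"
    for t in range(1, iters + 1):
        # Simplified stand-in for ALO: resample candidates around the elite
        # within a search radius that shrinks as the iterations progress.
        radius = bound * (1 - t / iters)
        P = elite + rng.uniform(-radius, radius, P.shape)
        fit = np.array([error_rate(p, X, y, dims) for p in P])
        if fit.min() < best:
            elite, best = P[fit.argmin()].copy(), fit.min()
    return elite, best

# Toy usage on synthetic two-class data (purely illustrative).
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
dims = (4, 6, 2)                                  # assumed layer sizes
theta, err = train(X, y, dims)
print(f"training error rate: {err:.3f}")
```

Because the fitness function only needs the error value, this formulation requires no gradient information, which is what lets a derivative-free optimizer such as ALO act as the trainer.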
“…The neuron's input is the weighted sum of its inputs, as given in equations (10) and (11). Finally, the output is classified according to the calculated hidden and output node values, as illustrated in equations (12) and (13) [38]. Here, the ALO algorithm is used as a trainer for the ANN's parameters.…”
Section: Training Process of ANN with ALO, Step 1: Read in Input and Output
confidence: 99%
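Equations (10)–(13) are not reproduced in the excerpt. Assuming they are the standard MLP forward pass (a weighted sum plus bias at each neuron, a sigmoid squashing function, and class assignment from the output nodes) the computation would look like the following sketch; all variable names are illustrative.

```python
import numpy as np

def sigmoid(s):
    # A common MLP squashing function: f(s) = 1 / (1 + e^(-s)).
    return 1.0 / (1.0 + np.exp(-s))

def classify(x, W1, b1, W2, b2):
    # Each neuron's input is the weighted sum of its inputs plus a bias,
    # matching the excerpt's description; the activation then follows.
    s_hidden = W1 @ x + b1
    h = sigmoid(s_hidden)
    s_out = W2 @ h + b2
    o = sigmoid(s_out)
    # The sample is assigned to the class of the strongest output node.
    return int(np.argmax(o))
```

Under this formulation, `W1`, `b1`, `W2`, and `b2` are exactly the entries of the flat parameter vector that the ALO trainer searches over, as in the earlier sketch.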