2018
DOI: 10.30880/jst.2018.10.01.004

Performance Evaluation of Feed Forward Neural Network for Image Classification

Abstract: Artificial Neural Networks (ANNs) are among the most comprehensive tools for classification. In this study, the performance of a Feed Forward Neural Network (FFNN) with the back-propagation algorithm is used to find the appropriate activation function for the hidden layer, using MATLAB 2013a. Random data were generated and fed to the FFNN to test the network's classification performance. From the MSE values, the response graph, and the regression coefficients, it is clear that the Tan sigmoid activation function…
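To make the abstract's setup concrete, here is a minimal Python sketch of the same kind of experiment: a one-hidden-layer network trained by back-propagation on random data, with the hidden-layer activation swapped between tan-sigmoid and log-sigmoid and the resulting MSE compared. This is an illustrative analogue, not the paper's code (the study used MATLAB 2013a); the data, network size, learning rate, and epoch count are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))         # random input data
y = np.sin(X.sum(axis=1, keepdims=True))  # arbitrary target, for illustration only

def train_mse(activation, epochs=500, lr=0.05, hidden=8):
    """One-hidden-layer network trained with plain gradient descent on MSE."""
    W1 = rng.standard_normal((4, hidden)) * 0.1
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, 1)) * 0.1
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = activation(X @ W1 + b1)       # hidden-layer activations
        yhat = h @ W2 + b2                # linear output layer
        err = yhat - y
        dyhat = 2 * err / len(X)          # gradient of MSE w.r.t. the output
        dW2, db2 = h.T @ dyhat, dyhat.sum(0)
        dh = dyhat @ W2.T
        # tanh'(z) = 1 - tanh(z)^2; logistic'(z) = h * (1 - h)
        dz = dh * (1 - h**2) if activation is np.tanh else dh * h * (1 - h)
        dW1, db1 = X.T @ dz, dz.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
    return float(np.mean(err**2))

logsig = lambda z: 1 / (1 + np.exp(-z))   # logistic (log-sigmoid) activation
print("tansig MSE:", train_mse(np.tanh))  # tan-sigmoid hidden layer
print("logsig MSE:", train_mse(logsig))   # log-sigmoid hidden layer
```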

Cited by 6 publications (5 citation statements)
References 14 publications
“…MSE is the sum of squared distances between the observed and predicted values, which is the most commonly used loss function (27). Ullah et al. (28) used the tan-sigmoid as the activation function. They also claimed that MSE might be the best parameter to find the best activation function (28).…”
Section: Methods (mentioning)
confidence: 99%
“…It is one of the most widespread networks. It is called feed-forward because the algorithm proceeds in one direction only (from input to output), so that outputs are not fed back to update earlier units, and the number of layers is not restricted [11]. Assume the input variables are (X1, X2, X3, X4). Then c_i = f_i(w_i * x_i + b_i), i = 1, 2, 3, …  (1)…”
Section: Feed Forward Neural Network (FFNN) (mentioning)
confidence: 99%
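A minimal numeric illustration of Equation (1) as quoted above, c_i = f_i(w_i * x_i + b_i), taking f_i to be the tan-sigmoid discussed in the cited paper; all values below are hypothetical:

```python
import numpy as np

x = np.array([0.5, -1.0, 2.0, 0.1])   # inputs X1..X4
w = np.array([0.2, 0.4, -0.3, 0.8])   # one weight per input
b = np.array([0.1, 0.0, -0.2, 0.05])  # one bias per unit
c = np.tanh(w * x + b)                # c_i = f_i(w_i * x_i + b_i), f = tan-sigmoid
print(c)
```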
“…( 9) by taking the partial derivative to update the weights as in Equation (5). When the least achievable error is reached, the weights are adopted as parameters for the next units, and so on [11,16]. The number of hidden nodes is not restricted here and depends on the quality of processing required.…”
Section: Recurrent Neural Network (RNN) (mentioning)
confidence: 99%
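The weight-update step described in this snippet (a partial derivative of the error with respect to each weight, applied until the least achievable error is reached) can be sketched for a single linear unit as follows. This is generic gradient descent, not the cited paper's Equations (9) and (5), which are not reproduced here; the inputs, target, and learning rate are made up:

```python
import numpy as np

# Gradient-descent update w <- w - lr * dE/dw for one linear unit,
# with squared error E = (w.x - y)^2. All values are hypothetical.
x = np.array([0.5, -0.2, 0.1, 0.3])  # fixed inputs
y = 0.7                              # target output
w = np.zeros(4)                      # initial weights
lr = 0.1                             # learning rate
for _ in range(200):
    err = w @ x - y                  # prediction error
    w -= lr * (2 * err * x)          # dE/dw = 2 * err * x
print(round(float(w @ x), 4), "~", y)  # prediction approaches the target
```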
“…The information is presented as activation values, where each node is assigned a number, such that the higher the number, the higher the activation. In the case of a feedforward network, the information is then transmitted forward throughout the network [11,12]. The activation value is transmitted from node to node but weighted in a certain manner.…”
(mentioning)
confidence: 99%
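As a sketch of the forward transmission described above, where each node's activation value is a weighted combination of the previous layer's activations passed through an activation function; tanh and the layer shapes here are arbitrary assumptions:

```python
import numpy as np

def forward(x, layers, f=np.tanh):
    # Propagate activation values layer by layer: weighted sum, then activation.
    a = x
    for W, b in layers:
        a = f(a @ W + b)  # weighted, activated, then transmitted forward
    return a

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), np.zeros(3)),   # 4 inputs -> 3 hidden nodes
          (rng.standard_normal((3, 1)), np.zeros(1))]   # 3 hidden -> 1 output
print(forward(rng.standard_normal(4), layers))
```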