2015
DOI: 10.1155/2015/818243

On Training Efficiency and Computational Costs of a Feed Forward Neural Network: A Review

Abstract: The problem of choosing a suitable activation function for the hidden layer of a feed forward neural network is comprehensively reviewed. Since the nonlinear component of a neural network is the main contributor to the network mapping capabilities, the different choices that may lead to enhanced performances, in terms of training, generalization, or computational costs, are analyzed, both in general-purpose and in embedded computing environments. Finally, a strategy to convert a netw…
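The paper's own conversion procedure is not reproduced here; as a minimal sketch of the general idea, assuming a single-hidden-layer network with logistic-sigmoid hidden units and a linear output layer, the identity sigmoid(z) = (1 + tanh(z/2)) / 2 lets the same input-output mapping be expressed with tanh hidden units purely by rescaling weights and biases:

```python
import numpy as np

def forward_sigmoid(x, W1, b1, W2, b2):
    """Single hidden layer with logistic-sigmoid units, linear output."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))
    return W2 @ h + b2

def forward_tanh(x, W1, b1, W2, b2):
    """Same architecture, but with tanh hidden units."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

def sigmoid_to_tanh(W1, b1, W2, b2):
    """Rescale parameters so the tanh network computes the identical mapping,
    using sigmoid(z) = (1 + tanh(z/2)) / 2."""
    return W1 / 2, b1 / 2, W2 / 2, b2 + 0.5 * W2.sum(axis=1)

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((5, 3)), rng.standard_normal(5)
W2, b2 = rng.standard_normal((2, 5)), rng.standard_normal(2)
x = rng.standard_normal(3)

y_sig = forward_sigmoid(x, W1, b1, W2, b2)
y_tanh = forward_tanh(x, *sigmoid_to_tanh(W1, b1, W2, b2))
assert np.allclose(y_sig, y_tanh)  # same mapping, different activation
```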

Cited by 68 publications (35 citation statements) · References 61 publications
“…Filliatre and Racca [20] have studied the PP for speech synthesis. Many such works have been presented in [21][22][23][24][25]. Hu et al. [21] used two distributions, namely Cauchy and Laplace, and one error function, the Gaussian, to generate novel activation functions.…”
Section: Related Work
confidence: 99%
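The exact formulations used by Hu et al. [21] are not reproduced here; as a hedged sketch, assuming the novel activations are simply the Cauchy and Laplace cumulative distribution functions and the Gaussian error function (all bounded, sigmoid-shaped maps), they could be written as:

```python
import numpy as np
from scipy.special import erf

def cauchy_cdf(z):
    # CDF of the standard Cauchy distribution: squashes R into (0, 1).
    return np.arctan(z) / np.pi + 0.5

def laplace_cdf(z):
    # CDF of the standard Laplace distribution: also a (0, 1) squashing map.
    return np.where(z < 0, 0.5 * np.exp(z), 1.0 - 0.5 * np.exp(-z))

def gauss_erf(z):
    # Gaussian error function: antisymmetric, saturates in (-1, 1) like tanh.
    return erf(z)

z = np.linspace(-4, 4, 9)
for f in (cauchy_cdf, laplace_cdf, gauss_erf):
    print(f.__name__, np.round(f(z), 3))
```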
“…Neural networks usually utilise the same, non-linear fixed activation function for all neurons, but the activation function has a significant influence on the learning performance, topology and fitness [4,11,13]. Mayer and Schwaiger evolve the activation function of generalized multi-layer perceptrons [16] within the netGEN framework, which uses a genetic algorithm to evolve neural network topologies [9].…”
Section: Evolving Activation Functions
confidence: 99%
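The netGEN framework itself is not reproduced here; as a toy, hedged illustration of the general idea of evolving per-neuron activation choices, the sketch below assumes fixed random input weights, output weights fitted by least squares, and a mutation-only genetic search over which activation each hidden unit uses (all names and settings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate activation functions, indexed by gene value.
ACTS = [np.tanh,
        lambda z: np.maximum(z, 0.0),          # ReLU
        lambda z: 1.0 / (1.0 + np.exp(-z))]    # logistic sigmoid

# Toy 1-D regression task.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(2 * X) + 0.1 * rng.standard_normal(X.shape)

N_HIDDEN = 12
W_in = rng.standard_normal((1, N_HIDDEN))      # fixed random input weights
b_in = rng.standard_normal(N_HIDDEN)

def fitness(genome):
    """Lower is better: training MSE with output weights fit by least squares."""
    Z = X @ W_in + b_in
    H = np.column_stack([ACTS[g](Z[:, j]) for j, g in enumerate(genome)])
    w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
    return float(np.mean((H @ w_out - y) ** 2))

def mutate(genome, rate=0.2):
    """Randomly reassign the activation gene of some hidden units."""
    g = genome.copy()
    mask = rng.random(g.size) < rate
    g[mask] = rng.integers(0, len(ACTS), mask.sum())
    return g

# Simple (mu + lambda)-style evolution over activation assignments.
pop = [rng.integers(0, len(ACTS), N_HIDDEN) for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness)
    parents = pop[:5]
    pop = parents + [mutate(parents[rng.integers(len(parents))]) for _ in range(15)]

best = min(pop, key=fitness)
print("best per-neuron activation genes:", best, "MSE:", round(fitness(best), 4))
```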
“…[65] proposes a new MBP-centred discriminative feature learning approach that aims at learning a low-dimensional feature representation which maximizes the global margin of the data while keeping samples from the same class as close as possible. [66] also recently presents a performance evaluation of loss functions for speech recognition, implementing an MBP to enhance the generality of the acoustic model. A more recent work [67] looked at training efficiency and computational cost, both in general-purpose and in embedded computing environments, and also presented a strategy to convert a network configuration between different activation functions in an FFNN without altering the network mapping capabilities. These works serve as a stimulus for this paper.…”
Section: Figure 7 Topological Structure of DPFNN
confidence: 99%