A New Cascade-Correlation Growing Deep Learning Neural Network Algorithm

2021. DOI: 10.3390/a14050158
Abstract: In this paper, an algorithm that dynamically changes the neural network structure is presented. The structure is changed based on features of the cascade-correlation algorithm. Cascade correlation is an important architecture and supervised learning algorithm used to solve practical problems with artificial neural networks. This process optimizes the architecture of the network, which is intended to accelerate the learning process and produce better performance in generalization…
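As a concrete illustration of the growing idea the abstract describes, here is a minimal sketch assuming a regression setting and Fahlman-style candidate training; none of the names (train_candidate, grow_network) or hyperparameters come from the paper:

```python
# Minimal sketch (not the paper's implementation) of a cascade-correlation-style
# growing loop: each candidate hidden unit is trained to correlate with the
# current residual error, then its incoming weights are frozen and only the
# output layer is retrained.
import numpy as np

rng = np.random.default_rng(0)

def train_candidate(H, residual, n_steps=200, lr=0.1):
    """Train one candidate unit to maximize |correlation| with the residual error."""
    w = rng.normal(scale=0.1, size=H.shape[1])
    for _ in range(n_steps):
        v = np.tanh(H @ w)                           # candidate activation
        e = residual - residual.mean()
        sign = np.sign((v - v.mean()) @ e)           # direction of the correlation
        grad = (e * (1.0 - v**2)) @ H / len(e)       # gradient step (tanh' = 1 - v^2)
        w += lr * sign * grad
    return w                                         # frozen after this point

def grow_network(X, y, n_units=5):
    H = X.copy()                                     # inputs + installed hidden units
    out_w = np.linalg.lstsq(H, y, rcond=None)[0]     # train the output layer only
    for _ in range(n_units):
        residual = y - H @ out_w
        w = train_candidate(H, residual)             # candidate sees all earlier units
        H = np.hstack([H, np.tanh(H @ w)[:, None]])  # install the frozen unit
        out_w = np.linalg.lstsq(H, y, rcond=None)[0] # retrain the output layer
    return H, out_w

# Toy usage: fit a noisy 1-D target.
X = np.hstack([np.linspace(-1, 1, 100)[:, None], np.ones((100, 1))])  # input + bias
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=100)
H, out_w = grow_network(X, y)
print("final MSE:", np.mean((y - H @ out_w) ** 2))
```

Because each installed unit receives connections from the inputs and from all previously installed units, the grown structure becomes deeper, not merely wider.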


Cited by 19 publications (13 citation statements). References 29 publications.
“…) is a method proposed by [1] that uses the growing phase of the "Growing Pruning Deep Neural Network Algorithm" proposed by [6], but trains the newest hidden unit (a candidate unit) while it is connected to the existing model, after which the weights of the hidden unit's inputs are frozen [1].…”
Section: Cascade-Correlation Growing Deep Learning Neural Network (Cc...)
Mentioning; confidence: 99%
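The freeze step this statement describes (train the newest candidate unit while it is connected to the existing model, then stop updating its incoming weights) can be sketched with a standard framework mechanism; this PyTorch fragment illustrates the general technique, not the cited implementation:

```python
# Hypothetical sketch of the freeze step: once a candidate unit is installed,
# its incoming weights receive no further gradient updates, and only newer
# parameters (here, the output layer) keep training.
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden = nn.Linear(10, 1)          # newly installed hidden unit
output = nn.Linear(11, 1)          # output layer sees the 10 inputs + the new unit

# Freeze the hidden unit's incoming weights after its candidate-training phase.
for p in hidden.parameters():
    p.requires_grad = False

# Only the still-trainable output parameters go to the optimizer.
opt = torch.optim.SGD(output.parameters(), lr=0.01)

x, y = torch.randn(32, 10), torch.randn(32, 1)
h = torch.tanh(hidden(x))          # frozen feature: no gradient flows into `hidden`
loss = ((output(torch.cat([x, h], dim=1)) - y) ** 2).mean()
loss.backward()
opt.step()                         # updates the output layer only
```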
“…The cost function of the second layer uses as its input vector the output of the previous layer, which is the activated vector a^(1).…”
Section: Figure: Architecture of a Neural Network with One Hidden Neuron
Mentioning; confidence: 99%
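A minimal sketch of the composition this statement describes, in assumed notation (σ for the activation, ℓ for the per-example loss, w^(2) and b^(2) for the second layer's parameters; none of these symbols appear in the excerpt):

\[
z^{(2)} = w^{(2)\top} a^{(1)} + b^{(2)}, \qquad
a^{(2)} = \sigma\bigl(z^{(2)}\bigr), \qquad
L = \ell\bigl(a^{(2)}, y\bigr)
\]

so the second layer's cost sees the network input only through the activated vector a^(1).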
“…In fact, the first assumption remains valid because a^(1) ∈ [0,1], and the gradient of the cost function ∂L/∂w can be calculated with the same formulae as before.…”
Section: Figure: Architecture of a Neural Network with One Hidden Neuron
Mentioning; confidence: 99%
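Under the same assumed notation, the gradient the statement refers to follows from the chain rule, with a^(1) playing the role the raw input played for the first layer:

\[
\frac{\partial L}{\partial w^{(2)}}
= \frac{\partial L}{\partial a^{(2)}}\,\sigma'\bigl(z^{(2)}\bigr)\,a^{(1)}
\]

Because a^(1) ∈ [0,1] is bounded like a sigmoid-transformed input, the same update formulae apply unchanged.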