Abstract: In this paper, an algorithm that dynamically changes the structure of a neural network is presented. The structure is changed based on features of the cascade correlation algorithm, an established architecture and supervised learning algorithm through which artificial neural networks solve practical problems. This process optimizes the architecture of the network, with the aim of accelerating the learning process and producing better generalization performance.
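For background, the growing phase in classical cascade correlation (Fahlman and Lebiere) trains each candidate hidden unit to maximize the magnitude of the covariance between its output $V$ and the network's residual error $E_o$ across training patterns $p$; this standard score, given here as reference material rather than quoted from the paper, is

$$S = \sum_{o} \left| \sum_{p} \big(V_p - \bar{V}\big)\big(E_{p,o} - \bar{E}_o\big) \right|,$$

where $\bar{V}$ and $\bar{E}_o$ are the values averaged over all patterns and $o$ ranges over the output units.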
“…) is a method proposed by [1], which uses the growing phase of the "Growing Pruning Deep Neural Network Algorithm" proposed by [6], but trains the newest hidden unit (a candidate unit) before connecting it to the existing model; after that, the input-side weights of the hidden unit are frozen [1].…”
Section: Cascade-Correlation Growing Deep Learning Neural Network (CCG-DLNN) [mentioning]
The Straight Forward Constructive Deep Learning Neural Network (SFC-DLNN) algorithm is a new architecture and supervised learning algorithm for artificial neural networks. Instead of merely adjusting the weights in a network of fixed topology, SFC-DLNN begins with a minimal network (a perceptron), then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, the input-side weights of the new architecture are generated, and the unit becomes a permanent feature detector, available for producing outputs or for building further, more complex feature detectors; a richer feature space is thereby created in which the data are more likely to be linearly separable. The SFC-DLNN architecture has several advantages over existing algorithms: it learns quickly, the network determines its own size and topology, and it retains the structures it has built even if the training set changes. Our SFC-DLNN model achieves an accuracy and specificity of 83.5% on a data set simulated from the uniform distribution. This is not the best possible result, but it is sufficient to confirm the model's predictive capacity.
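The grow-and-freeze loop described above can be sketched in a few lines of Python. The helper names and the residual-driven candidate training below are simplified stand-ins, not the paper's actual procedure, which is not reproduced here:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(F, y, epochs=500, lr=0.5):
    # Plain gradient descent for a logistic unit reading features F.
    w, b = np.zeros(F.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(F @ w + b)
        g = p - y                                  # dL/dz for sigmoid + cross-entropy
        w -= lr * F.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def grow_network(X, y, n_units=5):
    # Grow-and-freeze loop: start from a perceptron, add hidden units one by one.
    F = X.copy()                                   # features: inputs + frozen hidden units
    frozen = []                                    # (w, b) of each permanently frozen unit
    w_out, b_out = train_logistic(F, y)            # minimal starting network
    for _ in range(n_units):
        residual = y - sigmoid(F @ w_out + b_out)
        target = (residual > 0).astype(float)      # crude residual target (a simplification)
        w_h, b_h = train_logistic(F, target)       # train the candidate unit
        frozen.append((w_h, b_h))                  # freeze its input-side weights
        F = np.column_stack([F, sigmoid(F @ w_h + b_h)])
        w_out, b_out = train_logistic(F, y)        # only the output layer is retrained
    return frozen, w_out, b_out

# Toy usage: a simulated data set drawn from the uniform distribution.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)          # XOR-like, not linearly separable
frozen, w_out, b_out = grow_network(X, y)

Each appended column of F is the output of a frozen unit, so later units and the output layer automatically see every feature detector built so far.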
“…The cost function of the second layer uses as its input vector the output of the previous layer, namely the activated vector $a^{(1)}$.…”
Section: Figure: Architecture of a Neural Network with One Hidden Neuron [mentioning, confidence: 99%]
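Concretely, writing $\sigma$ for the activation and assuming a sigmoid output with cross-entropy loss (a standard pairing; the snippet itself does not fix the loss), the second layer's cost over $m$ examples is

$$z^{(2)} = W^{(2)} a^{(1)} + b^{(2)}, \qquad a^{(2)} = \sigma\big(z^{(2)}\big), \qquad L = -\frac{1}{m}\sum_{i=1}^{m}\Big[y_i \log a_i^{(2)} + (1-y_i)\log\big(1-a_i^{(2)}\big)\Big],$$

which is the first-layer cost with the raw input $x$ replaced by the activation $a^{(1)}$.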
“…In fact, the first assumption remains valid because $a^{(1)} \in [0,1]$, and the gradient of the cost function $\partial L/\partial w$ can be calculated with the same formulae as before.…”
Section: Figure: Architecture of a Neural Network with One Hidden Neuron [mentioning, confidence: 99%]
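To spell out why the same formulae carry over (under the sigmoid/cross-entropy pairing assumed above), the chain rule for an output weight $w_j^{(2)}$ gives

$$\frac{\partial L}{\partial w_j^{(2)}} = \big(a^{(2)} - y\big)\, a_j^{(1)},$$

which is exactly the single-layer gradient with the input $x_j$ replaced by $a_j^{(1)} \in [0,1]$.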
“…Irregular initialization affects convergence: it can slow the process down or even stall it completely. Network initialization is therefore one of the major problems faced by DNNs [1][2][3][4]. A further problem is the difficulty of discovering a suitable network architecture that yields good accuracy and generalizes well.…”
Section: Introduction [mentioning, confidence: 99%]
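As an aside on how this initialization problem is commonly mitigated, the sketch below shows two standard schemes, Glorot/Xavier and He initialization; these are generic remedies, not the scheme used by the cited works:

import numpy as np

rng = np.random.default_rng(42)

def glorot_uniform(fan_in, fan_out):
    # Glorot/Xavier: keeps activation variance stable for tanh/sigmoid layers.
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_out, fan_in))

def he_normal(fan_in, fan_out):
    # He: scales variance by fan-in; the usual choice for ReLU layers.
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))

W1 = glorot_uniform(fan_in=64, fan_out=32)   # hypothetical first hidden layer
W2 = he_normal(fan_in=32, fan_out=10)        # hypothetical ReLU layer above it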
Integrating network expansion planning into electric vehicle (EV) smart charging involves designing scalable infrastructure that accommodates the growing demand for electric mobility while respecting grid capacity and energy distribution efficiency. This paper proposes a hybrid approach for EV smart charging with network expansion planning: the joint execution of the coati optimization algorithm (COA) and the Cascade-Correlation Growing Deep Learning Neural Network, referred to as the COA-CCG-DLNN technique. The objective of the proposed method is to minimize the cost of charging EVs while forecasting the optimal charging strategy. EV charging with network expansion is based on the vehicle-to-building (V2B), vehicle-to-grid (V2G), and grid-to-vehicle (G2V) modes. The COA approach is used to minimize the cost of EV charging, and the CCG-DLNN approach is used to predict the optimal solution for the system. The proposed method is implemented on the MATLAB platform and compared with existing techniques such as particle swarm optimization (PSO), heap-based optimization (HBO), and wild horse optimization (WHO). It achieves a low cost of $1.33 and a high accuracy of 99.5% compared with the existing techniques. Its performance metrics (best 2996.348, mean 3000.100, worst 3001.261, standard deviation 1.160348, median 2998.816) all outperform the other methods.
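The abstract does not reproduce COA's update rules, so the following is only a generic population-based minimizer applied to a toy charging-cost function; the cost model, penalty weight, and tariff values are hypothetical stand-ins for illustration:

import numpy as np

rng = np.random.default_rng(7)

def charging_cost(schedule, price, demand):
    # Toy cost: tariff times per-slot charge, plus a penalty for unmet demand.
    energy = np.clip(schedule, 0.0, 1.0)           # normalized kWh per time slot
    unmet = max(demand - energy.sum(), 0.0)
    return float(price @ energy + 10.0 * unmet)

def minimize(cost, dim, n_agents=30, iters=200):
    # Generic population search: random perturbation plus a pull toward the best agent.
    # This stands in for COA, whose coati-inspired rules the abstract does not give.
    pop = rng.uniform(0.0, 1.0, size=(n_agents, dim))
    fit = np.array([cost(p) for p in pop])
    for _ in range(iters):
        best = pop[fit.argmin()]
        cand = np.clip(pop + rng.normal(0.0, 0.1, pop.shape) + 0.5 * (best - pop), 0.0, 1.0)
        cand_fit = np.array([cost(c) for c in cand])
        better = cand_fit < fit
        pop[better], fit[better] = cand[better], cand_fit[better]
    return pop[fit.argmin()], fit.min()

price = rng.uniform(0.1, 0.5, size=24)             # hypothetical hourly tariff
best, best_cost = minimize(lambda s: charging_cost(s, price, demand=8.0), dim=24)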