2019
DOI: 10.1016/j.knosys.2018.10.025

Parallel computing method of deep belief networks and its application to traffic flow prediction

Cited by 97 publications (20 citation statements) | References 25 publications

“…Recently, parallel computing has been applied in the field of deep learning. Zhao et al. [35] proposed a parallel computing method of deep belief networks for traffic flow prediction. The data features were learned by multiple computing nodes in a master-slave parallel computing structure.…”
Section: Related Work
confidence: 99%
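
The master-slave structure this excerpt describes can be illustrated with a minimal sketch: a master node broadcasts the current parameters, each slave node updates a local copy on its own data shard, and the master averages the copies back together. Everything below (the toy least-squares learner, the names train_local and master_round, the shard count) is a hypothetical stand-in, not the implementation from [35], which targets DBN pre-training and fine-tuning.

import numpy as np

# Minimal sketch of master-slave parameter averaging (hypothetical,
# not the implementation of [35]). Slaves would run in parallel;
# here they are simulated sequentially.

def train_local(shard, w, lr=0.01):
    # Slave node: one gradient step of a linear least-squares learner on its shard.
    x, y = shard
    grad = x.T @ (x @ w - y) / len(x)
    return w - lr * grad

def master_round(shards, w):
    # Master node: broadcast w, collect locally updated copies, average them.
    local_ws = [train_local(s, w) for s in shards]  # would run on slave nodes
    return np.mean(local_ws, axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(400, 8))
y = x @ rng.normal(size=(8, 1))
shards = [(x[i::4], y[i::4]) for i in range(4)]     # four slave nodes

w = np.zeros((8, 1))
for _ in range(200):
    w = master_round(shards, w)

Averaging after every round keeps the slave copies synchronized, which is the essential property of the master-slave scheme the excerpt refers to.
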
“…It should be pointed out that local minima pose a serious threat to learning ability, causing not only low efficiency but also low accuracy in nonlinear system modeling. Zhao et al. proposed a master-slave parallel computing method for the DBN learning process to reduce the time consumption of pre-training and fine-tuning [17]. Ahn et al. proposed a virtual shared-memory framework for the DNN learning process, called Soft Memory Box (SMB), which can share the memory of a remote node among distributed processes across nodes, thereby improving communication efficiency through parameter sharing [18].…”
Section: Proposed a Novel Learning Algorithm for Dynamic Fuzzy Neural
confidence: 99%
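
The parameter-sharing idea behind SMB can be sketched with Python's standard multiprocessing.shared_memory module: workers read and write one shared parameter buffer instead of exchanging messages. This is only a loose, single-machine analogy; SMB itself is a virtual shared-memory framework spanning remote nodes, and every name below is hypothetical.

import numpy as np
from multiprocessing import shared_memory

# Loose analogy to SMB-style parameter sharing through a common buffer
# (hypothetical; SMB shares the memory of remote nodes, not a local segment).

shape, dtype = (8,), np.float64
shm = shared_memory.SharedMemory(create=True, size=np.zeros(shape, dtype).nbytes)
params = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
params[:] = 0.0

def worker_update(delta):
    # A separate worker process would attach by name
    # (shared_memory.SharedMemory(name=shm.name)) and update in place,
    # with no explicit send/receive step.
    view = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    view += delta

worker_update(np.full(shape, 0.1))
worker_update(np.full(shape, 0.2))
print(params)   # both updates are visible through the shared buffer

shm.close()
shm.unlink()
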
“…Furthermore, CNNs have a special weight-sharing property, so deeper networks can be formulated for better performance on complex visual tasks. Weight sharing decreases the quantity of adjustable parameters, thus increasing the training speed and reducing the threat of overfitting [10]. The model is a feed-forward neural network consisting of three kinds of layers: an input layer, hidden layers, and an output layer.…”
Section: Convolutional Neural Network
confidence: 99%
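
Why weight sharing reduces the number of adjustable parameters is easy to quantify: a convolutional filter is reused at every spatial position, so its size does not grow with the input. A minimal count comparison, with illustrative sizes chosen here:

# Parameter counts for a 32x32 single-channel input (illustrative sizes).
h = w = 32
fc_params = (h * w) * (h * w)   # dense layer mapping the image to a same-size map
conv_params = 3 * 3             # one 3x3 filter, shared across all positions

print(fc_params)    # 1048576 adjustable weights
print(conv_params)  # 9 adjustable weights
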
“…A neuron computes a sum of its weighted inputs plus an offset input of 1. The output [10] is the result of the multiply-and-sum operation between the weights and the inputs. After this sum, the bias is added and the result is passed through the non-linear transformation.…”
Section: Convolutional Neural Network
confidence: 99%
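
The computation this excerpt describes is the standard artificial neuron: a multiply-and-sum over weights and inputs, plus a bias, followed by a non-linear transformation. A minimal sketch follows; the ReLU non-linearity and all numeric values are assumptions for illustration.

import numpy as np

def neuron(x, w, b):
    # Multiply-and-sum between weights and inputs, then add the bias,
    # then apply the non-linear transformation (ReLU assumed here).
    z = np.dot(w, x) + b
    return max(z, 0.0)

x = np.array([0.5, -1.0, 2.0])   # inputs
w = np.array([0.2, 0.4, 0.1])    # weights
print(neuron(x, w, b=0.3))       # 0.1 - 0.4 + 0.2 + 0.3 ≈ 0.2, ReLU keeps it
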