2020
DOI: 10.1109/tnnls.2019.2952864

An Adaptive Deep Belief Network With Sparse Restricted Boltzmann Machines



Cited by 52 publications (19 citation statements). References 41 publications.
“…This network is stacked from RBMs, which have powerful feature extraction capabilities. The restricted Boltzmann machine [28], [29] is a Markov random field model with a two-layer structure, as shown in Figure 1. The lower layer is the input layer; it contains the input units that represent the input data, and each input unit carries a real-valued bias. The upper layer is the hidden layer; it contains the hidden units h, which represent the abstract features the RBM extracts from the input data, and each hidden unit carries a real-valued bias.…”
Section: A. Deep Belief Network
confidence: 99%
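
The two-layer structure described in the quoted passage can be made concrete with a short sketch. The following is a minimal NumPy illustration of a standard binary-visible RBM, not code from the cited paper; the layer sizes, variable names (W, a, b), and helper functions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 4                        # illustrative layer sizes
W = rng.normal(0.0, 0.01, (n_visible, n_hidden))  # visible-hidden weights
a = np.zeros(n_visible)                           # real-valued biases of the input units
b = np.zeros(n_hidden)                            # real-valued biases of the hidden units

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def energy(v, h):
    # Standard RBM energy: E(v, h) = -a'v - b'h - v'Wh
    return -(a @ v) - (b @ h) - (v @ W @ h)

def hidden_given_visible(v):
    # p(h_j = 1 | v): the hidden units encode abstract features of the input
    return sigmoid(b + v @ W)

v = rng.integers(0, 2, n_visible).astype(float)   # a binary input vector
print(hidden_given_visible(v))
```

Because there are no connections within a layer, the hidden units are conditionally independent given the visible layer, which is why the per-unit sigmoid above suffices.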
“…In order to highlight the powerful nonlinear fitting ability, and since the amount of source training data is large relative to the amount of task data while the amount of data to be computed is small, we test whether the amount of task data affects the migration result when source training data is migrated using the maximum mean discrepancy (MMD) contribution-coefficient method. Ten auxiliary sample batches are introduced: the data of August 22, August 22–23, August 22–24, ..., August 22–31 are taken as the 10 sample sets, with 96, 192, ..., 960 samples in sequence. The data under each auxiliary sample is then taken as the target data, data close to the target distribution is migrated from the source data, and the MMD values between each auxiliary sample, the source data, and the migrated data are calculated. The TDBN-DNN model obtained after fine-tuning the network with each auxiliary sample's migrated data then computes the target task data; the Gaussian kernel width control parameter is set to 2.…”
Section: Comparison of DBN-DNN and TDBN-DNN Algorithms
confidence: 99%
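
The selection criterion in the quoted passage, the maximum mean discrepancy, can be sketched as follows. This is a generic biased MMD estimator with a Gaussian kernel, not the paper's implementation; reading the "kernel width control parameter = 2" as the bandwidth sigma in exp(-||x - y||^2 / (2 sigma^2)) is an assumption, as are the function names and the toy data.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=2.0):
    # k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(X, Y, sigma=2.0):
    # Biased estimator: mean k(X,X) + mean k(Y,Y) - 2 * mean k(X,Y)
    return (gaussian_kernel(X, X, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean()
            - 2.0 * gaussian_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, (96, 8))   # 96 samples, matching one day's batch size
target = rng.normal(0.5, 1.0, (96, 8))
print(mmd2(source, target))              # larger value = distributions differ more
```

A small MMD between the migrated source data and the target data indicates the two distributions are close, which is the basis for selecting which source samples to migrate before fine-tuning.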
“…IDBN-WSVM consists of a series of stacked RBMs [28] and a WSVM layer (Figure 2). The training process of IDBN-WSVM comprises two steps.…”
Section: Proposed Models and Processes, A. Construction and Training
confidence: 99%
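
The two-step scheme described above can be sketched as follows: (1) greedily pretrain a stack of RBMs as unsupervised feature extractors, then (2) train a weighted SVM on the top-level features. scikit-learn's BernoulliRBM and SVC merely stand in for the paper's IDBN and WSVM components; the layer sizes, toy data, and uniform sample weights are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 32))                  # toy data scaled to [0, 1]
y = rng.integers(0, 2, 200)

# Step 1: layer-wise unsupervised pretraining of the stacked RBMs.
features = X
rbms = []
for n_hidden in (24, 16):                  # two stacked RBM layers (assumed sizes)
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                       n_iter=10, random_state=0)
    features = rbm.fit_transform(features)  # each layer's output feeds the next
    rbms.append(rbm)

# Step 2: supervised training of the weighted SVM on the learned features.
# Uniform weights here; a WSVM would assign per-sample weights instead.
svm = SVC(kernel="rbf")
svm.fit(features, y, sample_weight=np.ones(len(y)))
print(svm.score(features, y))
```

Keeping the two steps separate mirrors the usual DBN recipe: the unsupervised stack learns a compact representation first, and only the final classifier sees the labels.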
“…Robustness and adaptability are important indicators for measuring the pros and cons of algorithms [11], [12]. After a decade of research, existing trajectory-based map inference algorithms can construct relatively complete road networks.…”
Section: Introduction
confidence: 99%