2018
DOI: 10.1109/access.2018.2871349
Domain Specific Learning for Sentiment Classification and Activity Recognition


Cited by 8 publications (3 citation statements)
References 20 publications
“…Ma M et al. [33] proposed a twin-stream network architecture and jointly fine-tuned the two networks to recognize objects, actions, and activities. Wang H B et al. [52] verified that fine-tuning with regularization constraints can increase training efficiency, and that fine-tuning is effective in practical HAR applications.…”
Section: Transfer Learning and Domain Adaptation
confidence: 96%
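The statement above refers to fine-tuning a pretrained network under regularization constraints for HAR. Below is a minimal sketch of one common way such a constraint can be realized, an L2 penalty that keeps the fine-tuned weights close to their pretrained values; the network shape, loss, and penalty weight are illustrative assumptions, not the setup of the cited papers.

```python
import torch
import torch.nn as nn

# hypothetical small classifier standing in for a pretrained HAR network
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 6))
pretrained = {name: p.detach().clone() for name, p in model.named_parameters()}

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def finetune_step(x, y, reg_weight=1e-2):
    """One fine-tuning step with a regularization constraint that keeps the
    weights close to their pretrained values (an L2-SP-style penalty)."""
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    reg = sum(((p - pretrained[name]) ** 2).sum()
              for name, p in model.named_parameters())
    (loss + reg_weight * reg).backward()
    optimizer.step()
    return loss.item()

# hypothetical target-domain batch: 16 samples, 64 features, 6 activity classes
x = torch.randn(16, 64)
y = torch.randint(0, 6, (16,))
finetune_step(x, y)
```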
“…This machine is built by stacking RBMs, which have powerful feature extraction capabilities. The restricted Boltzmann machine [28][29] is a Markov random field model with a two-layer structure, as shown in Figure 1. The lower layer is the input (visible) layer, containing input units that represent the input data, each with a real-valued bias; the upper layer is the hidden layer, containing hidden units h that represent the abstract features the RBM extracts from the input data, each with a real-valued bias.…”
Section: A Deep Belief Network
confidence: 99%
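A minimal sketch of the two-layer RBM structure the statement describes: visible units with real-valued biases, hidden units with real-valued biases, and a weight matrix between them. The layer sizes and the sigmoid-based conditional activations are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Restricted Boltzmann machine: visible layer v, hidden layer h,
    weight matrix W, visible biases b, hidden biases c."""
    def __init__(self, n_visible, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # real-valued bias of each input unit
        self.c = np.zeros(n_hidden)    # real-valued bias of each hidden unit

    def hidden_probs(self, v):
        # p(h_j = 1 | v): abstract features extracted from the input
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):
        # p(v_i = 1 | h): reconstruction of the input
        return sigmoid(h @ self.W.T + self.b)

# stacking RBMs layer by layer yields a deep belief network
rbm = RBM(n_visible=96, n_hidden=32)
v = np.random.rand(5, 96)           # a batch of 5 input vectors
features = rbm.hidden_probs(v)      # hidden-layer activations as features
```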
“…In order to highlight the powerful non-linear fitting ability, and since the amount of source training data is large relative to the amount of task data while the amount of data to be calculated is small, auxiliary sample data are introduced to test whether the amount of task data affects the transfer result when the source training data are transferred with the maximum mean discrepancy contribution coefficient method. Ten auxiliary sample batches are set, taking the data of August 22, August 22–23, August 22–24, ..., August 22–31 as the 10 sample sets, with 96, 192, ..., 960 samples in sequence. The data under each auxiliary sample are then taken as the target data, data close to the target distribution are transferred from the source data, and the MMD value between each auxiliary sample, the source data, and the transferred data is calculated. The TDBN-DNN model obtained by fine-tuning the network with the transferred data of each auxiliary sample is used to compute the target task data, with the Gaussian kernel width control parameter set to 2.…”
Section: Comparison Of DBN-DNN and TDBN-DNN Algorithms
confidence: 99%
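A sketch of the Gaussian-kernel MMD computation the statement relies on, comparing a source batch with an auxiliary target sample. The function names, sample shapes, and the mapping of the "kernel width control parameter = 2" to the sigma argument are assumptions for illustration, not the cited paper's implementation.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=2.0):
    # pairwise Gaussian (RBF) kernel: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(source, target, sigma=2.0):
    """Squared maximum mean discrepancy between two sample sets."""
    k_ss = gaussian_kernel(source, source, sigma)
    k_tt = gaussian_kernel(target, target, sigma)
    k_st = gaussian_kernel(source, target, sigma)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

# e.g. compare a source batch with one auxiliary sample (one day, 96 points)
source = np.random.randn(192, 10)   # hypothetical source-domain batch
target = np.random.randn(96, 10)    # hypothetical auxiliary target sample
print(mmd2(source, target, sigma=2.0))
```

A smaller MMD value indicates the transferred source data are distributed more closely to the target data, which is how the statement compares each auxiliary sample against the source and transferred data.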