2019
DOI: 10.1007/978-3-030-22354-0_45

Video Classification Using Deep Autoencoder Network

Abstract: We present a deep learning framework for video classification applicable to face recognition and dynamic texture recognition. A Deep Autoencoder Network Template (DANT) is designed whose weights are initialized by unsupervised, layer-wise pre-training using Gaussian Restricted Boltzmann Machines. To obtain a class-specific network and fine-tune the weights for each class, the pre-initialized DANT is trained separately on each class of video sequences. A majority voting techniq…
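The abstract describes a one-network-per-class scheme: each class gets its own fine-tuned autoencoder, and a video is labeled by majority vote over its frames. A minimal sketch of that classification stage follows, assuming frames are assigned to the class whose network reconstructs them with lowest error. The tiny linear autoencoder stands in for the paper's deep RBM-pretrained DANT; all names, dimensions, and the gradient-descent training loop are illustrative, not the authors' implementation.

```python
# Sketch: per-class autoencoders + majority voting over video frames.
# The linear autoencoder is a stand-in for the paper's deep RBM-pretrained
# network (DANT); hyperparameters here are illustrative only.
import numpy as np

class LinearAutoencoder:
    def __init__(self, dim_in, dim_hidden, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W_enc = rng.normal(0, 0.1, (dim_in, dim_hidden))
        self.W_dec = rng.normal(0, 0.1, (dim_hidden, dim_in))
        self.lr = lr

    def fit(self, X, epochs=200):
        # Plain gradient descent on the mean squared reconstruction error.
        for _ in range(epochs):
            H = X @ self.W_enc              # encode frames
            E = H @ self.W_dec - X          # reconstruction residual
            grad_dec = H.T @ E / len(X)
            grad_enc = X.T @ (E @ self.W_dec.T) / len(X)
            self.W_dec -= self.lr * grad_dec
            self.W_enc -= self.lr * grad_enc

    def recon_error(self, X):
        # Per-frame mean squared reconstruction error.
        R = (X @ self.W_enc) @ self.W_dec
        return np.mean((R - X) ** 2, axis=1)

def classify_video(frames, class_models):
    # Each frame votes for the class whose autoencoder reconstructs it best;
    # the majority over all frames decides the video label.
    errors = np.stack([m.recon_error(frames) for m in class_models])
    per_frame = np.argmin(errors, axis=0)
    return int(np.bincount(per_frame).argmax())
```

Reconstruction error works as a class score here because an autoencoder trained only on one class's data reconstructs samples from that class far better than samples from others.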

Cited by 2 publications (1 citation statement). References 33 publications.
“…The wireless sensor network expands people's ability to collect data, connects the physical data of the objective world to the transmission network, and provides humans with the most direct, efficient, and reliable data on the next-generation network. This section is mainly a basic introduction to wireless sensor networks, covering their architecture, key features, design, and applications [15]. The sensor node is usually an embedded system.…”
Section: Theoretical Basis
confidence: 99%