2020 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv45572.2020.9093378
Cooperative Initialization based Deep Neural Network Training

Abstract: Researchers have proposed various activation functions. These activation functions help a deep network learn non-linear behavior and have a significant effect on training dynamics and task performance. The performance of these activations also depends on the initial state of the weight parameters, i.e., different initial states lead to differences in the performance of a network. In this paper, we propose a cooperative initialization for training deep networks using the ReLU activation function to impro…
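The abstract is truncated before the method is described, so the paper's cooperative initialization itself cannot be reproduced here. As background for the claim that the initial weight state matters for ReLU networks, the sketch below shows He (Kaiming) initialization, the standard variance-scaled scheme for ReLU layers; the function name `he_init` is illustrative, not from the paper.

```python
import numpy as np

def he_init(fan_in, fan_out, rng=None):
    """He (Kaiming) normal initialization.

    Draws weights from N(0, 2/fan_in), the variance commonly
    recommended for layers followed by ReLU, so that activation
    variance is roughly preserved across layers.
    (Illustrative baseline; NOT the paper's cooperative method.)
    """
    rng = np.random.default_rng() if rng is None else rng
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

# Example: a 512 -> 256 fully connected layer.
W = he_init(512, 256, rng=np.random.default_rng(0))
print(round(float(W.std()), 3))  # empirical std near sqrt(2/512) ≈ 0.0625
```

Schemes like this differ only in the variance they assign to the initial weights, which is exactly the "initial state" the abstract says interacts with the choice of activation.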

Cited by 11 publications (1 citation statement)
References 15 publications
“…Lip-reading also has crucial applications in assisting speech-impaired people, such as hearing aids and voice generation for people who communicate using lip movements only because they are unable to produce voice (aphonia). The advent of deep neural network models [11]-[15] and the availability of huge labeled datasets have helped achieve significant improvement in the task of lip-to-text generation, solving a great variety of applications [6], [16]. These datasets consist of a large vocabulary in an unconstrained environment with thousands of speakers.…”
Section: Introduction
Confidence: 99%