2019
DOI: 10.1007/978-3-030-30484-3_30
Residual Learning for FC Kernels of Convolutional Network

Cited by 3 publications (3 citation statements)
References 8 publications
“…For example, when a raw image is processed through a fully connected neural network, the network has to treat each pixel as an individual input and learn to extract relevant features from all locations within the image. In contrast, a convolutional neural network (CNN) [45] can learn to recognize patterns in an image regardless of where they are located, using weights shared across the entire image and thereby reducing the number of parameters required. By design, CNNs learn hierarchical representations of the raw input data, and, owing to the demonstrated efficiency of this approach, it is the most common way to represent visual data.…”
Section: Audio-visual Emotion Recognition (mentioning; confidence: 99%)
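The parameter reduction described above can be made concrete with a rough count. The sketch below (illustrative only; the input size, kernel size, and output width are assumptions, not taken from the paper) compares a dense layer, which needs a weight per pixel per output unit, against a convolutional layer, which shares one small kernel across all spatial locations:

```python
# Parameter-count comparison: fully connected vs. convolutional layer.
# Sizes below (32x32 RGB input, 64 outputs, 3x3 kernel) are assumed examples.

def fc_params(in_pixels: int, out_units: int) -> int:
    # Dense layer: one weight per (input, output) pair, plus biases.
    return in_pixels * out_units + out_units

def conv_params(kh: int, kw: int, in_ch: int, out_ch: int) -> int:
    # Conv layer: one shared kh x kw kernel per (in, out) channel pair, plus biases.
    return kh * kw * in_ch * out_ch + out_ch

dense = fc_params(32 * 32 * 3, 64)   # 196,672 parameters
conv = conv_params(3, 3, 3, 64)      # 1,792 parameters
print(dense, conv)
```

The dense layer needs over a hundred times more parameters for the same number of output feature maps, which is the weight-sharing advantage the quoted passage refers to.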
“…The suggested architecture contains non-linear classifiers of the NiN type [23][24][25], more specifically the a3net solution [26], which follows a particular approach to identity-matrix initialization for building a deep network [27]. The identity matrix is initialized at the start of training, with the weights (including those on the unit diagonal) changing during the training process.…”
Section: AMR Detector (mentioning; confidence: 99%)
“…The main diagonal is initialized as w_{1,ii} = λ for each i, where λ → 1, as in [27]. This allows building efficiently trained deep neural network architectures (see also Section 2.1).…”
Section: Characteristics of the Network (mentioning; confidence: 99%)
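The near-identity initialization described in the quoted passage can be sketched as follows. This is a hypothetical illustration of the idea (the off-diagonal noise scale and λ value are assumptions, not parameters from [27]): the main diagonal of a square fully connected weight matrix is set to λ close to 1, so the layer starts out approximating the identity map and gradients can flow through the deep stack from the start of training.

```python
import numpy as np

def near_identity_init(n: int, lam: float = 0.999,
                       noise_scale: float = 1e-3, seed: int = 0) -> np.ndarray:
    """Return an n x n FC weight matrix with w[i, i] = lam for each i (lam -> 1),
    so the layer initially behaves close to the identity map.
    noise_scale for the off-diagonal entries is an assumed value."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, noise_scale, size=(n, n))  # small off-diagonal weights
    np.fill_diagonal(w, lam)                       # main diagonal set to lambda
    return w

w = near_identity_init(4)
x = np.ones(4)
# At initialization the layer output stays close to its input.
print(np.allclose(w @ x, x, atol=0.01))
```

During training all weights, including the diagonal ones, are updated freely, matching the description that the unit-diagonal weights change along with the rest.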