Proceedings of the 2015 ACM on International Conference on Multimodal Interaction (ICMI 2015)
DOI: 10.1145/2818346.2830593
Deep Learning for Emotion Recognition on Small Datasets using Transfer Learning

Cited by 503 publications
(266 citation statements)
References 21 publications
“…A model of the well-known convolutional network VGG-16 [7] is presented; its ten-layer configuration is shown in Fig. 1.…”
Section: Methods (confidence: 99%)
“…It is proposed to use the principles of neural gas and sparse coding to train a hierarchical extractor of visual features, using the multi-layered neural network VGG-16 [7,8] as an example. The extractor's efficiency is then evaluated from the results of training an information-extreme classifier with binary coding of observations.…”
Section: Introduction (confidence: 99%)
“…Regarding initialization, in our experiments we trained the proposed deep architectures by either (i) randomly initializing the weight values, or (ii) using weights from networks pre-trained on large databases such as ImageNet [6]. For the second approach we used transfer learning [17], specifically of the convolutional and pooling parts of the pre-trained networks. In more detail, we utilized the ResNet-50 and VGG-16 networks, which have been pre-trained for object detection tasks, along with VGG-Face, which has been pre-trained for face recognition tasks.…”
Section: The End-to-End Deep Neural Architectures (confidence: 99%)
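The two initialization strategies contrasted in the excerpt above, random weights versus weights copied from a pre-trained network, can be sketched as follows. A tiny stand-in backbone is used in place of VGG-16/ResNet-50 (whose real pre-trained weights are too large to embed here), so every layer size and network name below is an illustrative assumption:

```python
import torch
import torch.nn as nn

# Hypothetical tiny stand-in for a CNN backbone; the cited work uses
# VGG-16 / ResNet-50 pre-trained on ImageNet instead.
def make_backbone() -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

# (i) Random initialization: weights start from the framework default.
random_model = make_backbone()

# (ii) Pre-trained initialization: copy weights from a "source" network
# (here another instance standing in for an ImageNet-trained model).
pretrained = make_backbone()
transfer_model = make_backbone()
transfer_model.load_state_dict(pretrained.state_dict())
```

After the copy, `transfer_model` starts from the source network's weights, while `random_model` starts from an independent random draw.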
“…Transfer Learning: Transfer learning [6] is the main approach for avoiding learning failure due to overfitting when training complex CNNs with small amounts of (image) data. In transfer learning, we use networks previously trained on large image datasets (even of generic objects) and fine-tune the whole network, or parts of it, using the small training datasets.…”
Section: System Implementation and Operational Phase (confidence: 99%)
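The fine-tuning recipe described in this excerpt, reusing a pre-trained convolutional part and retraining only a small head on the target dataset, can be sketched as follows. The backbone is a hypothetical stand-in for a pre-trained VGG-16, and the seven-class emotion head is an assumed label count, not taken from the cited papers:

```python
import torch
import torch.nn as nn

# Stand-in for the convolutional part of a pre-trained network
# (a real pipeline would load VGG-16 weights instead).
backbone = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

# Freeze the pre-trained convolutional/pooling part ...
for p in backbone.parameters():
    p.requires_grad = False

# ... and attach a fresh classification head trained on the small dataset.
num_emotions = 7  # assumed label count for an emotion-recognition task
model = nn.Sequential(backbone, nn.Linear(8, num_emotions))

# Only the trainable (head) parameters are handed to the optimizer.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)
```

Unfreezing some of the later backbone layers as well (full or partial fine-tuning) is the "whole, or parts of them" variant the excerpt mentions.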
“…Recent advances in machine learning and deep neural networks have provided state-of-the-art performance in all significant signal processing tasks, and are used in a large number of applications, ranging from healthcare and question-answering systems to human-computer interaction, surveillance and defense [4][5][6]. Deep neural networks are also applied as end-to-end architectures which include different network types in their structure and are trained to analyse signals, images, text and other inputs [3,4].…”
Section: Introduction (confidence: 99%)