2017 30th SIBGRAPI Conference on Graphics, Patterns and Images Tutorials (SIBGRAPI-T)
DOI: 10.1109/sibgrapi-t.2017.12
Everything You Wanted to Know about Deep Learning for Computer Vision but Were Afraid to Ask

Abstract: Deep Learning methods are currently the state-of-the-art…

Cited by 106 publications (79 citation statements). References 69 publications (100 reference statements).
“…computed for all training examples be the mean squared error, then the undercomplete AE is able to learn the same subspace as the PCA (Principal Component Analysis), i.e. the principal component subspace of the training data [2]. Because of this type of behaviour AEs were often employed for dimensionality reduction.…”
Section: Encoder (mentioning confidence: 99%)
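The quoted claim, that an undercomplete AE trained on mean squared error learns the same subspace as PCA, can be illustrated with a minimal NumPy sketch. All data and hyperparameters here are illustrative assumptions, not taken from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 500 points in R^5 lying (up to small noise) in a 2-D subspace.
X = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 5))
X += 0.01 * rng.normal(size=X.shape)
X -= X.mean(axis=0)                      # centre the data, as PCA assumes

# PCA: projector onto the top-2 principal subspace, via SVD.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P_pca = Vt[:2].T @ Vt[:2]

# Linear undercomplete AE: encoder (5 -> 2), decoder (2 -> 5),
# trained by gradient descent on the mean squared reconstruction error.
W_enc = rng.normal(scale=0.5, size=(5, 2))
W_dec = rng.normal(scale=0.5, size=(2, 5))
lr = 0.02
for _ in range(10000):
    Z = X @ W_enc                        # codes
    E = Z @ W_dec - X                    # reconstruction error
    gW_dec = Z.T @ E / len(X)
    gW_enc = X.T @ (E @ W_dec.T) / len(X)
    W_dec -= lr * gW_dec
    W_enc -= lr * gW_enc

# The decoder's row space should (approximately) equal the PCA subspace.
Q, _ = np.linalg.qr(W_dec.T)             # orthonormal basis of the AE subspace
P_ae = Q @ Q.T
print(np.abs(P_ae - P_pca).max())        # close to zero
```

The two projectors nearly coincide, which is why such AEs serve as (nonlinear generalisations of) dimensionality reduction.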
“…For image data, Convolutional Neural Networks (CNNs) with multiple layers were found to be particularly adequate. After being trained for image classification tasks, those network models were shown to be good extractors of low-level (shapes, colour blobs and edges) at the initial layers, and highlevel features (textures and semantics) at deeper layers [2]. However, deep networks are difficult to train from scratch, requiring a large number of annotated examples in order to ensure learning, due to their high shattering coefficient [3].…”
Section: Introduction (mentioning confidence: 99%)
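The low-level feature extraction mentioned above (shapes, colour blobs, edges in early CNN layers) rests on 2-D convolution. A minimal sketch, using a hand-crafted Sobel-like kernel as a stand-in for the filters a trained CNN learns in its first layers (the image and kernel are illustrative assumptions):

```python
import numpy as np

# 2-D convolution (valid mode), the operation CNN layers apply at every
# spatial position with learned kernels.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# Synthetic 8x8 image: dark left half, bright right half (a vertical edge).
img = np.zeros((8, 8))
img[:, 4:] = 1.0

# Sobel-like kernel: responds strongly to vertical edges, as early-layer
# CNN filters often do after training.
sobel_x = np.array([[-1., 0., 1.],
                    [-2., 0., 2.],
                    [-1., 0., 1.]])

fmap = conv2d(img, sobel_x)
# The feature map is zero in flat regions and peaks where the kernel
# straddles the edge.
print(fmap[3])  # → [0. 0. 4. 4. 0. 0.]
```

Deeper layers compose many such responses into the high-level texture and semantic features the excerpt describes.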
“…In both scenarios, TL has the potential to be applied to various problems, such as for example traffic control , facial attribute classification (ZHUANG et al, 2018), video classification (WU et al, 2015), and anomaly detection in surveillance videos (XU et al, 2017). As it is the current standard in computer vision (PONTI et al, 2017), all the aforementioned studies applied Deep Learning (DL) as a tool (see more details in section 2.1), exploring TL via architecture retraining or feature extraction. Because of the hierarchical structure of Deep Neural Networks (DNN), such methods are able to represent both low-level (shapes, borders, and colors) and high-level (texture and semantics) visual features (YOSINSKI et al, 2014;PONTI et al, 2017).…”
Section: Introduction (mentioning confidence: 99%)
“…As it is the current standard in computer vision (PONTI et al, 2017), all the aforementioned studies applied Deep Learning (DL) as a tool (see more details in section 2.1), exploring TL via architecture retraining or feature extraction. Because of the hierarchical structure of Deep Neural Networks (DNN), such methods are able to represent both low-level (shapes, borders, and colors) and high-level (texture and semantics) visual features (YOSINSKI et al, 2014;PONTI et al, 2017). In Convolutional Neural Networks (CNNs), different processing layers can be incorporated, where convolutional, dense, and pooling are the most relevant ones.…”
Section: Introduction (mentioning confidence: 99%)
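The transfer-learning route the excerpts describe, feature extraction, amounts to freezing the pretrained layers and training only a new classifier head. A NumPy toy sketch: a fixed random projection with ReLU stands in for real pretrained convolutional features, and only a logistic-regression head is trained (all data and parameters here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# "Pretrained" feature extractor: frozen, never updated during training.
# A fixed random projection + ReLU stands in for a real CNN's layers.
W_frozen = rng.normal(size=(10, 32))
def extract(X):
    return np.maximum(X @ W_frozen, 0.0)

# Two-class synthetic data, separable in the input space.
X = np.vstack([rng.normal(loc=-1.0, size=(100, 10)),
               rng.normal(loc=+1.0, size=(100, 10))])
y = np.array([0] * 100 + [1] * 100)

# Train only the new head (logistic regression) on the frozen features.
F = extract(X)
w = np.zeros(F.shape[1])
b = 0.0
for _ in range(500):
    z = np.clip(F @ w + b, -30, 30)      # clip logits for numerical safety
    p = 1.0 / (1.0 + np.exp(-z))         # sigmoid
    g = p - y                            # gradient of the log-loss
    w -= 0.1 * F.T @ g / len(y)
    b -= 0.1 * g.mean()

z = np.clip(F @ w + b, -30, 30)
acc = ((1.0 / (1.0 + np.exp(-z)) > 0.5) == y).mean()
print(acc)  # high: the frozen features preserve the class separation
```

The alternative route mentioned in the excerpts, architecture retraining (fine-tuning), would instead also update the extractor's weights at a small learning rate.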