Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence 2019
DOI: 10.24963/ijcai.2019/871

Taskonomy: Disentangling Task Transfer Learning

Abstract: Do visual tasks have relationships, or are they unrelated? For instance, could having surface normals simplify estimating the depth of an image? Intuition answers these questions positively, implying existence of a certain structure among visual tasks. Knowing this structure has notable values; it provides a principled way for identifying relationships across tasks, for instance, in order to reuse supervision among tasks with redundancies or solve many tasks in one system without piling up the complexity. W…

Cited by 328 publications (543 citation statements). References 1 publication.
“…Studies have shown that using models pretrained on large datasets can effectively improve network performance and reduce training time in the target domain. Therefore, we evaluated the pretrained ResNet50 network on our dataset.…”
Section: Results
confidence: 99%
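The excerpt above describes the core transfer-learning shortcut: reuse a backbone pretrained on a large dataset and adapt only a small head to the target domain. A minimal sketch of that idea, training just a linear classifier on fixed "pretrained" features — the features, labels, and dimensions here are synthetic illustrations, not the cited study's ResNet50 setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for features from a frozen pretrained backbone (e.g. the
# penultimate layer of a ResNet50); here they are synthetic 64-dim vectors.
n, d = 200, 64
features = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
labels = (features @ true_w > 0).astype(float)  # synthetic binary task

# Train only a linear "head" on the fixed features: the backbone is
# reused as-is, which is what shortens training in the target domain.
w = np.zeros(d)
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(features @ w)))   # sigmoid predictions
    grad = features.T @ (p - labels) / n        # logistic-loss gradient
    w -= lr * grad

accuracy = ((features @ w > 0).astype(float) == labels).mean()
```

Because the synthetic labels are linearly separable in feature space, the small head alone reaches high accuracy — the backbone never needs updating.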
“…On the other hand, the labeling of medical images requires substantial professional knowledge, making it difficult to obtain high‐quality labeled data. Using a pretrained network, the dilemma caused by insufficient data can be solved to some extent. However, the pretrained network has its own limitations.…”
Section: Discussion
confidence: 99%
“…[8] Because the anterior layers in a deep convolutional neural network only learn some contour features, such as boundaries, shapes, and colors, the anterior layers in the same convolutional neural network have almost the same parameters, no matter what visual tasks are applied. [23] The input to our convolutional network is a fixed-size 224 × 224 red-green-blue (RGB) fabric image, and the only preprocessing we do is subtracting the mean RGB value from each pixel during training. The mean RGB value comes from the per-pixel average value of all 6000 fabric images in the training set.…”
Section: Fine-tuning VGG16
confidence: 99%
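The preprocessing step this excerpt describes — subtracting a per-pixel mean RGB image from fixed-size 224 × 224 inputs — can be sketched as follows. Here the mean is computed over a small random batch standing in for the 6000-image fabric training set; the function name and data are illustrative, not from the cited work:

```python
import numpy as np

def subtract_mean_image(batch):
    """Center a batch of RGB images by its per-pixel mean image.

    `batch` has shape (N, 224, 224, 3), matching the fixed-size
    224x224 RGB input in the excerpt. The (224, 224, 3) mean image
    plays the role of the per-pixel average over the training set.
    """
    mean_rgb = batch.mean(axis=0)   # per-pixel mean over the batch
    return batch - mean_rgb

imgs = np.random.default_rng(1).random((8, 224, 224, 3))
centered = subtract_mean_image(imgs)
```

In practice the mean image is computed once over the whole training set and the same mean is subtracted at both training and test time, so that inputs stay on a consistent scale.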
“…Furthermore, these gradient-based methods can often be implemented to efficiently run on specialized hardware such as GPUs. Importantly, it is natural for gradient-based optimization methods to be combined for the joint optimization of multiple objectives or components of a model for end-to-end training [46,52]. In these joint optimization settings, signal from the underlying task can help inform the clustering algorithm and vice-versa.…”
Section: Introduction
confidence: 99%
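The joint optimization this excerpt mentions — multiple objectives combined into one gradient-based, end-to-end update — can be illustrated with a toy example. The two quadratic losses below are purely illustrative stand-ins for task and clustering objectives; summing their gradients lets each objective inform the shared parameters:

```python
import numpy as np

def loss_a(x):
    # First objective, minimized at x = [1, 1] (illustrative quadratic).
    return np.sum((x - 1.0) ** 2)

def loss_b(x):
    # Second objective, minimized at x = [-1, -1].
    return np.sum((x + 1.0) ** 2)

def grad_total(x):
    # Gradient of loss_a + loss_b: both objectives contribute to the
    # same update of the shared parameter vector, as in joint training.
    return 2.0 * (x - 1.0) + 2.0 * (x + 1.0)

x = np.array([5.0, -3.0])   # shared parameters
lr = 0.1
for _ in range(200):
    x -= lr * grad_total(x)

# Gradient descent settles at the compromise minimizer x = [0, 0],
# where the two objectives' gradients cancel.
```

The same pattern scales to neural networks: frameworks with automatic differentiation backpropagate through a weighted sum of losses, which is what makes these components trainable end to end on GPUs.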