2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2018.00391

Taskonomy: Disentangling Task Transfer Learning

Abstract: Do visual tasks have a relationship, or are they unrelated? For instance, could having surface normals simplify estimating the depth of an image? Intuition answers these questions positively, implying existence of a structure among visual tasks. Knowing this structure has notable values; it is the concept underlying transfer learning and provides a principled way for identifying redundancies across tasks, e.g., to seamlessly reuse supervision among related tasks or solve many tasks in one system without piling…

Cited by 648 publications (256 citation statements). References 75 publications (94 reference statements).
“…MTL has been applied to computer vision problems such as joint object and action detection [19], object detection and segmentation [14] and boundary, surface normal and saliency estimation together with object segmentation and detection [23]. In [57], the relationship between tasks is modelled in a latent space to transfer knowledge between them and reduce the number of required training samples. MTL in egocentric vision appears in [1,28,25,18,29,47].…”
Section: Multitask Learning
confidence: 99%
“…Up until now, most detection and segmentation tasks have relied on ImageNet [63] fine-tuning [13,87]. With fine-tuning, learned parameters or features of source tasks may be forgotten after learning target tasks [29], and domain similarity between tasks is important for transfer learning [89]. Furthermore, transferring knowledge between dissimilar tasks may cause negative transfer [62,79].…”
Section: Introduction
confidence: 99%
“…Transfer learning is a method for imparting the knowledge of models trained on large amounts of data to models in other domains, by using weights pretrained on large data sets instead of randomly initialized CNN weights. This learning method not only accelerates the convergence of the CNN but also improves the performance of models trained with a small amount of training data (Zamir et al., ). It has been successful at solving computer vision problems with relatively little training data in domains such as construction equipment detection (Kim, Kim, Hong, & Byun, ), pavement crack detection (Gopalakrishnan et al., ; Zhang et al., ; Zhang et al., ), baggage recognition using X‐ray images (Akçay, Kundegorski, Devereux, & Breckon, ), and saliency prediction in natural videos (Chaabouni, Benois‐Pineau, & Amar, ).…”
Section: Related Work
confidence: 99%
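The transfer-learning recipe described in the citation statement above — initialize a target model from pretrained source-task weights rather than from scratch, then train a task-specific head — can be illustrated with a minimal toy sketch. All names and values here (the weight layout, `pretrain_source`, `init_target_model`) are hypothetical illustrations, not the Taskonomy method or any cited paper's implementation:

```python
def pretrain_source(n_features):
    """Stand-in for training a backbone on a large source dataset.

    Returns the 'learned' backbone weights; a real system would run
    gradient descent on source-task data here.
    """
    return [0.5] * n_features


def init_target_model(n_features, pretrained=None):
    """Build a target-task model, optionally reusing pretrained weights.

    With `pretrained`, the backbone starts from the source task's
    weights (transfer learning); without it, the backbone falls back
    to a from-scratch initialization (zeros here, for determinism).
    The task-specific head is always trained from scratch.
    """
    if pretrained is not None:
        backbone = list(pretrained)  # copy so fine-tuning won't mutate the source
    else:
        backbone = [0.0] * n_features
    head = [0.0]
    return {"backbone": backbone, "head": head}


# Transfer-initialized target model vs. a from-scratch one.
source_weights = pretrain_source(4)
transferred = init_target_model(4, pretrained=source_weights)
from_scratch = init_target_model(4)
```

The point of the sketch is only the initialization choice: the transferred model starts from informative backbone weights, which is what speeds convergence and helps in low-data regimes in the quoted passage.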