2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/iccvw.2017.54

Multi-task Learning Using Multi-modal Encoder-Decoder Networks with Shared Skip Connections

Cited by 29 publications (17 citation statements)
References 14 publications

“…Some works [40,51,18,26] explored simultaneously learning the depth estimation and the scene parsing tasks. For instance, Wang et al [51] introduced an approach to model the two tasks within a hierarchical CRF, while the CRF model is not jointly learned with the CNN.…”
Section: Related Work
confidence: 99%
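The paper indexed here addresses exactly this joint setting: its title describes a multi-modal encoder-decoder whose skip connections are shared across task decoders. As a rough illustration only (not the authors' actual network; the layer widths, depth, and two task heads are invented for the sketch), a shared-skip two-task model in PyTorch might look like:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class SharedSkipNet(nn.Module):
    """One encoder, two task decoders (depth, scene parsing); both decoders
    consume the same encoder skip feature, i.e. the skip is shared."""
    def __init__(self, n_classes=19):  # 19 classes is an arbitrary placeholder
        super().__init__()
        self.enc1, self.enc2 = block(3, 32), block(32, 64)
        self.dec_depth = block(64 + 32, 32)   # concat of upsampled deep + skip
        self.dec_seg   = block(64 + 32, 32)
        self.depth_out = nn.Conv2d(32, 1, 1)          # per-pixel depth
        self.seg_out   = nn.Conv2d(32, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        s1 = self.enc1(x)                       # skip feature, full resolution
        s2 = self.enc2(F.max_pool2d(s1, 2))     # deeper feature, half resolution
        up = F.interpolate(s2, scale_factor=2)  # back to full resolution
        fused = torch.cat([up, s1], dim=1)      # the shared skip connection
        return (self.depth_out(self.dec_depth(fused)),
                self.seg_out(self.dec_seg(fused)))

x = torch.randn(2, 3, 128, 128)
depth, seg = SharedSkipNet()(x)  # depth: (2,1,128,128), seg: (2,19,128,128)
```

Because both decoders read the same skip feature, gradients from depth estimation and scene parsing jointly shape the shared representation, which is the usual argument for multi-task encoder-decoders.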
“…Multi-task learning [11,8] has been shown to improve the performance of different tasks with auxiliary objective functions. We explore an unsupervised reconstruction task that seeks to reproduce the sequential US slices to aid the weak supervision of the segmentation task.…”
Section: Related Work
confidence: 99%
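That excerpt describes a standard pattern: a supervised task loss plus a weighted unsupervised reconstruction loss on a shared encoder. A minimal sketch under assumed details (single-channel input standing in for US slices, an illustrative weight `lam`; none of these specifics come from the cited paper):

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared encoder with two heads: supervised segmentation and
    unsupervised reconstruction (the auxiliary objective)."""
    def __init__(self, in_ch=1, n_classes=2, width=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(width, n_classes, 1)  # per-pixel class logits
        self.rec_head = nn.Conv2d(width, in_ch, 1)      # reconstructs the input

    def forward(self, x):
        h = self.encoder(x)
        return self.seg_head(h), self.rec_head(h)

model = MultiTaskNet()
seg_loss_fn = nn.CrossEntropyLoss()
rec_loss_fn = nn.MSELoss()
lam = 0.1  # hypothetical weight on the auxiliary reconstruction term

x = torch.randn(4, 1, 64, 64)         # e.g. a batch of slices
y = torch.randint(0, 2, (4, 64, 64))  # (weak) segmentation labels
seg_logits, recon = model(x)
loss = seg_loss_fn(seg_logits, y) + lam * rec_loss_fn(recon, x)
loss.backward()
```

The reconstruction term needs no labels, so it can regularize the shared encoder even when segmentation supervision is weak or sparse.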
“…We have outlined above new approaches in the digital humanities that are enabled by the dataset and benchmark evaluation tasks. Multitask learning systems that learn on multimodal data are also an active area of research in relation to multimodal representation learning, location estimation, and scene understanding [5,28]. MLM is further designed to evaluate the ability of multitask systems to leverage relationships between constituent entities in data and knowledge graph properties used in the generation process.…”
Section: Impact
confidence: 99%