2019
DOI: 10.48550/arxiv.1904.08492
Preprint
MultiNet++: Multi-Stream Feature Aggregation and Geometric Loss Strategy for Multi-Task Learning

Cited by 1 publication (1 citation statement) | References 0 publications
“…This, in fact, offers scalability for adding new tasks at a minimal computation complexity. [Chennupati et al, 2019b] provided a detailed overview on negligible incremental computational complexity while increasing number of joint tasks solved by a multi-task network. On the other hand, using pre-trained encoders (say ResNet [He et al, 2016]) as a common encoder stage in multi-task networks reduces training time and alleviates the daunting requirements of massive data to optimize.…”
Section: Proposed Multi-task Architecture (mentioning)
confidence: 99%
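The citation statement above argues that a multi-task network with a shared (possibly pre-trained) encoder scales to new tasks at negligible incremental cost, since each added task only contributes a small task-specific head. A minimal back-of-the-envelope sketch of that claim, using made-up layer sizes (the dimensions and task names here are illustrative assumptions, not from the paper):

```python
# Hypothetical sketch (not the paper's code): why adding a task head to a
# shared-encoder multi-task network is cheap. All sizes are illustrative.

def linear_params(n_in, n_out):
    """Parameter count of a dense layer (weights + biases)."""
    return n_in * n_out + n_out

# Shared encoder: stands in for a deep backbone (e.g. a ResNet-style stack).
encoder_layers = [(2352, 2048), (2048, 2048), (2048, 512)]
encoder_params = sum(linear_params(i, o) for i, o in encoder_layers)

# Each task-specific head is a small decoder on the shared 512-d feature.
def head_params(n_outputs):
    return linear_params(512, n_outputs)

# Hypothetical joint tasks with made-up output dimensions.
tasks = {"segmentation": 19, "depth": 1, "detection": 80}
for name, n_out in tasks.items():
    extra = head_params(n_out)
    print(f"{name}: +{extra} params "
          f"({100 * extra / encoder_params:.2f}% of encoder)")
```

Each head here adds well under 1% of the encoder's parameter count, which is the scalability argument the citing paper attributes to [Chennupati et al, 2019b].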