2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020
DOI: 10.1109/cvpr42600.2020.01121
Robust Learning Through Cross-Task Consistency

Cited by 109 publications (77 citation statements) · References 20 publications
“…Finally, multi-task learning was recently shown to improve robustness. For example, in [115] a multi-task learning strategy showed robustness against adversarial attacks, while [116] found that applying cross-task consistency in MTL improves generalization, and allows for domain shift detection.…”
Section: Other
confidence: 99%
“…In MTL, the goal is to reach high performance on multiple tasks simultaneously, so all tasks are main tasks and all tasks are auxiliary tasks. While the goal is different, many strategies in MTL such as parameter sharing [9], task consistency [55], and loss balance [15] are useful for learning with auxiliary tasks.…”
Section: Related Work
confidence: 99%
“…Typical examples include Cross-stitch [15], Sluice [53] and NDDR [54]. Compared with learning on single modalities, multi-task learning is not always beneficial, as performance is likely to be harmed by negative transfer (negative knowledge transfer across tasks), which is clarified in [55], [56], [57]. [58] distills information across different tasks with multimodal feature aggregation.…”
Section: Deep Multimodal Fusion
confidence: 99%
“…[58] distills information across different tasks with multimodal feature aggregation. [57], [59] explicitly enforce cycle-based consistency between domains to improve performance and generalization. In this paper, we integrate the benefits of both hard parameter-sharing and soft parameter-sharing.…”
Section: Deep Multimodal Fusion
confidence: 99%
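The citation statements above refer to cross-task (cycle-based) consistency: a prediction for one task should agree whether it is made directly from the input or indirectly via another task's prediction. The following is a minimal sketch of that idea, not the paper's implementation — the linear maps `f_xy`, `f_xz`, `f_zy` stand in for learned networks, and the L1 penalty is an assumed choice of consistency measure.

```python
import numpy as np

# Hypothetical linear "networks": f_xy maps input x -> task y,
# f_xz maps x -> task z, and f_zy maps task z's output -> task y.
rng = np.random.default_rng(0)
f_xy = rng.normal(size=(4, 3))   # direct path:   x (4-d) -> y (3-d)
f_xz = rng.normal(size=(4, 5))   # first hop:     x -> z (5-d)
f_zy = rng.normal(size=(5, 3))   # second hop:    z -> y

def consistency_loss(x):
    """Mean L1 disagreement between the direct prediction of y
    and the cross-task prediction made via z."""
    y_direct = x @ f_xy
    y_via_z = (x @ f_xz) @ f_zy
    return float(np.abs(y_direct - y_via_z).mean())

x = rng.normal(size=(8, 4))      # a batch of 8 inputs
loss = consistency_loss(x)
```

Minimizing such a term alongside the per-task losses is what the cited works report as improving generalization and enabling domain-shift detection (a large consistency loss on new data signals a shift).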