Proceedings of the 2019 International Conference on Multimedia Retrieval
DOI: 10.1145/3323873.3325009

Learning Task Relatedness in Multi-Task Learning for Images in Context

Abstract: Multimedia applications often require concurrent solutions to multiple tasks. These tasks hold clues to each other's solutions; however, as these relations can be complex, this remains a rarely utilized property. When task relations are explicitly defined based on domain knowledge, multi-task learning (MTL) offers such concurrent solutions while exploiting relatedness between multiple tasks performed over the same dataset. In most cases, however, this relatedness is not explicitly defined and the domain expert kno…


Cited by 21 publications (24 citation statements)
References 48 publications (60 reference statements)
“…In our approach, by jointly learning related artistic tasks, the resulting visual representations are enforced to capture relationships and common elements between the different artistic attributes, such as author, school, type, or period, thus providing contextual information about each painting. In parallel with our work, Strezoski et al. [52] also show outstanding improvements on an art classification dataset by using MTL strategies, supporting our claim that context is strongly beneficial in automatic art analysis.…”
Section: Multitask Learning (supporting)
confidence: 87%
“…Having learned a model from a collection of (x, y) pairs, we can then use this model to predict the target variable y′ for a data point we haven't seen before (x′). Deep Learning methods have enjoyed great success in Computer Vision in classifying images [7] and have also been successfully applied to Digital Humanities [17,20,21].…”
Section: Deep Learning Methods (mentioning)
confidence: 99%
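The (x, y) → y′ pattern this excerpt describes is ordinary supervised learning: fit a model on observed pairs, then predict for an unseen input. A minimal sketch under assumed toy data (the linear model and the values below are illustrative, not from any cited paper):

```python
import numpy as np

# Fit y = w*x + b by least squares on observed (x, y) pairs,
# then predict y' for an unseen x'.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])           # generated by y = 2x + 1

A = np.vstack([x, np.ones_like(x)]).T        # design matrix [x | 1]
w, b = np.linalg.lstsq(A, y, rcond=None)[0]  # least-squares fit

x_new = 10.0                                 # a point not seen during fitting
y_pred = w * x_new + b
print(y_pred)  # → 21.0 (up to floating-point rounding)
```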
“…Nonetheless, these relate to ours in that they explore how and to what extent approaches developed and trained on contemporary material can be re-purposed for historical material or material which is visually distinct from typical training data. Across various datasets, data types, and previous works, the potential of building on top of a pre-trained deep learning model has been shown [16][17][18], which informs our choice for how to develop and train our model when applied to silent film material.…”
Section: Visual Cultural Heritage Analysis (mentioning)
confidence: 99%
“…Adjacent to these empirical studies of multi-task relationships is a principled method for learning these relationships online during training, without trial-and-error task grouping, called Selective Sharing (Strezoski et al., 2019b). Selective Sharing uses a shared-trunk architecture to handle multiple tasks and clusters tasks into groups based on the similarity of their gradient vectors throughout training.…”
Section: Grouping Tasks (mentioning)
confidence: 99%
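The gradient-similarity grouping idea mentioned in the last excerpt can be sketched as follows. This is an illustrative simplification, not the Selective Sharing implementation: the helper names (`group_tasks`, `cosine`), the greedy single-pass clustering, and the threshold value are all assumptions.

```python
import numpy as np

def cosine(u, v):
    # cosine similarity between two gradient vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def group_tasks(grads, threshold=0.5):
    """Greedily cluster tasks whose gradient vectors point in similar directions.

    grads: dict mapping task name -> 1-D gradient vector (all same length).
    Returns a list of task-name groups; tasks in one group would share parameters.
    """
    groups = []
    for task, g in grads.items():
        placed = False
        for group in groups:
            rep = grads[group[0]]          # first member acts as the group's representative
            if cosine(g, rep) >= threshold:
                group.append(task)
                placed = True
                break
        if not placed:
            groups.append([task])          # start a new group
    return groups

# Toy example: tasks "a" and "b" have nearly parallel gradients, "c" opposes them.
grads = {
    "a": np.array([1.0, 0.0, 0.0]),
    "b": np.array([0.9, 0.1, 0.0]),
    "c": np.array([-1.0, 0.0, 0.0]),
}
print(group_tasks(grads))  # → [['a', 'b'], ['c']]
```

In a real MTL setup the vectors would be per-task gradients of the shared trunk, recomputed as training progresses; here they are fixed toy values to keep the sketch self-contained.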