2017
DOI: 10.48550/arxiv.1702.08319
Preprint

Visual Translation Embedding Network for Visual Relation Detection

Cited by 23 publications (2 citation statements)
References 9 publications
“…Our work also has similarities to knowledge bases, where predicates are often projections in a defined semantic space [3,6,22]. Such a method was recently used for visual relationship detection [43]. While these methods have seen success in knowledge base completion tasks, they have only led to a marginal gain for modelling visual relationships.…”
Section: Related Work
confidence: 99%
“…An alternative is to list all important objects with their attributes and relationships. Johnson et al [17] created scene graphs, which are being used for visual relationship detection [27,30,49] tasks. In [25], the authors exploit scene graphs to generate image captions.…”
Section: Related Work
confidence: 99%