Proceedings of the British Machine Vision Conference (BMVC) 2017
DOI: 10.5244/c.31.104

Sampled Image Tagging and Retrieval Methods on User Generated Content

Abstract: Traditional image tagging and retrieval algorithms have limited value as a result of being trained with heavily curated datasets. These limitations are most evident when arbitrary search words are used that do not intersect with training-set labels. Weak labels from user-generated content (UGC) found in the wild (e.g., Google Photos, Flickr) have an almost unlimited number of unique words in the metadata tags. Prior work on word embeddings successfully leveraged unstructured text with large vocabularies,…
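The recipe the abstract alludes to (and that the citing work below also describes) is to project image features into a word-embedding space and then tag or retrieve by nearest-neighbor words, so arbitrary vocabulary terms can be matched rather than only fixed training labels. A minimal sketch of that retrieval step, assuming the image feature has already been projected into the word space; the names here (tag_image, word_vecs, vocab) are illustrative, not from the paper:

    import numpy as np

    def tag_image(image_emb, word_vecs, vocab, k=5):
        """Tag an image by nearest neighbors in a word-embedding space.

        image_emb: (D,) image feature already projected into the word space.
        word_vecs: (V, D) embeddings for an open vocabulary of tag words.
        vocab:     list of V words aligned with the rows of word_vecs.
        """
        # Cosine similarity between the projected image and every word.
        a = image_emb / np.linalg.norm(image_emb)
        b = word_vecs / np.linalg.norm(word_vecs, axis=1, keepdims=True)
        sims = b @ a
        top = np.argsort(-sims)[:k]
        return [(vocab[i], float(sims[i])) for i in top]

Because tags come from nearest neighbors in the embedding space rather than a classifier head, any word with an embedding can serve as a query, which is the point the abstract makes about arbitrary search words.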

Cited by 1 publication (1 citation statement; year published: 2019)
References 17 publications
“…Fast0Tag [14] projects an image by identifying a principal direction in the embedding space and targeting that direction when learning the projection. [15] uses noise contrastive estimation on a noisy web-scale dataset [16] to learn a projection from images to the word-embedding space. VSE++ [17] proposes a modified pairwise ranking loss weighted by the violation caused by hard negatives.…”
Section: Image-Text Retrieval
Citation type: mentioning
Confidence: 99%
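For concreteness, the VSE++ max-of-hinges objective mentioned in the statement above can be sketched as follows. This is a minimal PyTorch illustration assuming a mini-batch of matched image-caption pairs with L2-normalized embeddings; it is not the authors' released code, and the names (vsepp_loss, im, cap, margin) are illustrative:

    import torch

    def vsepp_loss(im, cap, margin=0.2):
        """Max-of-hinges (hard-negative) ranking loss in the style of VSE++ [17].

        im, cap: L2-normalized image and caption embeddings, shape (B, D),
        where row i of each is a matched image-caption pair.
        """
        scores = im @ cap.t()                    # (B, B) cosine similarities
        pos = scores.diag().view(-1, 1)          # matched-pair scores

        # Hinge against every negative in the batch.
        cost_cap = (margin + scores - pos).clamp(min=0)      # image -> wrong caption
        cost_im = (margin + scores - pos.t()).clamp(min=0)   # caption -> wrong image

        # Zero out the diagonal so positives are not treated as negatives.
        mask = torch.eye(scores.size(0), dtype=torch.bool)
        cost_cap = cost_cap.masked_fill(mask, 0)
        cost_im = cost_im.masked_fill(mask, 0)

        # VSE++'s modification: keep only the hardest negative per pair,
        # rather than summing over all negatives as in the plain ranking loss.
        return cost_cap.max(dim=1).values.mean() + cost_im.max(dim=0).values.mean()

Taking the max over negatives, instead of the sum, is exactly the "weighted by the violation caused by hard negatives" idea the citation statement attributes to VSE++.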