2018
DOI: 10.48550/arxiv.1803.09196
Preprint
Learning Type-Aware Embeddings for Fashion Compatibility

Cited by 3 publications (14 citation statements). References 0 publications.
“…Shih et al. [22] propose making multiple projection points for a query image. Vasileva et al. [28] learn a different metric space for each type combination. When evaluating outfit compatibility, these methods average all pairwise compatibility scores to produce the output, neglecting the relationship between pairwise compatibility and overall compatibility.…”
Section: Visual Compatibility Learning
confidence: 99%
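The averaging step the statement criticizes can be sketched as follows. This is a minimal illustration, not code from either cited paper: the embeddings, the cosine-similarity scoring function, and all names are assumptions made for the example.

```python
# Hypothetical sketch: outfit compatibility computed as the average of all
# pairwise item-compatibility scores, the scheme the citation statement
# describes. Embeddings and the scoring function are toy assumptions.
import itertools
import math

def pairwise_compat(a, b):
    # Illustrative pairwise score: cosine similarity of two item embeddings.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def outfit_compat(items):
    # Average over all unordered item pairs; this is the step that ignores
    # the relationship between pairwise and overall compatibility.
    pairs = list(itertools.combinations(items, 2))
    return sum(pairwise_compat(a, b) for a, b in pairs) / len(pairs)

outfit = [[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]]
print(round(outfit_compat(outfit), 3))
```

Because every pair contributes equally, one incompatible pair can be masked by several compatible ones, which is the shortcoming the statement raises.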
“…This leads to an improper situation: e.g., if a shirt matches a pair of trousers and the trousers match a pair of shoes, the shoes are then forced to match the shirt. Therefore, we use projected embeddings with respect to different fashion type combinations to address this problem, following [3, 28].…”
Section: Comparison With Projected Embedding
confidence: 99%
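The type-conditioned projection idea can be sketched with fixed masks: each type pair compares items only in its own subspace, so shirt-trouser closeness does not transitively force shirt-shoe closeness. The masks, vectors, and type names below are toy assumptions (in the cited papers the projections are learned), used only to show why the transitivity problem disappears.

```python
# Hypothetical sketch: per-type-pair projection masks. Compatibility for a
# (type_a, type_b) pair is measured after masking both embeddings, so each
# type combination lives in its own metric subspace. All values are toy
# assumptions; real type-aware models learn these masks.
import math

# One mask per unordered type pair, keyed in sorted order.
masks = {
    ("shirt", "trouser"): [1.0, 0.0],  # compare in dimension 0 only
    ("shoe", "trouser"): [0.0, 1.0],   # compare in dimension 1 only
    ("shirt", "shoe"): [0.5, 0.5],     # its own independent subspace
}

def typed_distance(a, type_a, b, type_b):
    # Smaller distance = more compatible, but only within the subspace
    # selected by this type combination's mask.
    m = masks[tuple(sorted((type_a, type_b)))]
    return math.sqrt(sum((w * (x - y)) ** 2 for w, x, y in zip(m, a, b)))

shirt, trouser, shoe = [0.0, 0.0], [0.0, 5.0], [5.0, 5.0]
print(typed_distance(shirt, "shirt", trouser, "trouser"))  # close pair
print(typed_distance(trouser, "trouser", shoe, "shoe"))    # close pair
print(typed_distance(shirt, "shirt", shoe, "shoe"))        # stays far apart
```

Here the shirt matches the trousers and the trousers match the shoes, yet the shirt-shoe distance remains large, because that pair is judged under its own mask rather than in a single shared space.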