2015
DOI: 10.1145/2766959

Learning visual similarity for product design with convolutional neural networks

Abstract (truncated): Figure 1: Visual search using a learned embedding. Panels: (a) Query 1: input scene and box; (b) project into 256-D embedding; (c) Results 1: visually similar products; (a) Query 2: product; (c) Results 2: use of product in-situ. Caption: Query 1: given an input box in a photo (a), we crop and project into an embedding (b) using a trained convolutional neural network (CNN) and return the most visually similar products (c). Query 2: we apply the same method to search for in-situ examples…
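The retrieval pipeline the figure caption describes (crop a query box, project it into a 256-D embedding with a CNN, return the nearest product embeddings) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the trained CNN is stubbed out with a random linear projection, and the function names `embed` and `search` are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 256  # the figure mentions a 256-D embedding

def embed(image_vec, projection):
    """Stand-in for the trained CNN: project a raw image vector into the
    embedding space and L2-normalize it."""
    v = projection @ image_vec
    return v / np.linalg.norm(v)

def search(query_vec, database, k=3):
    """Return indices of the k database embeddings closest to the query
    (Euclidean distance between unit vectors)."""
    dists = np.linalg.norm(database - query_vec, axis=1)
    return np.argsort(dists)[:k]

# Toy database: 100 "product" images, each a 1024-D raw feature vector.
projection = rng.standard_normal((EMBED_DIM, 1024))
products = np.stack([embed(rng.standard_normal(1024), projection)
                     for _ in range(100)])
query = embed(rng.standard_normal(1024), projection)

top3 = search(query, products, k=3)
print(top3)  # indices of the 3 most visually similar products
```

At query time only the embedding lookup is needed, which is why the paper can precompute product embeddings once and answer searches with a nearest-neighbor scan.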

Cited by 371 publications (298 citation statements). References 29 publications.
“…[137]: 112,623 images, n.a., Landmark; MV RGB-D [142]: 250,000 images, 300 classes, Household object; Product [143]: 101,945×2 images, n.a., Furniture …”
Section: Image Retrieval with Fine-tuned CNN Models (mentioning; confidence: 99%)
“…Another dataset called Tokyo Time Machine is collected using Google Street View Time Machine, which provides images depicting the same places over time [137]. While most of the above datasets focus on landmarks, Bell et al. [143] build a Product dataset consisting of furniture by developing a crowd-sourced pipeline to draw connections between in-situ objects and the corresponding products. It is also feasible to fine-tune on the query sets suggested in [144], but this method may not be adaptable to new query types…”
Section: Datasets for Fine-tuning (mentioning; confidence: 99%)
“…Deep feature embedding with convolutional neural networks has attracted a lot of attention recently, especially contrastive embedding [57] and triplet embedding [58]. Bell et al…”
Section: Deep Learning (mentioning; confidence: 99%)
“…For example, Kiapour et al. applied CNN features as the image representation and calculated a cross-entropy loss measuring whether two images are matched or non-matched. Siamese neural networks have attracted a lot of attention recently, such as contrastive embedding [57] and triplet embedding [58]…”
Section: Background and Motivation (mentioning; confidence: 99%)
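The two Siamese-style objectives these excerpts cite, contrastive embedding [57] and triplet embedding [58], can be sketched in a few lines of numpy. This is a hedged illustration of the standard formulations: the margin values and function names are assumptions, not the cited papers' exact settings.

```python
import numpy as np

def contrastive_loss(a, b, same, margin=1.0):
    """Contrastive objective: pull matched pairs together; push
    non-matched pairs at least `margin` apart."""
    d = np.linalg.norm(a - b)
    return d**2 if same else max(0.0, margin - d)**2

def triplet_loss(anchor, pos, neg, margin=0.2):
    """Triplet objective: require the anchor to be closer to the
    positive than to the negative by at least `margin`."""
    d_pos = np.linalg.norm(anchor - pos)
    d_neg = np.linalg.norm(anchor - neg)
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings: anchor, a nearby positive, a distant negative.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
n = np.array([1.0, 0.0])

print(contrastive_loss(a, p, same=True))  # small: matched pair already close
print(triplet_loss(a, p, n))              # 0.0: margin already satisfied
```

In training, these losses are summed over sampled pairs or triplets and backpropagated through the shared CNN, which is what shapes the embedding space used for retrieval.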