2018
DOI: 10.1007/978-3-030-11027-7_20

Extraction of Visual Features for Recommendation of Products via Deep Learning

Cited by 14 publications (3 citation statements)
References 10 publications

“…However, product retrieval is more complicated than simple image retrieval due to the different shooting angles, conditions, backgrounds, or postures of the images [165][166][167][168][169]. For instance, clothing images taken on the street or in a store with a phone may differ from those in databases and e-commerce websites.…”
Section: Spark Creative Inspiration
confidence: 99%
“…The retrieval input can be text, images, or both of them [162,163,164,165,166]. For product retrieval, the input image provided by designers and users may be taken with a phone on the street or in a store, which differs considerably from the images in databases and on e-commerce websites in terms of shooting angle, conditions, background, or posture [167,168,169,170,171]. Therefore, product retrieval is more complicated than simple image retrieval [172,173,174].…”
Section: Product Design Based On Image Data
confidence: 99%
“…The VisNet architecture, with a parallel shallow neural net and a VGG16 convolutional neural network (CNN), was fine-tuned like a siamese net, taking as input triplets of a query image, a similar image, and a negative example [14]. Clothing, shoes, and jewelry from the Amazon product dataset are recognized in [1] by extracting ResNet-based visual features and applying a special shallow net. Visual search and recommendations are implemented on Pinterest [21] using Web-scale object detection and indexing with very deep CNNs [3].…”
Section: Introduction
confidence: 99%
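
The triplet fine-tuning mentioned in the last citation statement (a CNN backbone trained on query / similar / negative image triplets) can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the code of the cited paper or of VisNet itself: the VGG16 backbone, the 256-dimensional embedding, the margin value, and the random tensors standing in for product images are all choices made only for the example.

# Minimal sketch of triplet fine-tuning for visual product retrieval.
# A VGG16 feature extractor produces an embedding; a triplet loss pulls the
# query toward a similar product image and away from a negative example.
import torch
import torch.nn as nn
from torchvision import models

class EmbeddingNet(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.backbone = vgg.features          # convolutional feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)   # global pooling to a 512-d vector
        self.fc = nn.Linear(512, dim)         # projection to the embedding space

    def forward(self, x):
        f = self.pool(self.backbone(x)).flatten(1)
        return nn.functional.normalize(self.fc(f), dim=1)  # unit-length embedding

net = EmbeddingNet()
loss_fn = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

# One training step on a triplet (query, similar, negative); in practice the
# triplets would be sampled from a labelled product-image dataset.
query, similar, negative = (torch.randn(8, 3, 224, 224) for _ in range(3))
loss = loss_fn(net(query), net(similar), net(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()

At retrieval time the same network embeds both catalogue and query images, and nearest-neighbour search in the embedding space returns visually similar products.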