2021 · Preprint
DOI: 10.48550/arxiv.2109.09824

Well Googled is Half Done: Multimodal Forecasting of New Fashion Product Sales with Image-based Google Trends

Abstract: This paper investigates the effectiveness of systematically probing Google Trends against textual translations of visual aspects as exogenous knowledge to predict the sales of brand-new fashion items, where past sales data is not available and only an image and a few metadata are available. In particular, we propose GTM-Transformer, standing for Google Trends Multimodal Transformer, whose encoder works on the representation of the exogenous time series, while the decoder forecasts the sales using the Google Trends…
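The encoder/decoder split described in the abstract maps naturally onto a standard Transformer with a non-autoregressive decoder. Below is a minimal PyTorch sketch of that idea; the layer sizes, feature dimensions, fusion-by-summation of image/text embeddings, and learned per-step queries are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a GTM-Transformer-style forecaster. All hyperparameters
# and the fusion scheme below are assumptions for illustration only.
import torch
import torch.nn as nn

class GTMTransformerSketch(nn.Module):
    def __init__(self, d_model=64, horizon=12, trend_features=1):
        super().__init__()
        # Encoder consumes the exogenous Google Trends time series.
        self.trend_proj = nn.Linear(trend_features, d_model)
        enc = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=2)
        # Non-autoregressive decoder: one learned query per forecast step,
        # all steps attend to the encoded trend signal in a single pass.
        self.queries = nn.Parameter(torch.randn(horizon, d_model))
        dec = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec, num_layers=2)
        # Pooled image/text features (sizes 512/128 are assumed) are fused
        # into the queries by projection and summation.
        self.img_proj = nn.Linear(512, d_model)
        self.txt_proj = nn.Linear(128, d_model)
        self.head = nn.Linear(d_model, 1)

    def forward(self, trends, img_feat, txt_feat):
        # trends:   (batch, weeks, trend_features) Google Trends signal
        # img_feat: (batch, 512) pooled image features
        # txt_feat: (batch, 128) pooled textual-attribute features
        memory = self.encoder(self.trend_proj(trends))
        q = self.queries.unsqueeze(0).expand(trends.size(0), -1, -1)
        q = q + (self.img_proj(img_feat) + self.txt_proj(txt_feat)).unsqueeze(1)
        out = self.decoder(q, memory)      # all horizon steps at once (non-AR)
        return self.head(out).squeeze(-1)  # (batch, horizon) sales forecast

# Usage: forecast 12 weeks of sales for a batch of 2 brand-new items.
model = GTMTransformerSketch()
sales = model(torch.randn(2, 52, 1), torch.randn(2, 512), torch.randn(2, 128))
print(sales.shape)  # torch.Size([2, 12])
```

Because the decoder emits every horizon step in one forward pass, no predicted value is ever fed back as input, which is the design choice motivated in the citation statements below.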

Cited by 4 publications (13 citation statements) · References 33 publications (46 reference statements)
“…al. [8] criticised the reliance on purely AR networks for new product sale forecasting because of the compounding effect caused by first-step errors. Instead, they propose GTM-Transformer, a multi-modal, non-AR Transformer that utilizes images, text and time series of the garment's attributes collected from Google Trends.…”
Section: Related Work
confidence: 99%
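The compounding effect criticised in the quote above is easy to see in a toy example: an autoregressive forecaster feeds its own prediction back as the next input, so any systematic first-step error accumulates over the horizon, whereas a non-AR model emits all steps at once. The numbers below are synthetic, purely for illustration.

```python
# Toy illustration of error compounding in an autoregressive rollout.
def ar_rollout(x0, step_bias=0.1, horizon=12):
    preds, x = [], x0
    for _ in range(horizon):
        x = x + step_bias  # each step inherits all previous steps' errors
        preds.append(x)
    return preds

# A +0.1 per-step bias grows linearly with the horizon: 1.0 -> 2.2 at step 12.
print(ar_rollout(1.0)[-1])
```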
“…Apart from the visual features of the garment, both [1] and [8] used fashion attributes as textual information. Fashion attributes can be extracted from images by specialised classification models without requiring manual annotation from the fashion designers and can offer valuable information to the overall neural network.…”
Section: Related Work
confidence: 99%
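The quote above notes that fashion attributes can be extracted from images by specialised classification models, without manual annotation. A minimal sketch of that step follows, assuming a hypothetical toy attribute vocabulary, an untrained ResNet-18 backbone, and a 0.5 sigmoid threshold; none of these specifics come from the cited papers.

```python
# Hedged sketch: multi-label fashion-attribute extraction from an image.
import torch
import torch.nn as nn
from torchvision import models

ATTRIBUTES = ["long sleeve", "floral", "v-neck", "denim"]  # toy vocabulary

backbone = models.resnet18(weights=None)  # untrained; a real system would fine-tune
backbone.fc = nn.Linear(backbone.fc.in_features, len(ATTRIBUTES))
backbone.eval()

def predict_attributes(image: torch.Tensor, threshold: float = 0.5):
    """Return the attribute strings whose sigmoid score exceeds threshold."""
    with torch.no_grad():
        scores = torch.sigmoid(backbone(image.unsqueeze(0)))[0]
    return [a for a, s in zip(ATTRIBUTES, scores) if s > threshold]

# Usage: a random 3x224x224 "image" stands in for a real product photo.
print(predict_attributes(torch.randn(3, 224, 224)))
```

The predicted attribute strings could then serve both as the textual input to the forecaster and as the queries probed against Google Trends.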