2017
DOI: 10.1109/tmm.2016.2639382

Being a Supercook: Joint Food Attributes and Multimodal Content Modeling for Recipe Retrieval and Exploration

Cited by 91 publications (43 citation statements)
References 38 publications
“…[Su et al 2014] treated ingredients as features and constructed different classifiers to predict the cuisine labels of recipes. [Min et al 2017a] utilized a multimodal deep Boltzmann machine to explore both visual and ingredient information for multimodal recipe classification. In addition, [Druck 2013] utilized various information from the recipe, including the title, the set of ingredients, and the ordered list of preparation steps, to predict recipe attributes such as tastes and flavors.…”
Section: Recognition (mentioning, confidence: 99%)
“…[Chen et al 2017a] exploited rich food attributes for cross-modal recipe retrieval. [Min et al 2017a] utilized a multi-modal Deep Boltzmann Machine for recipe-image retrieval. [Salvador et al 2017] developed a hybrid neural…”
Section: Reference (mentioning, confidence: 99%)
“…In the past few years, labeled image datasets have played a critical role in high-level image understanding [Simonyan, 2007;Min, 2016;Zhao, 2018;Zhang, 2017;Xie, 2019;Shu, 2018;Wang, 2015;Hu, 2017;Hua, 2017;Liu, 2018;Huang, 2018;Xu, 2017]. However, the process of constructing manually labeled datasets is both time-consuming and labor-intensive [Deng, 2009].…”
Section: Introduction (mentioning, confidence: 99%)
“…User comments on a particular food can provide interesting insights into the eating habits, cuisines, and cultures surrounding food [1]. However, this collective information can be used by automated models to achieve better classification and segmentation accuracy [2].…”
Section: Introduction (mentioning, confidence: 99%)