2020
DOI: 10.1109/tmm.2019.2945180
Hierarchical Attention Network for Visually-Aware Food Recommendation

Cited by 66 publications (19 citation statements)
References 47 publications
“…A few of the proposed approaches incorporate user information into the recommendation procedure (i.e., collaborative filtering), but they only consider similar users based on overlapping rated recipes, ignoring the relational information among users, recipes, and ingredients (Freyne and Berkovsky, 2010; Forbes and Zhu, 2011; Ge et al., 2015; Vivek et al., 2018; Khan et al., 2019; Gao et al., 2020). For example, Yang et al. (2017) developed a framework to learn food preferences from item-wise and pairwise recipe image comparisons.…”
Section: Related Work
confidence: 99%
“…Existing recipe recommendation approaches are mostly based on the similarity between recipes (Yang et al., 2017; Chen et al., 2020). A few approaches have tried to take user information into account (Freyne and Berkovsky, 2010; Forbes and Zhu, 2011; Ge et al., 2015; Vivek et al., 2018; Khan et al., 2019; Gao et al., 2020), but they only define similar users based on the recipes rated in common, ignoring the relational information among users, recipes, and ingredients. Nevertheless, user preference toward food is complex.…”
Section: Introduction
confidence: 99%
“…Table 1 summarizes the statistics of the datasets. One, called Allrecipes, was crawled from Allrecipes.com by Gao et al. in [4]. Each interaction in Allrecipes indicates that the user has tried the corresponding recipe.…”
Section: Experiments, 4.1 Experimental Setup
confidence: 99%
“…We evaluate our proposed method on the same dataset as proposed in [30]. We chose this public dataset because it contains various types of non-stationary noise and allows comparison with other published work.…”
Section: A. Dataset
confidence: 99%
“…To further preserve linguistic information and capture contextual relationships during the enhancement process, a multi-attention mechanism is employed in the U-shaped generators, which consist of encoding layers, transformation blocks, and decoding layers. The attention mechanism can compute long-range relative dependencies among elements in a sequence [28] and has been widely used in both the computer vision and speech fields [29], [30]. The proposed multi-attention uses the attention mechanism in two ways: attention gates in U-Net [31] encoding-decoding layers (AU gate) and self-attention [32] in dilated residual networks [33] (DRN-SA block).…”
Section: Introduction
confidence: 99%
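The long-range dependency computation mentioned in the excerpt above can be sketched as plain scaled dot-product self-attention. This is a generic illustration, not the cited papers' exact AU-gate or DRN-SA architecture; all names and shapes here are illustrative:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over one sequence.

    x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projections.
    Each output position is a weighted sum over ALL positions, which is
    how attention captures long-range relative dependencies.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])           # pairwise affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ v

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                               # illustrative sizes
x = rng.standard_normal((seq_len, d_model))
out = self_attention(x, *(rng.standard_normal((d_model, d_model))
                          for _ in range(3)))
print(out.shape)  # (5, 8)
```

Because every position attends to every other position in a single step, the dependency path length is constant in the sequence length, in contrast to recurrent models where it grows linearly.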