2016
DOI: 10.1007/s11390-016-1642-6

Multi-Task Learning for Food Identification and Analysis with Deep Convolutional Neural Networks

Cited by 53 publications (21 citation statements)
References 18 publications
“…To the best of our knowledge, no DCNN-based food recognition algorithm has previously been developed for Korean food. One of the challenges we faced was the unique characteristics of Korean foods [13]. Input images differed in shape, texture, size, and color, as Korean foods lack a typical or generalized layout.…”
Section: Discussion (mentioning)
confidence: 99%
“…A Convolutional Neural Network (CNN) usually consists of convolutional layers and pooling layers [13]. Notations w and h represent the width and height, and ch denotes the RGB color channels, of the input image I(w, h, ch).…”
Section: Methods (mentioning)
confidence: 99%
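The quoted statement only defines the input notation and says that a CNN stacks convolutional and pooling layers. A minimal sketch of that structure is shown below, assuming PyTorch; the image size (224x224), channel counts, and layer configuration are illustrative choices, not taken from the cited paper.

```python
# Minimal CNN sketch (assumption: PyTorch; layer sizes are illustrative, not from the paper).
import torch
import torch.nn as nn

w, h, ch = 224, 224, 3          # width, height, and RGB color channels of the input image I(w, h, ch)
I = torch.rand(1, ch, h, w)     # one image, channels-first layout as PyTorch expects

cnn = nn.Sequential(
    nn.Conv2d(ch, 16, kernel_size=3, padding=1),  # convolutional layer
    nn.ReLU(),
    nn.MaxPool2d(2),                              # pooling layer halves the spatial size
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # second convolutional layer
    nn.ReLU(),
    nn.MaxPool2d(2),
)

features = cnn(I)
print(features.shape)  # torch.Size([1, 32, 56, 56]) after two 2x2 poolings
```

Each pooling step halves the spatial resolution, so the 224x224 input is reduced to 56x56 feature maps while the channel count grows, which is the usual pattern the quoted description refers to.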
“…However, it remains challenging for artificially intelligent agents to do so. Over the past two decades, many researchers have considered related fields such as image aesthetic assessment [2][3][4] and food image analysis [5][6][7]. Some have already explored aesthetic assessment of food images [8], but they relied on hand-crafted visual features and did not perform quantitative studies on a large-scale dataset.…”
Section: Introduction (mentioning)
confidence: 99%