2019
DOI: 10.1007/978-3-030-34995-0_53
A Deep Network for Automatic Video-Based Food Bite Detection

Abstract: Past research has provided compelling evidence of correlations between individual eating styles and the development of (un)healthy eating patterns, obesity, and other medical conditions. In this setting, an automatic, noninvasive food bite detection system can be a very useful tool in the hands of nutritionists, dietary experts, and medical doctors for exploring real-life eating behaviors and dietary habits. Unfortunately, the automatic detection of food bites can be challenging due to oc…

Cited by 7 publications (11 citation statements). References 20 publications.
“…These features were then used for even more accurate modeling of hand, head, and mouth movements on a 2D plane. While our previous results already achieved a near-perfect agreement with manual video-clip annotations (κ = 0.879) [28], our current results improve the rate of agreement (current κ = 0.894), pointing to superb algorithm performance. On that level, RABiD outperforms all previous comparable efforts of video-based and wrist-worn accelerometer meal analyses.…”
Section: Discussion (supporting)
confidence: 55%
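The agreement figures quoted above (κ = 0.879 and κ = 0.894) are Cohen's kappa values between automatic and manual annotations. The sketch below shows how such a statistic could be computed; the per-segment binary labels and the encoding are illustrative assumptions, not the authors' actual evaluation pipeline.

```python
# Minimal sketch: Cohen's kappa for agreement between automatic and
# manual bite annotations. Label vectors are hypothetical examples.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two equal-length label sequences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of segments where both raters agree.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each rater's marginal label rates.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical per-segment labels: 1 = bite detected, 0 = no bite.
manual    = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
automatic = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]
print(f"kappa = {cohens_kappa(manual, automatic):.3f}")  # -> kappa = 0.800
```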
“…Regarding the internal RABiD outcome validation, we managed to increase our previously published performance [28], using higher resolution images for the extraction of skeletal features. These features were then used for even more accurate modeling of hand, head, and mouth movements on a 2D plane.…”
Section: Discussion (mentioning)
confidence: 99%
“…This is highlighted through the consortium's deployment of novel wearable sensors, its creation of new nutrition and physical activity ontologies within the system, and its unique user profile models. In addition, the consortium hopes to further the advances in technological sensors and their application to nutrition through the development of the following: a bowel sound/movement 'smart band', within-meal chew/bite detection algorithms (Konstantinidis et al. 2019, 2020), a food identification algorithm (Theodoridis et al. 2020) and a food depth perception algorithm to calculate portion sizes (Graikos et al. 2020). Given the increase in noncommunicable diseases and rising levels of obesity within the UK, modern technology applications such as the PROTEIN application could be useful in providing end users with easily accessible, validated tools to make positive lifestyle changes.…”
Section: Results (mentioning)
confidence: 99%
“…This model worked only to detect chewing events (classification) and did not count the number of chews. Recently, in [27] the authors developed a deep network using features from both the face and body to detect bite instances and count the number of bites, but it did not count the number of chews in the eating episodes. A novel deep-learning-based algorithm named “Rapid Automatic Bite Detection” (RABiD) [28], developed by the authors, extracts and processes skeletal features from videos of eating episodes to measure meal duration and bites using a long short-term memory (LSTM) network.…”
Section: Introduction (mentioning)
confidence: 99%
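The quoted description suggests a pipeline of the form: sequences of 2D skeletal keypoints → LSTM → per-frame bite probability. Below is a minimal PyTorch sketch of that idea; the layer sizes, the 18-keypoint input, the `BiteDetector` name, and the rising-edge bite counting are illustrative assumptions, not the published RABiD architecture.

```python
# Minimal sketch of an LSTM over skeletal-feature sequences for bite
# detection. Architecture details are assumptions, not the RABiD paper's.
import torch
import torch.nn as nn

class BiteDetector(nn.Module):
    def __init__(self, num_keypoints=18, hidden_size=64):
        super().__init__()
        # Each frame is a flattened vector of (x, y) keypoint coordinates.
        self.lstm = nn.LSTM(input_size=num_keypoints * 2,
                            hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # per-frame bite logit

    def forward(self, x):
        # x: (batch, frames, num_keypoints * 2)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out)).squeeze(-1)  # (batch, frames)

# Hypothetical usage: 2 clips, 120 frames each, 18 keypoints per frame.
model = BiteDetector()
clips = torch.randn(2, 120, 18 * 2)
probs = model(clips)
# Counting bites could then reduce to counting rising edges above a threshold.
bites = ((probs > 0.5).int().diff(dim=1) == 1).sum(dim=1)
print(probs.shape, bites)
```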