2020
DOI: 10.1109/access.2020.2998716
Automatic Count of Bites and Chews From Videos of Eating Episodes

Abstract: Methods for measuring eating behavior (known as meal microstructure) often rely on manual annotation of bites, chews, and swallows in meal videos or wearable sensor signals. Manual annotation can be time consuming and error prone, while wearable sensors may not capture every aspect of eating (e.g., chews only). The aim of this study is to develop a method to detect and count bites and chews automatically from meal videos. The method was developed on a dataset of 28 volunteers consuming unrestricted meals i…

Cited by 23 publications (14 citation statements). References: 42 publications.
“…Details on the use of AI for the self-regulation of weight loss-related behaviours are shown in Table 2. Of the studies on enhancing self-monitoring, twenty-nine (43.9%) were on eating behaviours (58–76), seven (10.6%) were on energy intake (34,77–82), thirty-three (50%) were on physical activity (26,51–55,60,74, …), and nine (13.6%) were on energy expenditure (83,85,92,94–97,100,101). Of the studies on optimising goal setting, five were on optimising eating behaviour goals (e.g., eating at a certain time of the day and energy intake) (48,49,53) and six were on optimising physical activity…”
Section: Self-regulation of Weight Loss-related Behaviours (mentioning, confidence: 99%)
“…The studies reported recognition accuracies ranging from 69.2% to 99.1%. Machine recognition techniques used in the included studies were gesture (n = 32) (51,56,58,60–62,64,65,70,74,81,83–93,95–104), image (n = 14) (34,63,66–68,74,76,78–80,88,93,94,101), sound (n = 7) (57–59,69,71,72…”
Section: Machine Perception: Self-monitoring (mentioning, confidence: 99%)
“…Similarly, Papapanagiotou et al. [25] used convolutional neural networks to achieve 98% accuracy and an F1-score of 88.3%. Recently, Hossain et al. [26] used a similar approach to detect faces, followed by transfer learning with AlexNet to classify images as bite or no-bite, and used affine optical flow to detect rotational movement in the detected faces. They reported a mean accuracy of 88.9 ± 7.4% for chew counting.…”
Section: Automatic Chew Counting (mentioning, confidence: 99%)
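
The face-detection-plus-motion idea quoted above can be illustrated with a short sketch. This is not the pipeline of Hossain et al. [26]: it substitutes OpenCV's Haar cascade for their face detector and Farnebäck dense optical flow for their affine optical flow, omits the AlexNet bite classifier entirely, and the video path, crop geometry, and peak-spacing threshold are illustrative assumptions.

```python
# A minimal sketch of flow-based chew counting, NOT the pipeline of
# Hossain et al. [26]: Haar cascade and Farneback dense flow are stand-ins
# for their face detector and affine optical flow; the AlexNet bite
# classifier is omitted. "meal.mp4" and all thresholds are illustrative.
import cv2
import numpy as np
from scipy.signal import find_peaks

cap = cv2.VideoCapture("meal.mp4")  # hypothetical input video
face_det = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

prev_jaw, motion = None, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_det.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        prev_jaw = None  # lost the face; restart the flow baseline
        continue
    x, y, w, h = faces[0]
    jaw = cv2.resize(gray[y + h // 2 : y + h, x : x + w], (64, 32))
    if prev_jaw is not None:
        flow = cv2.calcOpticalFlowFarneback(
            prev_jaw, jaw, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        motion.append(float(np.mean(flow[..., 1])))  # mean vertical motion
    prev_jaw = jaw
cap.release()

# One chew = one jaw open-close cycle = one peak in the vertical-motion
# signal; distance=10 (~0.3 s at 30 fps) suppresses double counting.
peaks, _ = find_peaks(np.asarray(motion), distance=10)
print(f"estimated chew count: {len(peaks)}")
```

The design idea this shares with the quoted methods is that chewing appears as quasi-periodic vertical motion of the lower face, so counting chews reduces to counting peaks in a one-dimensional motion signal.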
“…Whereas cameras provide opportunities to improve dietary intake assessment, advanced video image analysis techniques offer new opportunities for automated detection of eating behaviors, such as emotion detection (in adults and children), acceptance and rejection behavior in infants, and automated oral processing behaviors such as chews and bites across age groups [78,79]. Deep learning models can be built on data extracted from the video, for example facial landmarks, by training the model on annotated events and their timing.…”
Section: Video Image Analysis and Sensors to Improve Eating Behavior Measures (mentioning, confidence: 99%)
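
As a rough illustration of the landmark-based approach described in that excerpt, the sketch below extracts a per-frame mouth-opening signal with MediaPipe Face Mesh and fits a generic classifier on fixed-length windows labelled against annotated events. The landmark indices (13/14 for the inner lips), the window length, the file names, and the placeholder labels are all assumptions for illustration, not details from the cited studies [78,79].

```python
# A rough illustration of landmark-based event detection, not a method from
# the cited studies. MediaPipe Face Mesh indices 13/14 (inner lips), the
# 30-frame windows, file names, and placeholder labels are all assumptions.
import cv2
import numpy as np
import mediapipe as mp
from sklearn.ensemble import RandomForestClassifier

def mouth_opening_signal(video_path):
    """Per-frame vertical distance between the inner-lip landmarks."""
    mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
    cap, signal = cv2.VideoCapture(video_path), []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if res.multi_face_landmarks:
            lm = res.multi_face_landmarks[0].landmark
            signal.append(abs(lm[13].y - lm[14].y))  # mouth opening
        else:
            signal.append(0.0)  # no face found in this frame
    cap.release()
    return np.asarray(signal)

def windows(signal, size=30, step=15):
    """Overlapping fixed-length windows (~1 s at 30 fps)."""
    return np.stack([signal[i : i + size]
                     for i in range(0, len(signal) - size, step)])

# Training: label each window 1 if it overlaps a manually annotated
# chew/bite event. The zeros below are placeholders for those annotations.
X = windows(mouth_opening_signal("annotated_meal.mp4"))  # hypothetical file
y = np.zeros(len(X))  # replace with labels from annotated event times
clf = RandomForestClassifier().fit(X, y)

# Inference: classify windows of a new meal video as chewing / not chewing.
pred = clf.predict(windows(mouth_opening_signal("new_meal.mp4")))
```

In practice, the labels would be derived from the annotated event times the excerpt mentions; the all-zero placeholder here only keeps the sketch runnable.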