2016
DOI: 10.1109/jsen.2015.2469095

AutoDietary: A Wearable Acoustic Sensor System for Food Intake Recognition in Daily Life

Cited by 115 publications (85 citation statements)
References 24 publications
“…[1] compared six wearable microphone locations for recording chewing sounds and observed that the inner ear location provides the highest acoustic signal intensity. [9] presented the AutoDietary system for food type recognition and obtained an 84.9% accuracy in identifying food types between 7 types. Meanwhile, in [30], the authors modified a hearing aid to include two microphones (one in-ear and one for ambient noise), also in an attempt for food type classification.…”
Section: Related Work
confidence: 99%
“…Depending on the studies and approaches, different models and classifiers for event detection will be utilized. We take into account Bi et al [90] and Lopez-Meyer et al [88]. In these systems, the first step is undeniably sound recording with an 8000-Hz sampling rate with an amplifier.…”
Section: Eating Behavior/Food Intake Detection
confidence: 99%
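
The front end described in this statement (sound recording at an 8000 Hz sampling rate, followed by segmentation into frames) could be prototyped roughly as below. This is only a minimal sketch: the 25 ms frame length, 10 ms hop, and log-energy feature are illustrative assumptions and are not values taken from the cited works.

```python
import numpy as np

FS = 8_000          # sampling rate reported in the cited systems (Hz)
FRAME_MS = 25       # assumed frame length; not specified in the quoted statement
HOP_MS = 10         # assumed hop size; not specified in the quoted statement

def frame_signal(x: np.ndarray, fs: int = FS,
                 frame_ms: int = FRAME_MS, hop_ms: int = HOP_MS) -> np.ndarray:
    """Split a mono acoustic signal into overlapping frames."""
    frame_len = int(fs * frame_ms / 1000)
    hop_len = int(fs * hop_ms / 1000)
    n_frames = 1 + max(0, (len(x) - frame_len) // hop_len)
    frames = np.stack([x[i * hop_len : i * hop_len + frame_len]
                       for i in range(n_frames)])
    return frames  # shape: (n_frames, frame_len)

# Example: 2 seconds of synthetic noise stands in for the amplified microphone signal.
signal = np.random.randn(2 * FS)
frames = frame_signal(signal)
log_energy = np.log(np.sum(frames ** 2, axis=1) + 1e-10)  # one simple per-frame feature
print(frames.shape, log_energy.shape)
```
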
“…The next step includes framing to produce sound frames for event detection afterward. Bi et al [90] employed the hidden Markov model (HMM) for event detection. This model has long been utilized for sound recognition [138,139].…”
Section: Eating Behavior/Food Intake Detection
confidence: 99%
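
The HMM-based event detection this statement attributes to Bi et al. could be sketched as follows. This is a minimal illustration assuming the hmmlearn library and a simple two-state Gaussian HMM (background vs. eating-related sound) over per-frame features; the cited paper's actual model topology, features, and training procedure are not reproduced here.

```python
import numpy as np
from hmmlearn import hmm   # assumption: hmmlearn as a stand-in HMM implementation

# Per-frame feature vectors, e.g. the log-energy (or MFCCs) computed from the
# 8 kHz frames above; shape (n_frames, n_features). Random data used for illustration.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 3))

# Two hidden states as a simple stand-in: 0 = background, 1 = eating-related sound.
model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=50)
model.fit(features)                 # unsupervised fit; a real system would train on labeled data
states = model.predict(features)    # Viterbi decoding of the most likely state sequence

# Contiguous runs of the "eating" state can then be reported as detected intake events.
print(np.bincount(states))
```
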
“…To convert image or video to text we need to use NLP and based on the information gathered we need to transfer the specific issue to the respective department [15].…”
Section: Machine Learning (Predictions Phase)
confidence: 99%