Objective monitoring of food intake and ingestive behavior in a free-living environment remains an open problem that has significant implications for the study and treatment of obesity and eating disorders. In this paper, a novel wearable sensor system (automatic ingestion monitor, AIM) is presented for objective monitoring of ingestive behavior in free living. The proposed device integrates three sensor modalities that wirelessly interface to a smartphone: a jaw motion sensor, a hand gesture sensor, and an accelerometer. A novel sensor fusion and pattern recognition method was developed for subject-independent food intake recognition. The device and the methodology were validated with data collected from 12 subjects wearing AIM over the course of 24 h, during which neither the subjects' daily activities nor their food intake was restricted in any way. Results showed that the system was able to detect food intake with an average accuracy of 89.8%, which suggests that AIM can potentially be used as an instrument to monitor ingestive behavior in free-living individuals.
Objective and automatic sensor systems to monitor the ingestive behavior of individuals have emerged as a potential replacement for inaccurate self-report methods. This paper presents a simple sensor system and related signal processing and pattern recognition methodologies to detect periods of food intake based on non-invasive monitoring of chewing. A piezoelectric strain gauge sensor was used to capture movement of the lower jaw from 20 volunteers during periods of quiet sitting, talking, and food consumption. These signals were segmented into non-overlapping epochs of fixed length and processed to extract a set of 250 time and frequency domain features for each epoch. A forward feature selection procedure was implemented to choose the most relevant features, identifying the 4 to 11 features most critical for food intake detection. Support vector machine classifiers were trained to create food intake detection models. Twenty-fold cross-validation demonstrated a per-epoch classification accuracy of 80.98% at a fine time resolution of 30 s. The simplicity of the chewing strain sensor may result in a less intrusive and simpler way to detect food intake. The proposed methodology could lead to the development of a wearable sensor system to assess the eating behaviors of individuals.
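The epoch-based pipeline described above (fixed-length epochs, forward feature selection, SVM classification) can be sketched with scikit-learn. This is a minimal illustration, not the authors' implementation: the data are synthetic, only 40 of the paper's 250 features are simulated to keep it fast, and `SequentialFeatureSelector` stands in for whatever forward-selection procedure was actually used.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for 30-s epochs: 200 epochs x 40 TD/FD features.
X = rng.normal(size=(200, 40))
y = rng.integers(0, 2, size=200)      # 1 = food intake epoch, 0 = other activity
X[y == 1, :5] += 1.0                  # make a handful of features informative

svm = SVC(kernel="linear")

# Greedy forward selection: add one feature at a time, keeping the one
# that most improves cross-validated SVM accuracy.
selector = SequentialFeatureSelector(svm, n_features_to_select=4,
                                     direction="forward", cv=5)
selector.fit(X, y)
chosen = np.flatnonzero(selector.get_support())

# Per-epoch accuracy of the SVM restricted to the selected features.
acc = cross_val_score(svm, X[:, chosen], y, cv=5).mean()
```

On real chewing-sensor features, the reported result was a compact subset of 4 to 11 features with about 81% per-epoch accuracy; this sketch only reproduces the shape of that workflow.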
Current, validated methods for dietary assessment rely on self-report, which tends to be inaccurate, time-consuming, and burdensome. The objective of this work was to demonstrate the suitability of estimating energy intake using individually-calibrated models based on Counts of Chews and Swallows (CCS models). In a laboratory setting, subjects consumed three identical meals (training meals) and a fourth meal with different content (validation meal). Energy intake was estimated by four different methods: weighed food records (gold standard), diet diaries, photographic food records, and CCS models. Counts of chews and swallows were measured using wearable sensors and video analysis. Results for the training meals demonstrated that CCS models presented the lowest reporting bias and a lower error as compared to diet diaries. For the validation meal, CCS models showed reporting errors that were not different from the diary or the photographic method. The increase in error for the validation meal may be attributed to differences in the physical properties of foods consumed during training and validation meals. However, this may be potentially compensated for by including correction factors into the models. This study suggests that estimation of energy intake from CCS may offer a promising alternative to overcome limitations of self-report.
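An individually-calibrated CCS model of the kind described above can be sketched as a least-squares fit of measured energy intake against chew and swallow counts from the training meals, then applied to the counts of a new meal. The linear form, the counts, and the kcal values below are all illustrative assumptions, not study data.

```python
import numpy as np

# Hypothetical per-subject training meals: (chew count, swallow count) per meal,
# with energy intake from weighed food records (kcal). Illustrative numbers only.
counts = np.array([[950.0, 120.0],
                   [1010.0, 131.0],
                   [880.0, 115.0]])
kcal = np.array([640.0, 690.0, 610.0])

# Calibrate one model per subject: EI ~ a*chews + b*swallows, by least squares.
coeffs, *_ = np.linalg.lstsq(counts, kcal, rcond=None)

# Estimate the energy intake of a validation meal from its counts alone.
validation_counts = np.array([720.0, 98.0])
ei_hat = float(validation_counts @ coeffs)
```

The abstract's point about validation-meal error fits this picture: if the validation meal's foods have different physical properties (energy per chew, bolus size per swallow), the calibrated coefficients transfer imperfectly, which is what a correction factor would compensate for.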
Many methods for monitoring diet and food intake rely on subjects self-reporting their daily intake. These methods are subjective, potentially inaccurate, and need to be replaced by more accurate and objective methods. This paper presents a novel approach that uses an Electroglottograph (EGG) device for objective and automatic detection of food intake. Thirty subjects participated in a 4-visit experiment involving the consumption of meals with self-selected content. Variations in the electrical impedance across the larynx caused by the passage of food during swallowing were captured by the EGG device. To compare the performance of the proposed method with a well-established acoustical method, a throat microphone was used for monitoring swallowing sounds. Both signals were segmented into non-overlapping epochs of 30 s and processed to extract wavelet features. Subject-independent classifiers were trained using Artificial Neural Networks to identify periods of food intake from the wavelet features. Results from leave-one-out cross-validation showed an average per-epoch classification accuracy of 90.1% for the EGG-based method and 83.1% for the acoustic-based method, demonstrating the feasibility of using an EGG for food intake detection.
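The wavelet features mentioned above are typically subband energies from a discrete wavelet decomposition of each 30-s epoch. The sketch below uses a hand-rolled Haar transform purely for self-containment; the paper does not specify the wavelet family or feature definition, so both are assumptions here, as are the sampling rate and the synthetic signal.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar wavelet transform: approximation + detail halves."""
    s = signal[: len(signal) // 2 * 2].reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2)
    return approx, detail

def wavelet_energy_features(epoch, levels=4):
    """Relative energy per subband: one value per detail level plus the residue."""
    feats, approx = [], np.asarray(epoch, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        feats.append(np.sum(detail ** 2))
    feats.append(np.sum(approx ** 2))
    feats = np.array(feats)
    return feats / feats.sum()            # normalize to relative energies

# Synthetic 30-s epoch at an assumed 64 Hz: a low-frequency "swallow-like"
# component plus noise, standing in for an EGG impedance trace.
rng = np.random.default_rng(1)
t = np.linspace(0, 30, 30 * 64)
epoch = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.normal(size=t.size)
features = wavelet_energy_features(epoch)
```

Feature vectors like `features` would then feed the subject-independent ANN classifiers described in the abstract.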
In Machine Learning applications, the selection of the classification algorithm depends on the problem at hand. This paper provides a comparison of the performance of the Support Vector Machine (SVM) and the Artificial Neural Network (ANN) for food intake detection. A combination of time domain (TD) and frequency domain (FD) features, extracted from signals captured using a jaw motion sensor, were used to train both types of classifiers. Data were collected from 12 subjects in free living for a period of 24 h under unrestricted conditions. ANNs with different numbers of hidden-layer neurons and SVMs with different kernels were trained using a leave-one-out cross-validation scheme. The ANN achieved an average accuracy of 86.86 ± 6.5%, whereas the SVM (with linear kernel) achieved an average classification accuracy of 81.93 ± 9.22%. Data collected from an independent subject in a separate study were used to evaluate the performance of these classifiers in terms of the number of meals detected per day, resulting in an accuracy of 72.72% for the ANN and 63.63% for the SVM. The results suggest that the ANN may perform better than the SVM for this specific problem.
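A leave-one-subject-out comparison of the two classifier families can be sketched as follows. Everything here is a synthetic stand-in: the feature dimensions, the single ANN architecture, and the per-subject grouping are illustrative choices, not the configurations evaluated in the paper.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic per-epoch TD+FD features for 12 subjects, 40 epochs each.
n_subj, per_subj = 12, 40
X = rng.normal(size=(n_subj * per_subj, 10))
y = rng.integers(0, 2, size=len(X))       # 1 = food intake epoch
X[y == 1, :3] += 1.2                      # a few informative features
groups = np.repeat(np.arange(n_subj), per_subj)

# Leave-one-subject-out: every fold tests on one fully held-out subject,
# which measures subject-independent generalization.
logo = LeaveOneGroupOut()
ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
svm = SVC(kernel="linear")
ann_acc = cross_val_score(ann, X, y, cv=logo, groups=groups).mean()
svm_acc = cross_val_score(svm, X, y, cv=logo, groups=groups).mean()
```

Holding out whole subjects rather than random epochs is the key design choice: epochs from the same subject are correlated, so random splits would overstate accuracy on unseen individuals.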
Selection of the most representative features is important for any pattern recognition system. This paper investigates the importance of time domain (TD) and frequency domain (FD) features used for automatic food intake detection in a wearable sensor system by using Random Forests classification. Features were extracted from signals collected using three different sensor modalities integrated into the Automatic Ingestion Monitor (AIM): a jaw motion sensor, a hand gesture sensor, and an accelerometer. Data were collected from 12 subjects wearing AIM in free living for a 24-h period during which intake was unrestricted. Features from the sensor signals were used to train the Random Forests classifier, which estimated the importance of each feature as part of the training process. Results indicated that FD features from the jaw motion signal and TD features from the accelerometer signal were the most relevant features for food intake detection.
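Random Forests yield a feature-importance score as a by-product of training (impurity-based importance in scikit-learn). The sketch below shows the mechanism on synthetic data; which columns correspond to which sensor modality is a labeling assumption for illustration, not the AIM feature layout.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic epochs: pretend columns 0-4 are jaw-motion FD features,
# 5-9 accelerometer TD features, and 10-19 uninformative filler.
X = rng.normal(size=(300, 20))
y = rng.integers(0, 2, size=300)
X[y == 1, 0] += 1.5          # one informative "jaw FD" feature
X[y == 1, 7] += 1.5          # one informative "accelerometer TD" feature

# Importance estimates come for free with the fitted forest.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]   # most important first
```

Ranking `feature_importances_` in this way is how one would arrive at the paper's conclusion that jaw-motion FD features and accelerometer TD features dominate.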
Automatic methods for food intake detection are needed to objectively monitor the ingestive behavior of individuals in a free-living environment. In this study, a pattern recognition system was developed for detection of food intake through the classification of jaw motion. A total of 7 subjects participated in laboratory experiments that involved several activities of daily living: talking, walking, reading, resting, and food intake, while being instrumented with a wearable jaw motion sensor. Inclusion of such activities introduced high variability into the sensor signal and thus challenged the classification task. A forward feature selection process selected the most appropriate set of features to represent the chewing signal. Linear and RBF Support Vector Machine (SVM) classifiers were evaluated to find the most suitable classifier for generalizing across the high variability of the input signal. Results showed that an average accuracy of 90.52% can be obtained using a linear SVM with a time resolution of 15 s.
Monitoring Ingestive Behavior (MIB) of individuals is of special importance to identify and treat eating patterns associated with obesity and eating disorders. Current methods for MIB require subjects to report every meal consumed, which is burdensome and tends to increase reporting bias over time. This study presents an evaluation of the burden imposed by two wearable sensors for MIB during unrestricted food intake: a strain sensor to detect chewing events and a throat microphone to detect swallowing sounds. A total of 30 healthy subjects with various levels of adiposity participated in experiments involving the consumption of four meals in four different visits. A questionnaire was administered to subjects at the end of the last visit to evaluate the sensors' burden in terms of the comfort levels experienced. Results showed that the sensors presented high comfort levels, as subjects indicated that the way they ate their meal was not considerably affected by the presence of the sensors. A statistical analysis showed that the chewing sensor presented significantly higher comfort levels than the swallowing sensor. The outcomes of this study confirmed the suitability of the chewing and swallowing sensors for MIB and highlighted important aspects of comfort that should be addressed to obtain acceptable and less burdensome wearable sensors for MIB.