Most clinical and biomedical data contain missing values. A patient’s record may be split across multiple institutions, devices may fail, and sensors may not be worn at all times. These missing values are often ignored, which can introduce bias and error when the data are mined. Further, the data are not simply missing at random: the measurement of a variable such as blood glucose may depend on its prior values as well as on the values of other variables. These dependencies also extend across time, yet current methods do not incorporate such temporal relationships or handle multiple types of missingness. To address this, we propose an imputation method (FLk-NN) that captures time-lagged correlations both within and across variables by combining two imputation methods, one based on an extension of k-NN and the other on the Fourier transform. This enables imputation even when all data at a time point are missing and when different types of missingness occur both within and across variables. Compared with other approaches on three biological datasets (simulated and actual Type 1 diabetes datasets, and multi-modality neurological ICU monitoring), the proposed method achieves the highest imputation accuracy. This holds when up to half of the data are missing and when runs of consecutive missing values make up a significant fraction of the overall time series length.
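To make the general idea concrete, the sketch below (not the published FLk-NN implementation; every function name, lag, and parameter is a hypothetical choice) estimates missing entries two ways — per column from the dominant Fourier components, and from time-lagged k-nearest neighbours across variables — and averages the two estimates.

```python
import numpy as np

def fourier_impute(x, n_keep=5, n_iter=10):
    """Fill gaps in a 1-D series from its dominant Fourier components (sketch)."""
    x = np.asarray(x, dtype=float)
    miss = np.isnan(x)
    filled = np.where(miss, np.nanmean(x), x)            # crude initial guess
    for _ in range(n_iter):
        spec = np.fft.rfft(filled)
        spec[np.argsort(np.abs(spec))[:-n_keep]] = 0.0    # keep strongest components
        recon = np.fft.irfft(spec, n=len(filled))
        filled[miss] = recon[miss]                        # refine missing entries only
    return filled

def lagged_knn_impute(X, lag=1, k=3):
    """Impute X[t, j] by averaging the k rows whose lag-shifted observations
    are closest to row t's lagged observations (time-lagged, cross-variable)."""
    X = np.asarray(X, dtype=float)
    out = X.copy()
    T = X.shape[0]
    for t, j in zip(*np.where(np.isnan(X))):
        if t < lag or np.isnan(X[t - lag]).any():
            continue                                      # no usable lagged query
        query = X[t - lag]
        cands = [s for s in range(lag, T)
                 if s != t and not np.isnan(X[s, j])
                 and not np.isnan(X[s - lag]).any()]
        if not cands:
            continue
        dists = [np.linalg.norm(X[s - lag] - query) for s in cands]
        nearest = np.argsort(dists)[:k]
        out[t, j] = np.mean([X[cands[i], j] for i in nearest])
    return out

def combined_impute(X):
    """Average the per-column Fourier estimate with the lagged k-NN estimate,
    falling back to the Fourier estimate where k-NN finds no neighbours."""
    F = np.column_stack([fourier_impute(X[:, j]) for j in range(X.shape[1])])
    K = lagged_knn_impute(X)
    est = np.where(np.isnan(K), F, (F + K) / 2.0)
    return np.where(np.isnan(X), est, X)
```

Because the Fourier step only needs the column's own observed values, it can still produce an estimate at time points where every variable is missing, which is where a purely neighbour-based scheme breaks down.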
Activity recognition has many health applications, from helping individuals track meals and exercise to providing treatment reminders to people with chronic illness and improving closed-loop control of diabetes. While eating is one of the most fundamental health-related activities, it has proven difficult to recognize accurately and unobtrusively. Body-worn and environmental sensors lack the needed specificity, while acoustic and accelerometer sensors worn around the neck may be intrusive and uncomfortable. We propose a new approach to identifying eating based on head movement data from Google Glass. We develop the Glass Eating and Motion (GLEAM) dataset using sensor data collected from 38 participants conducting a series of activities including eating. We demonstrate that head movement data are sufficient to allow recognition of eating with high precision and minimal impact on privacy and comfort.
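The abstract does not spell out the recognition pipeline, so the fragment below is only a generic, hypothetical sketch of the standard windowed-features-plus-classifier recipe for inertial head-motion data; the sampling rate, window lengths, feature set, and the random arrays standing in for GLEAM recordings are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, fs=50, win_s=10, step_s=5):
    """Slice a (T, 3) head-motion stream into overlapping windows and
    summarise each window with simple statistics (illustrative feature set)."""
    win, step = int(win_s * fs), int(step_s * fs)
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0),
                                     np.abs(np.diff(w, axis=0)).mean(0)]))
    return np.array(feats)

# Placeholder data: random motion streams and per-window eating labels.
X_train = window_features(np.random.randn(6000, 3))
y_train = np.random.randint(0, 2, size=len(X_train))

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
pred = clf.predict(window_features(np.random.randn(3000, 3)))  # 1 = eating window
```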
While many sensors can monitor physical activity, no device can unobtrusively measure eating at the same level of detail. Yet tracking and reacting to food consumption is key to managing many chronic diseases such as obesity and diabetes. Eating recognition has primarily relied on a single sensor at a time in a constrained environment, but individual sensors may fail and each captures different aspects of eating. We present a multi-modality study of eating recognition that combines head and wrist motion (Google Glass, smartwatches on each wrist) with audio (a custom earbud microphone). We collect 72 hours of data from 6 participants wearing all sensors and eating an unrestricted set of foods, and we annotate video recordings to obtain ground truth. Using our noise cancellation method, audio sensing alone achieved 92% precision and 89% recall in finding meals, while motion sensing was needed to find individual intakes.
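One way to read this result is hierarchical: audio localizes whole meals and motion pinpoints intakes inside them. The fragment below sketches such a fusion rule under that assumption; the interface (meal windows as (start, end) seconds from an audio detector, intake timestamps from a motion detector) is hypothetical and not the paper's actual method.

```python
def fuse_meal_and_intakes(audio_meal_windows, motion_intake_times):
    """Keep only motion-detected intakes that fall inside audio-detected meals."""
    return [t for t in motion_intake_times
            if any(start <= t <= end for start, end in audio_meal_windows)]

# Made-up detections: one meal from 600 s to 1500 s.
meals = [(600.0, 1500.0)]
intake_times = [120.0, 640.5, 700.2, 1480.9, 2000.0]
print(fuse_meal_and_intakes(meals, intake_times))   # [640.5, 700.2, 1480.9]
```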
Graph Convolutional Networks (GCNs) have been widely applied in various fields owing to their power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes of the same class tend to connect to each other), ignoring the heterophily present in many real-world networks (i.e., nodes of different classes tend to form edges). Existing methods handle heterophily mainly by aggregating higher-order neighborhoods or combining intermediate representations, which introduces noise and irrelevant information into the result. These methods do not change the propagation mechanism itself, which is a fundamental part of GCNs and still operates under the homophily assumption, making it difficult to distinguish the representations of nodes from different classes. To address this problem, we design a novel propagation mechanism that automatically adjusts the propagation and aggregation process according to the homophily or heterophily between node pairs. To learn the propagation process adaptively, we introduce two measurements of the homophily degree between node pairs, learned from topological and attribute information, respectively. We then incorporate the learnable homophily degree into the graph convolution framework, which is trained end-to-end, enabling it to go beyond the homophily assumption. More importantly, we theoretically prove that our model can constrain the similarity of node representations according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms state-of-the-art methods under heterophily or low homophily, and achieves competitive performance under homophily.
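A toy PyTorch sketch of the underlying idea follows: each edge gets a learned, signed homophily score, so propagation can either pull neighbouring representations together or push them apart. It is an assumption-laden simplification, not the paper's propagation rule, which learns separate topology-based and attribute-based homophily degrees.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HomophilyAwareConv(nn.Module):
    """Toy graph convolution: each edge (i, j) gets a learned score in [-1, 1];
    positive scores aggregate neighbours as usual (homophily), negative scores
    subtract them (heterophily). Illustrative only."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.score = nn.Linear(2 * in_dim, 1)   # homophily degree per edge

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: dense (N, N) float 0/1 adjacency
        n = x.size(0)
        pair = torch.cat([x.unsqueeze(1).expand(n, n, -1),
                          x.unsqueeze(0).expand(n, n, -1)], dim=-1)
        s = torch.tanh(self.score(pair)).squeeze(-1) * adj   # signed edge weights
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        agg = (s @ x) / deg                                  # signed neighbourhood mean
        return F.relu(self.lin(x + agg))                     # combine self and neighbours
```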