ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp43922.2022.9747062
Dynimp: Dynamic Imputation for Wearable Sensing Data through Sensory and Temporal Relatedness

Abstract: In wearable sensing applications, data are inevitably irregularly sampled or partially missing, which poses challenges for any downstream application. A unique aspect of wearable data is that it is time-series data in which each channel can be correlated with another, such as the x, y, and z axes of an accelerometer. We argue that traditional methods have rarely made use of both the time-series dynamics of the data and the relatedness of the features from different sensors. We propose a model, termed DynImp, to …

Cited by 3 publications (1 citation statement); references 17 publications.
“…The core-TCN can be treated as a feature extractor or backbone of the network, and the MB layer is the output classifier. Existing literature has shown that the classifier component of the network suffers more from the imbalanced data issue, while the backbone portion is less susceptible during training [54], [55]. As such, we propose to train the core-TCN with the whole dataset and use a balanced sub-dataset to train each branching output in the MB layer to alleviate the impact of imbalanced data on the classifier.…”
Section: Multi-branching Outputs
confidence: 99%
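The citing work trains the core-TCN backbone on the full (imbalanced) dataset while each branching classifier in the MB layer is trained on a class-balanced sub-dataset. A minimal sketch of the balanced sub-dataset construction, assuming NumPy; `balanced_subset` is a hypothetical helper for illustration, not the authors' code:

```python
import numpy as np

def balanced_subset(X, y, rng=None):
    """Undersample so every class contributes as many samples as the rarest class."""
    rng = rng or np.random.default_rng(0)
    classes, counts = np.unique(y, return_counts=True)
    n = counts.min()  # size of the rarest class
    idx = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n, replace=False)
        for c in classes
    ])
    rng.shuffle(idx)
    return X[idx], y[idx]

# Imbalanced toy data: 90 samples of class 0, 10 of class 1.
X = np.arange(100, dtype=float).reshape(100, 1)
y = np.array([0] * 90 + [1] * 10)

# The backbone would be trained on the full (X, y); each branching
# output would be trained on a balanced subset such as (Xb, yb).
Xb, yb = balanced_subset(X, y)
```

Under this scheme only the classifier heads see rebalanced data, which matches the cited observation that the backbone is less susceptible to class imbalance than the classifier.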