Proceedings of the 17th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems 2014
DOI: 10.1145/2641798.2641816

Imputing missing values in sensor networks using sparse data representations

Abstract: Sensor networks are increasingly being used to provide timely information about the physical, urban and human environment. Algorithms that depend on sensor data often assume that the readings are complete. However, node failures or communication breakdowns result in missing data entries, preventing the use of such algorithms. To impute these missing values, we propose a method of exploiting spatial correlations which is based on the sparse autoencoder and inspired by the conditional Restricted Boltzmann Machine…
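
The abstract describes imputing missing sensor readings by exploiting spatial correlations with a sparse autoencoder. As a hedged illustration of that general idea (not the authors' published model), the sketch below trains a small PyTorch autoencoder with an L1 sparsity penalty on complete per-timestep snapshots and imputes by zero-filling missing readings and reading off the reconstruction; the node count, hidden size, and penalty weight are assumed values.

```python
# Illustrative sketch only: a sparse autoencoder over per-timestep sensor
# snapshots (one value per node), with an L1 penalty on the hidden code.
import torch
import torch.nn as nn

N_SENSORS = 32        # assumed number of nodes in one snapshot
HIDDEN = 64           # assumed size of the sparse hidden code
SPARSITY_WEIGHT = 1e-3

class SparseAutoencoder(nn.Module):
    def __init__(self, n_in=N_SENSORS, n_hidden=HIDDEN):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

def train_step(model, optimizer, batch):
    """One step on fully observed snapshots (rows = timestamps, cols = nodes)."""
    optimizer.zero_grad()
    recon, code = model(batch)
    loss = nn.functional.mse_loss(recon, batch) + SPARSITY_WEIGHT * code.abs().mean()
    loss.backward()
    optimizer.step()
    return loss.item()

def impute(model, snapshot, observed_mask):
    """Zero-fill missing readings, reconstruct, and keep the observed values."""
    with torch.no_grad():
        recon, _ = model(snapshot * observed_mask)
    return torch.where(observed_mask.bool(), snapshot, recon)

# Assumed usage:
# model = SparseAutoencoder()
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
```

At imputation time only the observed entries are trusted; the reconstruction, which encodes correlations between co-observed nodes, fills the gaps.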

Cited by 17 publications (5 citation statements); references 5 publications. All five citation statements are of the "mentioning" type, from publications appearing between 2017 and 2022.

“…It is straightforward to implement and simple to understand MLA. KNN is one of the top MLAs [80], and data imputation algorithms based on KNN are widely used [81][82][83][84]. More complex and computationally extensive supervised learning algorithms than KNN, such as SVM, are also widely used for data imputation and handle both linear and nonlinear data efficiently [80,85].…”
Section: Data Imputation (mentioning)
confidence: 99%
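
The excerpt above points to KNN-based imputation as a widely used, simple baseline. A minimal sketch with scikit-learn's KNNImputer, assuming a timestamps-by-sensors matrix with NaN marking missing entries (the data values and neighbor count are illustrative):

```python
# Illustrative KNN imputation over a sensor-readings matrix
# (rows = timestamps, columns = sensor nodes); NaN marks missing entries.
import numpy as np
from sklearn.impute import KNNImputer

readings = np.array([
    [21.3, 20.9, np.nan, 22.1],
    [21.5, np.nan, 21.8, 22.0],
    [21.1, 20.7, 21.6, np.nan],
    [21.4, 20.8, 21.7, 22.2],
])

# Each missing entry is filled from the k most similar rows
# (timestamps whose observed readings are closest).
imputer = KNNImputer(n_neighbors=2, weights="distance")
completed = imputer.fit_transform(readings)
print(completed)
```
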
“…1. https://github.com/pytorch/fairseq/tree/main/examples/wav2vec
2. https://huggingface.co/microsoft/deberta-large
3. https://github.com/zengqunzhao/MA-Net
AE [66] is widely utilized in incomplete multimodal learning [33], [67]. It leverages autoencoders to impute missing data from partially observed input.…”
Section: Baselines (mentioning)
confidence: 99%
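
The AE baseline described above imputes missing data from partially observed input. As a rough, hedged sketch of that idea in a multimodal setting (not the cited baseline's code), the snippet below zero-fills the missing modality's feature block, reconstructs the concatenated features with a generic pretrained autoencoder, and slices out the imputed block; the modality dimensions and the `pretrained_ae` callable are assumptions.

```python
# Illustrative cross-modal imputation with a generic autoencoder over
# concatenated modality features; the missing block is zero-filled and then
# read back from the reconstruction.
import torch

# Assumed feature sizes for three modalities.
DIMS = {"audio": 512, "text": 1024, "video": 342}
OFFSETS, start = {}, 0
for name, d in DIMS.items():
    OFFSETS[name] = (start, start + d)
    start += d
TOTAL = start

def impute_modality(pretrained_ae, features, missing):
    """features: dict modality -> 1-D tensor (None or absent if missing)."""
    full = torch.zeros(TOTAL)
    for name, (lo, hi) in OFFSETS.items():
        if name != missing and features.get(name) is not None:
            full[lo:hi] = features[name]
    with torch.no_grad():
        # pretrained_ae is an assumed, already-trained autoencoder callable.
        recon = pretrained_ae(full.unsqueeze(0)).squeeze(0)
    lo, hi = OFFSETS[missing]
    return recon[lo:hi]   # imputed feature block for the missing modality
```
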
“…A more advanced approach is based on the autoencoder as a specific kind of neural network aiming to reconstruct inputs on its outputs. Here, one of the most commonly used models is the denoising autoencoder (DAE) [6], e.g., [16,17,18,19,20]. Typically, they are used in a discriminative way (see [19] for difference), meaning they impute a single value, which is deterministic once the network is trained.…”
Section: Related Work (mentioning)
confidence: 99%
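
The excerpt distinguishes discriminative DAE imputation, which yields a single deterministic value once the network is trained, from probabilistic alternatives. A minimal sketch of that regime, assuming masking noise as the corruption and a plain MSE reconstruction objective (architecture and hyperparameters are illustrative):

```python
# Illustrative denoising autoencoder: train by randomly masking entries of
# complete vectors and reconstructing them; at test time a single forward
# pass yields one deterministic imputation per missing entry.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_features, n_hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_features),
        )

    def forward(self, x):
        return self.net(x)

def train(model, loader, epochs=10, mask_prob=0.2, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for clean in loader:                        # complete training vectors
            mask = (torch.rand_like(clean) > mask_prob).float()
            corrupted = clean * mask                # masking-noise corruption
            loss = nn.functional.mse_loss(model(corrupted), clean)
            opt.zero_grad()
            loss.backward()
            opt.step()

def impute(model, x, observed_mask):
    # Discriminative use: one deterministic reconstruction, no sampling.
    with torch.no_grad():
        recon = model(x * observed_mask)
    return torch.where(observed_mask.bool(), x, recon)
```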