2017
DOI: 10.1109/jsen.2017.2730226

Device-Free Localization via Dictionary Learning with Difference of Convex Programming

Cited by 15 publications (5 citation statements)
References 39 publications

“…For single-target localization, fingerprints can be exhaustively collected, but for multi-target localization, building the database becomes impractical (Sabek and Youssef 2012; Sabek, Moustafa Youssef, and Vasilakos 2015). Some methods address this issue by training on data from a single person and testing on multiple people, achieving good results by using probabilistic classification models (Xu et al. 2012), sequential counting and parallel localization of each target (Xu et al. 2013), cross-calibration (Sabek and Youssef 2012), conditional random fields (Sabek, Moustafa Youssef, and Vasilakos 2015), and dictionary learning (Li et al. 2017). However, because these methods are trained on data from a single person, they assume target sparsity, which limits their accuracy when targets are close together.…”
Section: Related Work
confidence: 99%
“…Here, 442 atoms redundantly represent 34 individual locations (13 atoms per location). From the phase-transition graph of sparse coding [45], we learn that a low measurement dimension m can reduce the success rate of recovering the accurate sparse solution, and high coherence among different signals can make recovery even harder.…”
Section: Data Preprocessing
confidence: 99%
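
The dependence on the measurement dimension m and on dictionary coherence mentioned in this excerpt can be made concrete with a small sketch. The Python snippet below is our own minimal illustration, not the paper's code: it draws a random dictionary with n = 442 unit-norm atoms (the dictionary, the dimension m, and the sparsity level are assumed placeholders), reports the mutual coherence, and attempts to recover a synthetic sparse code with orthogonal matching pursuit.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
m, n, k = 60, 442, 3  # measurements, atoms, active atoms (illustrative values)

# Random dictionary with unit-norm columns, a stand-in for a learned RSS dictionary.
D = rng.standard_normal((m, n))
D /= np.linalg.norm(D, axis=0)

# Mutual coherence: largest off-diagonal |<d_i, d_j>|; high coherence hurts recovery.
G = np.abs(D.T @ D)
np.fill_diagonal(G, 0.0)
print("mutual coherence:", G.max())

# Synthetic k-sparse code and its noiseless measurement.
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0
y = D @ x_true

# Greedy sparse coding; recovery becomes less reliable as m shrinks or coherence grows.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(D, y)
print("support recovered:", set(np.flatnonzero(omp.coef_)) == set(np.flatnonzero(x_true)))
```

Re-running with a smaller m (say 20) makes recovery fail more often, which is the phase-transition effect the excerpt refers to.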
“…Employing a different regularizer yields a different type of sparse coding algorithm. Among the various kinds of regularizers [11][12][13], the most widely used is the ℓ1 norm, which has been popularly employed in many DFL-related works [10,14]. Usually, since the indices of the elements of the sparse solution are associated with the grid IDs, when the ℓ1 norm is used as the regularizer the target is located by selecting the maximum entry of the sparse solution.…”
Section: Introduction
confidence: 99%
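
As a concrete illustration of this ℓ1-based localization step, the following sketch (our illustration under simplified assumptions, not the cited implementation) solves a lasso problem against a dictionary whose columns correspond to grid cells and reads the target's grid ID off the largest coefficient; the dictionary D, the measurement y, and the weight alpha are hypothetical placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso

def locate_target_l1(D, y, alpha=0.01):
    """Sparse coding with an l1 regularizer (scikit-learn's lasso scaling):
    min_x (1/2m)||y - D x||^2 + alpha ||x||_1, then map the largest
    coefficient back to its grid ID.

    D : (m, n) dictionary, one column per grid cell (assumed layout)
    y : (m,) RSS measurement vector
    """
    lasso = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=10000)
    lasso.fit(D, y)
    x = lasso.coef_  # sparse solution; element index i <-> grid ID i
    return int(np.argmax(x)), x

# Toy usage with a random unit-norm dictionary (illustrative only).
rng = np.random.default_rng(1)
m, n = 40, 100
D = rng.standard_normal((m, n))
D /= np.linalg.norm(D, axis=0)
true_grid = 37
y = D[:, true_grid]  # measurement produced by a target on grid 37
grid_id, _ = locate_target_l1(D, y)
print("estimated grid:", grid_id)
```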
“…In contrast to the related works [14,15] and our previous work [10], which employ the ℓ1 norm, in this paper we propose a new regularizer based on the ℓ2,1 norm to measure block sparsity, so that the DFL algorithm remains robust in challenging environments. That is, the multiple samples collected when an obstacle occupies a given grid are treated as one group in the sensing matrix.…”
Section: Introduction
confidence: 99%
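
To make the ℓ2,1 block-sparsity idea concrete, here is a minimal proximal-gradient (ISTA-style) sketch under our own simplified assumptions, not the paper's exact algorithm: the dictionary columns are partitioned into equal-size groups, one group of samples per grid, and the proximal operator of the ℓ2,1 norm (group soft-thresholding) zeroes whole groups at once. The group size, step size, and weight lam are illustrative.

```python
import numpy as np

def prox_l21(x, groups, t):
    """Group soft-thresholding: the prox of t * sum_g ||x_g||_2."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > t:
            out[g] = (1.0 - t / norm) * x[g]
    return out

def block_sparse_code(D, y, groups, lam=0.1, n_iter=500):
    """ISTA for min_x 0.5||y - D x||^2 + lam * sum_g ||x_g||_2 (l2,1-regularized)."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1/L, with L the gradient's Lipschitz constant
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        x = prox_l21(x - step * grad, groups, step * lam)
    return x

# Toy usage: 20 grids with 5 samples each, i.e. 100 grouped atoms (assumed grouping).
rng = np.random.default_rng(2)
m, n_grids, g_size = 50, 20, 5
D = rng.standard_normal((m, n_grids * g_size))
D /= np.linalg.norm(D, axis=0)
groups = [np.arange(g * g_size, (g + 1) * g_size) for g in range(n_grids)]
y = D[:, groups[7]] @ np.ones(g_size)  # energy concentrated on grid 7's group
x = block_sparse_code(D, y, groups)
print("estimated grid:", int(np.argmax([np.linalg.norm(x[g]) for g in groups])))
```

Selecting the grid whose group carries the largest ℓ2 energy replaces the single-coefficient argmax of the ℓ1 case, which is what lets the block-sparse formulation exploit multiple samples per grid.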