2022
DOI: 10.1016/j.asoc.2022.108624

A hierarchical auxiliary deep neural network architecture for large-scale indoor localization based on Wi-Fi fingerprinting


Cited by 22 publications (13 citation statements)
References 14 publications
“…In this section, to evaluate the influence of differences in datasets from different months on the robustness of the model in a dynamic environment, we use each month's dataset to validate the VITL algorithm and compare it with conventional machine learning and deep learning approaches. For comparison we used five conventional machine learning algorithms, SVM [22], KNN [23], RF [24], DT [25], and GNB [26]; the best-performing configurations from [44] and [45]; a baseline neural network comprising two fully connected hidden layers with 128 and 68 nodes; and deep learning algorithms including CNN [27], C-FNN1, HADNN1 [46], and rrifloc [47]. In Figure 5, the DT algorithm shows poor robustness relative to the other conventional machine learning algorithms, such as GNB, after the ninth month; the remaining conventional machine learning algorithms perform slightly better but also begin to fluctuate more from the ninth month onward.…”
Section: Algorithmic Comparison of VITL and Conventional Machine Learning...
confidence: 99%
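The baseline network mentioned above can be illustrated with a minimal numpy sketch. Only the two fully connected hidden layers of 128 and 68 nodes come from the text; the 520 RSSI inputs and 13 output classes are illustrative assumptions, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 520 RSSI inputs (one per access point), 13 classes.
# Only the 128- and 68-node hidden layers are stated in the text.
n_in, n_h1, n_h2, n_out = 520, 128, 68, 13

W1 = rng.standard_normal((n_in, n_h1)) * 0.01
W2 = rng.standard_normal((n_h1, n_h2)) * 0.01
W3 = rng.standard_normal((n_h2, n_out)) * 0.01

def forward(x):
    """Forward pass of the two-hidden-layer baseline (128 and 68 nodes)."""
    h1 = np.maximum(0.0, x @ W1)   # ReLU
    h2 = np.maximum(0.0, h1 @ W2)  # ReLU
    logits = h2 @ W3
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # softmax over classes

x = rng.standard_normal((4, n_in))  # a batch of 4 fingerprint vectors
p = forward(x)
```

Each row of `p` is a probability distribution over the assumed class set; a real baseline would of course be trained on labeled fingerprints first.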
“…In 2022, Wang et al. designed a novel DNN-based indoor positioning framework called CHISEL, which combines a convolutional encoder and a CNN classifier [37]. Jaehoon Cha et al. proposed a hierarchical auxiliary deep neural network called HADNN, which uses consecutive feedforward networks to identify the building and estimate the floor coordinates [38].…”
Section: Related Work
confidence: 99%
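The hierarchical idea behind this family of models — classify the building first, then estimate the floor conditioned on that prediction — can be sketched as follows. This is not the actual HADNN architecture from [38]; the shared trunk, the layer sizes, and the per-building floor heads are all illustrative assumptions with untrained random weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes only; see [38] for the real architecture.
n_in, n_hidden, n_buildings, n_floors = 520, 64, 3, 5

W_trunk = rng.standard_normal((n_in, n_hidden)) * 0.01
W_bldg = rng.standard_normal((n_hidden, n_buildings)) * 0.01
# One auxiliary floor head per building, sharing the same trunk features.
W_floor = rng.standard_normal((n_buildings, n_hidden, n_floors)) * 0.01

def predict(x):
    """Two-stage prediction: building first, then floor given the building."""
    h = np.maximum(0.0, x @ W_trunk)               # shared trunk (ReLU)
    building = int(np.argmax(h @ W_bldg))          # stage 1: building ID
    floor = int(np.argmax(h @ W_floor[building]))  # stage 2: floor head
    return building, floor

b, f = predict(rng.standard_normal(n_in))
```

Conditioning the floor head on the predicted building keeps each floor classifier small and lets it specialize, which is the main appeal of the hierarchical layout.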
“…To further evaluate the performance of our improved DAE and enhanced DNN algorithms, we compare them with state-of-the-art deep learning indoor positioning algorithms, including SAEDNN [34], CNNLoc [35], CCpos [36], CHISEL [37], CHISEL-DA [37], and HADNN [38]. The results are shown in Table 4 and Figure 8, respectively.…”
Section: Performance Evaluation
confidence: 99%
“…Moreover, KNN relies on stationary signal information and does not handle temporal dynamics well, further limiting its effectiveness. In contrast, deep neural networks (DNNs) can address these issues in large-scale indoor localization [13], [14]: they model complex relationships between the input features and output labels and can thereby handle the spatial variability and dynamic signals encountered in large-scale multi-building and multi-floor indoor localization. In addition to the classical feedforward neural networks (FNNs) used in earlier works (e.g., [13]), more advanced DNNs such as convolutional neural networks (CNNs) [15], [16] and recurrent neural networks (RNNs) [17] are employed as well, due to their improved robustness and generalization capability.…”
Section: Introduction
confidence: 99%
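For contrast with the DNN approaches, the classical k-NN fingerprinting baseline criticized above reduces to a nearest-neighbor search in RSSI space. A toy sketch with synthetic data (the radio-map size, AP count, and coordinate ranges are all assumptions; real radio maps are far larger):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy radio map: 100 reference fingerprints over 20 APs, each with a
# known 2-D position. All values here are synthetic.
fingerprints = rng.uniform(-100.0, -30.0, size=(100, 20))  # RSSI in dBm
positions = rng.uniform(0.0, 50.0, size=(100, 2))          # metres

def knn_locate(query, k=3):
    """Classical k-NN fingerprinting: average the positions of the k
    reference fingerprints nearest to the query in RSSI space."""
    d = np.linalg.norm(fingerprints - query, axis=1)
    nearest = np.argsort(d)[:k]
    return positions[nearest].mean(axis=0)

# Locate a noisy re-observation of the first reference point.
est = knn_locate(fingerprints[0] + rng.normal(0.0, 1.0, 20))
```

The stationarity limitation noted in the excerpt is visible here: the radio map is fixed at collection time, so any later drift in the RSSI distributions degrades the distance ranking with no mechanism to adapt.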