2022
DOI: 10.1109/tvt.2022.3149711
A New Framework for Multi-Hop ABS-Assisted 5G-Networks With Users’ Mobility Prediction

Cited by 13 publications (10 citation statements)
References 47 publications
“…It enhances the prediction accuracy by implementing the attention block in the encoder, at the cost of elevated computation. Another work exploits the encoder-decoder architecture of the Transformer network model to predict users' mobility, which is used as an input for maximizing the coverage of Aerial Base Stations by optimizing their positioning [45]. However, the performance of these models in terms of prediction error and accuracy is still not sufficiently reliable and consistent to warrant their inclusion in live wireless networks.…”
Section: Multi-output Approach For Multi-step Prediction
confidence: 99%
“…For example, AP ID 3 is converted into vector [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0] and 5 into vector [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]. This representation of elements is adopted for the comparison models: the GRU-Attention (GRU-ATTN) model [44] and the Transformer Network (TN) model [45].…”
Section: B Data Collection and Preprocessing
confidence: 99%
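The one-hot mapping described in this citation (an AP ID becoming an indicator vector) can be sketched as below; the function name and the assumption of 12 access points (inferred from the 12-element vectors in the quote) are illustrative, not taken from the cited paper's code:

```python
def one_hot(ap_id, num_aps=12):
    """Encode a 1-indexed AP ID as a one-hot vector of length num_aps."""
    vec = [0] * num_aps
    vec[ap_id - 1] = 1  # single 1 at the position corresponding to the ID
    return vec

# AP ID 3 -> [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
# AP ID 5 -> [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
```

Each ID thus maps to a sparse vector with a single nonzero entry, the usual way categorical identifiers are fed to sequence models such as GRU or Transformer networks.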
“…An epoch is the number of times that the learning algorithm works through the entire training dataset. The results of three-, five-, and seven-step-ahead predictions for the SFI model and the ED model are depicted in Fig. 8 as a function of input sequence length, and compared against two state-of-the-art models, GRU-ATTN [44] and TN [45]. For the comprehensive analyses, the results of CMD and OMD are illustrated separately in Figs.…”
Section: Epoch
confidence: 99%
“…However, despite the potential features of UmBS, its deployment in real-world scenarios also faces challenges. The main challenges include the following: estimation of ground UEs' position information [6,7], association of UEs to their serving UmBS [8], association of the UmBS to the core network [9], resource allocation [10,11], channel characterization [12], energy optimization [13], and trajectory optimization [14,15]. In particular, two issues are major and significantly affect the design of the UmBS deployment algorithm.…”
Section: Introduction
confidence: 99%