2020 14th International Conference on Ubiquitous Information Management and Communication (IMCOM)
DOI: 10.1109/imcom48794.2020.9001672

Next Point-of-Attachment Selection Based on Long Short Term Memory Model in Wireless Networks

Cited by 6 publications (5 citation statements) | References 8 publications
“…The two proposed models are implemented using the Keras and CUDA libraries on a hardware environment consisting of an Intel i7 CPU, 64 GB RAM, and an RTX 2080 GPU. The dataset is divided into training and testing data with a 70:30 ratio, to provide a comprehensive performance evaluation while maintaining enough data for training [8], [53]. All four models are trained for 300 epochs (110 epochs for OMD) using the training data with a batch size of 100.…”
Section: Results and Analyses (mentioning)
confidence: 99%
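The quoted setup (a 70:30 train/test split and a batch size of 100) can be sketched as follows; the helper names `split_dataset` and `make_batches` are illustrative and not from the cited paper, which uses Keras for the actual training loop:

```python
def split_dataset(samples, train_ratio=0.7):
    """Split samples into training and testing sets at the given ratio."""
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]

def make_batches(samples, batch_size=100):
    """Group training samples into fixed-size batches (the last may be smaller)."""
    return [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]

# With 1000 samples, a 70:30 split yields 700 training and 300 testing samples,
# and the training set forms 7 batches of 100.
data = list(range(1000))
train, test = split_dataset(data)
batches = make_batches(train)
```

In Keras itself, the equivalent would be passing `batch_size=100` and `epochs=300` to `Model.fit`; the sketch above only makes the arithmetic of the quoted configuration explicit.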
“…For example, when the sequences ⟨s1, s2⟩ and ⟨s1, s2, ..., s5⟩, where sj is the AP identifier at index j, are given with the target length 3, the normalization process removes ⟨s1, s2⟩ and converts the longer sequence into the subsequences ⟨s1, s2, s3⟩, ⟨s2, s3, s4⟩, ⟨s3, s4, s5⟩. At the end of these three steps, we shuffle the order of the created sequences to prevent biased batches that limit the training performance of the models due to the similar trend of subsequences from the same mobile [8]. Algorithm 1 describes the path sequence manipulation process.…”
Section: B Data Collection and Preprocessing (mentioning)
confidence: 99%
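The sliding-window normalization described in the quote can be sketched as below; `normalize_sequences` is a hypothetical helper standing in for the paper's Algorithm 1, which is not reproduced here:

```python
import random

def normalize_sequences(sequences, target_len):
    """Drop sequences shorter than target_len; slide a window of width
    target_len over longer ones to produce fixed-length subsequences,
    then shuffle to avoid biased batches of similar subsequences."""
    out = []
    for seq in sequences:
        if len(seq) < target_len:
            continue  # e.g. <s1, s2> is removed when the target length is 3
        for i in range(len(seq) - target_len + 1):
            out.append(seq[i:i + target_len])
    random.shuffle(out)
    return out

# The example from the quote: target length 3 yields the three windows
# <s1,s2,s3>, <s2,s3,s4>, <s3,s4,s5> in shuffled order.
subseqs = normalize_sequences([["s1", "s2"],
                               ["s1", "s2", "s3", "s4", "s5"]], 3)
```

Shuffling after windowing matters because consecutive windows from one mobile's path overlap heavily; without it, a batch could consist almost entirely of near-duplicate subsequences.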
“…Deep learning (DL)/machine learning (ML) approaches have recently achieved promising results for the mobility management of gUEs [20], [21], [22]. In UAVs, a policy-gradient-based DRL approach utilizes the RSSI from gUEs to address the mobility management of UAV base stations for improved data rates in 3D space [23].…”
Section: Literature Review and Background (mentioning)
confidence: 99%