2021 17th International Conference on Mobility, Sensing and Networking (MSN)
DOI: 10.1109/msn53354.2021.00087
Deep Learning on Visual and Location Data for V2I mmWave Beamforming

Cited by 14 publications (5 citation statements)
References 21 publications
“…Given this possibly high amount of data and the increasing difficulty of the several tasks performed in the network, ML techniques have been adopted in several works [7,8], especially Deep Neural Networks (DNNs). DNNs are optimized for an application, in general, with supervised learning approaches, which may require a prohibitive amount of data, depending on the model being trained.…”
Section: Introduction
confidence: 99%
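The supervised-learning setup this statement refers to can be made concrete with a minimal sketch; the beam codebook size, input features, and network shape below are illustrative assumptions, not details taken from the cited works.

```python
# Minimal sketch (assumptions: 64-beam codebook, 2-D receiver position as input).
# Illustrates the generic supervised setup: a DNN mapping side information to a
# beam index, trained on labeled (position, best-beam) samples.
import torch
import torch.nn as nn

NUM_BEAMS = 64          # assumed codebook size
model = nn.Sequential(  # small MLP: position -> beam logits
    nn.Linear(2, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, NUM_BEAMS),
)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy labeled data standing in for (position, best-beam) pairs.
x = torch.randn(256, 2)
y = torch.randint(0, NUM_BEAMS, (256,))

for _ in range(10):      # a few gradient steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```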
“…They extended the ViWi-BT challenge dataset into two datasets, presenting blockage-prediction and object-detection datasets. In [12], Reus-Muns et al. utilized spatial information of mobile users, such as location, speed, and surrounding scene images; they proposed a channel-covariance-matrix method to estimate the moving region containing all possible user locations at any given time, and later introduced a deep-learning-based denoising method to reduce the error of inaccurately estimated user locations. In [13], Roy et al. presented a survey of deep-learning-based fusion frameworks that combine GPS location information with visual data.…”
Section: Introduction
confidence: 99%
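The location-plus-image fusion idea described in this statement can be sketched as a hypothetical late-fusion network; the architecture, layer sizes, and input shapes are assumptions for illustration and do not reproduce the methods of [12] or [13].

```python
# Hypothetical late-fusion sketch: a CNN branch for the scene image and an MLP
# branch for (x, y, speed), concatenated before a beam-classification head.
# All dimensions are illustrative assumptions only.
import torch
import torch.nn as nn

class FusionBeamNet(nn.Module):
    def __init__(self, num_beams: int = 64):
        super().__init__()
        self.img_branch = nn.Sequential(          # image -> 32-d embedding
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.loc_branch = nn.Sequential(          # (x, y, speed) -> 32-d embedding
            nn.Linear(3, 32), nn.ReLU(),
        )
        self.head = nn.Linear(32 + 32, num_beams)  # fused features -> beam logits

    def forward(self, image, loc):
        fused = torch.cat([self.img_branch(image), self.loc_branch(loc)], dim=1)
        return self.head(fused)

net = FusionBeamNet()
logits = net(torch.randn(4, 3, 64, 64), torch.randn(4, 3))  # batch of 4 samples
print(logits.shape)  # torch.Size([4, 64])
```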
“…The setting proposed by Klautau et al. [37] and Dias et al. [38] comes closest to ours, with GPS and LiDAR used as side information for LOS detection and for reducing the overhead in a vehicular setting. On the other hand, Muns et al. [39] use GPS and camera images to speed up beam selection, focusing on the design of the image preprocessing step and the fusion scheme.…”
Section: With Sensor Fusion
confidence: 99%
“…In [70], Klautau et al. [37] and Dias et al. [38] propose to reduce the sector search space using GPS and LiDAR sensors in vehicular settings. On the other hand, Muns et al. [72] use GPS and camera images to speed up beam selection. Nevertheless, none of this literature considers real-world experiments on live sensor data.…”
Section: Related Work
confidence: 99%
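The overhead-reduction idea shared by these two statements, using side information to shrink the beam sweep, can be illustrated with a small sketch; the scoring function, measurement stub, and codebook size below are placeholders and not the cited papers' algorithms.

```python
# Illustrative sketch: instead of exhaustively sweeping all beams, sweep only
# the top-k candidates ranked by a side-information-driven predictor
# (here a stand-in scoring function over GPS input).
import numpy as np

NUM_BEAMS = 64
rng = np.random.default_rng(0)

def predict_beam_scores(gps_xy):
    """Stand-in for a trained model's per-beam scores (assumption only)."""
    return rng.random(NUM_BEAMS)

def measure_rsrp(beam_idx):
    """Stand-in for an over-the-air measurement of one candidate beam."""
    return rng.normal(loc=-80.0, scale=5.0)

def select_beam(gps_xy, k=8):
    scores = predict_beam_scores(gps_xy)
    candidates = np.argsort(scores)[-k:]          # top-k predicted beams
    measured = {b: measure_rsrp(b) for b in candidates}
    return max(measured, key=measured.get)        # sweep only k beams, keep best

best = select_beam(gps_xy=(10.0, -3.5), k=8)
print(f"selected beam {best} after measuring 8 of {NUM_BEAMS} beams")
```

The design point is the reduced sweep: only k of NUM_BEAMS candidates are measured, which is where the reported overhead savings come from in this line of work.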