2020
DOI: 10.1109/tcomm.2020.2979124

Learn to Compress CSI and Allocate Resources in Vehicular Networks

Abstract: Resource allocation has a direct and profound impact on the performance of vehicle-to-everything (V2X) networks. In this paper, we develop a hybrid architecture consisting of centralized decision making and distributed resource sharing (the C-Decision scheme) to maximize the long-term sum rate of all vehicles. To reduce the network signaling overhead, each vehicle uses a deep neural network to compress its observed information that is thereafter fed back to the centralized decision making unit. The centralized…

Cited by 46 publications (24 citation statements); references 38 publications.
“…In contrast, the random baseline does not enjoy such intelligence, and the more vulnerable Link 3 eventually fails the task. We carry the multi-agent RL idea further in [90], where we restructure our network architecture substantially to enable centralized decision making with very low signaling overhead. In particular, each V2V transmitter constructs a DNN that learns to compress its local observation (measurement), which is then fed back to the central base station to serve as the input of the stored DQN.…”
Section: Joint Spectrum and Power Allocation: Application Example
Mentioning confidence: 99%
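The architecture described above — per-vehicle encoder DNNs compressing local observations into a low-dimensional feedback vector consumed by a central DQN — can be sketched in a few lines of NumPy. This is a minimal illustrative forward pass only: all dimensions, layer sizes, and variable names are assumptions, not values from the paper, and no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    """Tiny ReLU MLP forward pass over a list of (W, b) layers."""
    for W, b in weights[:-1]:
        x = np.maximum(0.0, x @ W + b)
    W, b = weights[-1]
    return x @ W + b

def init(sizes):
    """Random small-weight initialization for an MLP of the given sizes."""
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

# Illustrative dimensions (not from the paper).
OBS_DIM, CODE_DIM, N_AGENTS, N_ACTIONS = 16, 3, 4, 8

# Per-vehicle encoder DNNs: compress local observation to CODE_DIM values,
# which is all that is fed back to the base station (low signaling overhead).
encoders = [init([OBS_DIM, 32, CODE_DIM]) for _ in range(N_AGENTS)]

# Central DQN at the base station: consumes the concatenated feedback
# and scores N_ACTIONS candidate resource-sharing actions per vehicle.
central_dqn = init([CODE_DIM * N_AGENTS, 64, N_ACTIONS * N_AGENTS])

obs = rng.normal(size=(N_AGENTS, OBS_DIM))                  # local measurements
codes = np.concatenate([mlp(o, e) for o, e in zip(obs, encoders)])
q = mlp(codes, central_dqn).reshape(N_AGENTS, N_ACTIONS)
actions = q.argmax(axis=1)   # one centralized decision per vehicle
```

Each vehicle transmits only `CODE_DIM` values instead of its full `OBS_DIM` observation, which is the source of the signaling-overhead reduction claimed in the quoted passage.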
“…The number of source vehicles m and destination vehicles n is randomly chosen. The important simulation parameters are given as follows [22,23]. The carrier frequency is 2 GHz, the per-RB bandwidth is 1 MHz, the vehicle antenna height is 1.5 m, the vehicle antenna gain is 3 dBi, the vehicle receiver noise figure is 9 dB, the shadowing distribution is log-normal, the fast fading is Rayleigh, the pathloss model is LOS in WINNER+ B1, the shadowing standard deviation is 3 dB, and the noise power N0 is −114 dBm.…”
Section: P E
Mentioning confidence: 99%
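The simulation parameters quoted above can be collected into a single config object, and the stated noise power can be sanity-checked against the thermal noise floor over the per-RB bandwidth. This is a hedged sketch: the dict keys and function name are illustrative, not from the paper.

```python
import math

# Simulation parameters as quoted above; key names are illustrative.
SIM_PARAMS = {
    "carrier_freq_ghz": 2.0,
    "rb_bandwidth_hz": 1e6,          # per-RB bandwidth, 1 MHz
    "veh_antenna_height_m": 1.5,
    "veh_antenna_gain_dbi": 3.0,
    "veh_noise_figure_db": 9.0,
    "shadowing": "log-normal",
    "shadowing_std_db": 3.0,
    "fast_fading": "Rayleigh",
    "pathloss_model": "WINNER+ B1 (LOS)",
    "noise_power_dbm": -114.0,       # N0 as quoted
}

def thermal_noise_dbm(bandwidth_hz: float) -> float:
    """Thermal noise floor at ~290 K: -174 dBm/Hz + 10*log10(B)."""
    return -174.0 + 10.0 * math.log10(bandwidth_hz)
```

Note that `thermal_noise_dbm(1e6)` gives −174 + 60 = −114 dBm, matching the quoted N0 for a 1 MHz resource block before the 9 dB receiver noise figure is applied.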
“…Based on channel state information such as interference and physical distance, an effect function is developed to coordinate content-distribution vehicles with different connection times, thereby minimizing the average network delay. However, most of these algorithms rely on channel state information together with methods such as deep learning and game theory [17][18][19]; they cannot effectively address the increased service-interruption probability and low content-delivery efficiency caused by unstable communication links and short connection times between vehicles, or between vehicles and RSUs. Therefore, resource allocation algorithms based on social attributes have been proposed and widely used in mobile networks, such as vehicular networks.…”
Section: Introduction
Mentioning confidence: 99%