2019
DOI: 10.1109/lcomm.2019.2941482
Multi-Armed Bandit Learning for Cache Content Placement in Vehicular Social Networks

Cited by 15 publications (4 citation statements)
References 13 publications
“…Okada et al. [37] calculate the communication volume between users and caching nodes from the access probability and the number of communication hops, and select the caching location based on that volume. In the Internet of Vehicles (IoV), Bitaghsir et al. [38] proposed a cache placement algorithm based on multi-armed bandit learning, which selects the content to cache at a roadside unit (RSU) according to content popularity and then uses users' social characteristics to select the optimal caching path. This algorithm effectively reduces the load on each caching node.…”
Section: Placement Optimization
confidence: 99%
“…The influence of social relations on task offloading in a vehicular cloud network was studied to improve link stability, thus addressing vehicular QoS demands. Bitaghsir et al. [21] proposed a resource allocation algorithm based on multi-armed bandit learning that considered the impact of centrality, as a social attribute, on content delivery between vehicles, maximizing the probability that requesting vehicles successfully download cached data and improving the efficiency of content delivery.…”
Section: Introduction
confidence: 99%
“…These studies seldom consider the uncertainty of users' behaviours, so this paper introduces an online learning method called multi-armed bandits (MAB) to solve the problem. MAB has shown effectiveness and merit in air-conditioning demand aggregation [16] and many other sequential decision-making problems containing uncertain/unknown behavioural factors [17][18][19][20][21][22][23][24][25][26][27]. In reference [28], an adversarial MAB framework is applied to learn the signal response of thermostatically controlled loads for demand response in real time.…”
Section: Introduction
confidence: 99%
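The adversarial MAB framework mentioned in this statement makes no statistical assumptions about how rewards are generated, which suits behaviour that may shift over time. A minimal sketch of the standard EXP3 algorithm follows; the arm count, reward function, and parameter values are illustrative assumptions, not taken from reference [28]:

```python
import math
import random

def exp3(n_arms, rounds, gamma, reward_fn):
    """Sketch of EXP3: exponential weights with a uniform-exploration floor.

    `gamma` mixes in uniform exploration; `reward_fn(arm, t)` must return a
    reward in [0, 1]. Only the pulled arm's weight is updated, using an
    importance-weighted (unbiased) reward estimate.
    """
    weights = [1.0] * n_arms
    total = 0.0
    for t in range(rounds):
        wsum = sum(weights)
        probs = [(1 - gamma) * w / wsum + gamma / n_arms for w in weights]
        arm = random.choices(range(n_arms), weights=probs)[0]
        r = reward_fn(arm, t)
        total += r
        # importance-weighted update for the pulled arm only
        weights[arm] *= math.exp(gamma * r / (probs[arm] * n_arms))
    return total / rounds  # average reward achieved

random.seed(1)
# hypothetical environment: only arm 2 pays off, with probability 0.7
avg = exp3(5, 3000, 0.1,
           lambda a, t: 1.0 if a == 2 and random.random() < 0.7 else 0.0)
print(avg)
```

Because the exploration floor keeps every arm's pull probability at least `gamma / n_arms`, EXP3 retains regret guarantees even when rewards are chosen adversarially rather than drawn from fixed distributions.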