2016
DOI: 10.1109/mwc.2016.7553036

Reinforcement learning for resource provisioning in the vehicular cloud

Abstract: This article presents a concise view of vehicular clouds that incorporates the various vehicular cloud models proposed to date. Essentially, they all extend the traditional cloud and its utility computing functionalities across the entities in the vehicular ad hoc network (VANET). These entities include fixed road-side units (RSUs), on-board units (OBUs) embedded in vehicles, and the personal smart devices of drivers and passengers. Cumulatively, these entities yield abundant processing, storage…

Cited by 93 publications (36 citation statements)
References 15 publications
“…The proposed scheme is divided into two stages, which adapt to large-timescale factors, such as traffic density, and small-timescale factors, such as channel and queue states, respectively. In [76], the resource allocation problem in vehicular clouds has been modeled as an MDP, and reinforcement learning is leveraged to solve the problem so that resources are dynamically provisioned to maximize long-term benefits for the network and avoid myopic decision making. Joint management of networking, caching, and computing resources in virtualized vehicular networks has been further considered in [77], where a novel deep reinforcement learning approach has been proposed to deal with the highly complex joint resource optimization problem and shown to achieve good performance in terms of total revenues for the virtual network operators.…”
Section: Virtual Resource Allocation (mentioning)
confidence: 99%
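As a rough illustration of the MDP-plus-RL formulation this statement attributes to [76], the sketch below applies tabular Q-learning to a toy provisioning problem. Everything here is an assumption made for illustration: the discretized demand states, the action space of resource units, the revenue/idle-cost reward, and the random demand dynamics are not the cited paper's actual design.

```python
import random
from collections import defaultdict

# Hypothetical toy MDP: states = discretized resource-demand levels,
# actions = number of vehicular-cloud resource units to provision.
STATES = range(5)        # demand level 0 (idle) .. 4 (peak)
ACTIONS = range(5)       # resource units to provision
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = defaultdict(float)   # Q[(state, action)] -> long-term value estimate

def reward(demand, provisioned):
    """Assumed model: served demand earns revenue, idle units incur cost."""
    served = min(demand, provisioned)
    return 2.0 * served - 1.0 * (provisioned - served)

def step(state):
    # epsilon-greedy action selection over provisioning levels
    if random.random() < EPS:
        action = random.choice(list(ACTIONS))
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state = random.choice(list(STATES))  # stand-in demand dynamics
    r = reward(state, action)
    # one-step Q-learning update toward the discounted long-term return,
    # which is what discourages the myopic decisions mentioned above
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    return next_state

s = random.choice(list(STATES))
for _ in range(10_000):
    s = step(s)
```

The discount factor GAMMA is what encodes the "long-term benefits" objective: with GAMMA near 0 the learned policy would collapse to the myopic one the statement contrasts against.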
“…Applying RL in SDN: One main challenge SDN faces arises from highly dynamic traffic patterns, which require the network to be reconfigured frequently. It has been demonstrated that RL is an ideal tool for such a task [69,48,46,38,77,68,51,27,40,20]. For example, Salahuddin et al. [69] propose a roadside unit (RSU) cloud to enhance traffic flow and road safety.…”
Section: Software-defined Networking (mentioning)
confidence: 99%
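To make the reconfiguration idea above concrete, here is a small, assumption-laden sketch: an epsilon-greedy agent that periodically re-selects a network configuration, trading a switching cost against observed utility. The configuration names, the reconfiguration cost, and the utility model are invented for illustration and do not come from [69].

```python
import random

# Candidate configurations (e.g. controller/RSU-cloud placements) -- assumed names.
CONFIGS = ["conf_A", "conf_B", "conf_C"]
EPS, RECONF_COST = 0.1, 0.5

value = {c: 0.0 for c in CONFIGS}   # running average utility per configuration
count = {c: 0 for c in CONFIGS}
current = random.choice(CONFIGS)

def observe_utility(conf):
    """Stand-in for measured throughput/latency under this configuration."""
    base = {"conf_A": 1.0, "conf_B": 1.5, "conf_C": 0.8}[conf]
    return base + random.gauss(0, 0.2)

for epoch in range(1000):
    # epsilon-greedy choice of the next configuration
    if random.random() < EPS:
        choice = random.choice(CONFIGS)
    else:
        choice = max(CONFIGS, key=value.get)
    # switching to a new configuration pays a one-off reconfiguration cost
    u = observe_utility(choice) - (RECONF_COST if choice != current else 0.0)
    count[choice] += 1
    value[choice] += (u - value[choice]) / count[choice]  # incremental mean
    current = choice
```

Charging RECONF_COST only on a switch is what keeps the agent from thrashing between configurations as traffic fluctuates, which is the practical concern behind frequent SDN reconfiguration.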
“…This on-board computing facility in today's vehicles is spurring emerging applications such as infotainment and comfort services (e.g., on-board Internet access, vehicle-condition updates for maintenance, and live traffic-congestion information) [41,48].…”
Section: Introduction (mentioning)
confidence: 99%
“…With the technological advancements of the OBUs installed in today's vehicles, many researchers envision that vehicles may form a fog computing facility supporting different applications in the network-access segment [8,19,41]. Implementing vehicles as fog nodes has given rise to a novel approach to fog computing called Vehicular Fog Computing (VFC).…”
Section: Introduction (mentioning)
confidence: 99%