2022
DOI: 10.1007/s11227-022-04747-2
Optimized task scheduling and preemption for distributed resource management in fog-assisted IoT environment

Cited by 21 publications (7 citation statements) | References 39 publications
“…In the IoT-fog network, a task scheduling method was proposed to allocate resources to IoT tasks, which optimally selects the best resources to execute the tasks [41,42]. Ranumayee et al [43] used the evolutionary learning method to optimize energy, makespan, and cost and schedule tasks in the IoT-fog-cloud network.…”
Section: Related Work
confidence: 99%
“…During task allocation, the execution strategy corresponding to the solution x needs to be solved to meet the dependencies between tasks and environmental parameters, that is, the solution obtained should be in the feasible space. In the research of task allocation, the existing deep reinforcement learning methods usually regard it as an end-to-end learning task, and have designed different models and training methods [11][12][13][14][15][16][17]. However, by exploring each step of action, the model will not get a reward function value for completing the task until the whole scheduling task is completed, resulting in sparse rewards, large state space, and difficulty in training.…”
Section: Graph Convolution Fusion Scheduling Model
confidence: 99%
“…However, the subsequent effect on response time was not elaborated. A task scheduling and preemption model (OSCAR) used clustering, heap-based optimizer, and a deep Q-network to decrease response time, makespan, waiting time, and SLA violations in [53]. It also improved system throughput while satisfying deadlines.…”
Section: Reinforcement Learning (RL) Based Algorithms
confidence: 99%
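The statement above notes that OSCAR uses a deep Q-network to cut response time and makespan. As a minimal, illustrative sketch of the Q-learning idea behind such a scheduler — not the authors' implementation — the toy below learns to assign tasks to the fog node with the lowest expected response time; the node service rates, the single-state Q-table, and the M/M/1-style response-time formula are all hypothetical stand-ins:

```python
import random

random.seed(0)

# Three hypothetical fog nodes with fixed service rates (tasks/sec).
NODE_RATES = [1.0, 2.0, 4.0]
ACTIONS = range(len(NODE_RATES))

def response_time(node, queue):
    # Crude M/M/1-style approximation: (queued tasks + this task) / service rate.
    return (queue[node] + 1) / NODE_RATES[node]

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    q = [0.0 for _ in ACTIONS]  # single-state Q-values, one per node
    for _ in range(episodes):
        queue = [random.randint(0, 3) for _ in ACTIONS]  # random backlog
        # Epsilon-greedy action selection, as in a DQN's behavior policy.
        if random.random() < eps:
            a = random.choice(list(ACTIONS))
        else:
            a = max(ACTIONS, key=lambda i: q[i])
        r = -response_time(a, queue)  # reward: lower response time is better
        q[a] += alpha * (r + gamma * max(q) - q[a])  # Q-learning update
    return q

q = train()
best = max(ACTIONS, key=lambda i: q[i])  # node the learned policy prefers
```

After training, the greedy policy settles on the fastest node, mirroring (in miniature) how a Q-network learns to favor assignments that minimize response time.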