A reinforcement learning based approach for multi-projects scheduling in cloud manufacturing
2018
DOI: 10.1080/00207543.2018.1535205

Cited by 57 publications (14 citation statements)
References 38 publications
“…proposed a bio-inspired reference architecture for the cyber-physical production system to achieve the self-organization of the production system [48]. Chen et al. proposed a reinforcement learning-based task-assigning policy to support multi-projects scheduling in the CMfg system [49].…”
Section: Autonomous Collaboration in Manufacturing (mentioning)
confidence: 99%
“…However, it is difficult for RL methods (e.g., Q-learning) to take high-dimensional data as inputs and to solve problems with large state-action spaces. Chen et al. [30] used an RL-based assigning policy to obtain the non-dominated solution set in the action space, which yields better performance than Q-learning. Wang et al. [31] used correlated equilibrium to propose a multi-agent RL algorithm for makespan and cost optimization that guides the scheduling of multi-workflows over clouds.…”
Section: Dynamic Scheduling Under Uncertainty (mentioning)
confidence: 99%
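
The scaling limitation noted in that excerpt can be made concrete with a tabular Q-learning sketch: the value table holds one entry per state-action pair, so its size grows multiplicatively with every added state dimension. The environment sizes, hyperparameters, and update loop below are illustrative assumptions only, not the formulation used in any of the cited papers.

```python
import numpy as np

# Illustrative tabular Q-learning update (assumed setup, not the cited papers' method).
# With d discrete state features of k levels each and m actions, the table holds
# k**d * m entries, which is why tabular methods struggle as the state description
# of a scheduling problem grows.
k, d, m = 10, 6, 8                  # assumed: levels per feature, features, actions
alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount factor, exploration rate

Q = np.zeros((k,) * d + (m,))       # already 10**6 * 8 = 8,000,000 entries
rng = np.random.default_rng(0)

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if rng.random() < eps:
        return int(rng.integers(m))
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state):
    """Standard one-step Q-learning backup."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state + (action,)] += alpha * (td_target - Q[state + (action,)])

print(f"Q-table entries: {Q.size:,}")  # grows as k**d * m
```

Replacing the table with a function approximator (e.g., a neural network) is the usual way around this growth, which is why the larger scheduling problems in this literature tend to move beyond plain Q-learning.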
“…Often the contributions present how to utilise an RL agent to process specific operations waiting in the queue on a machine (Stricker et al. 2018) or to create complete production plans (Waschneck et al. 2018). Applying allocation strategies in a stochastic environment combined with reinforcement learning can achieve up to 32% improvement in individual cases compared to conventional methods (Chen, Fang, and Tang 2019). Furthermore, an alternative machine can be selected in case of failure (Zhao et al. 2019).…”
Section: Reinforcement Learning (mentioning)
confidence: 99%
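
As a rough illustration of the allocation pattern described in that excerpt, the sketch below dispatches each queued operation to a machine with an epsilon-greedy policy learned from observed processing times and falls back to the next-best available machine when the preferred one has failed. The class name, reward signal, and failure handling are assumptions made for this sketch; none of it reproduces the cited methods.

```python
import random
from collections import defaultdict

# Hypothetical epsilon-greedy dispatcher: picks a machine for each queued
# operation and reroutes to the next-best machine if the choice is unavailable.
class Dispatcher:
    def __init__(self, machines, eps=0.1, alpha=0.2):
        self.machines = machines
        self.eps, self.alpha = eps, alpha
        # value[(op_type, machine)]: learned estimate of negative processing time
        self.value = defaultdict(float)

    def rank(self, op_type):
        """Machines ordered from best to worst learned value for this operation type."""
        return sorted(self.machines, key=lambda m: self.value[(op_type, m)], reverse=True)

    def assign(self, op_type, available):
        """Choose a machine; skip failed/unavailable ones (fallback behaviour)."""
        if random.random() < self.eps:
            return random.choice(sorted(available))
        for m in self.rank(op_type):
            if m in available:
                return m
        raise RuntimeError("no machine available")

    def feedback(self, op_type, machine, processing_time):
        """Update the value estimate from the observed processing time (lower is better)."""
        key = (op_type, machine)
        self.value[key] += self.alpha * (-processing_time - self.value[key])

# Usage: assign a queued milling operation while machine "M2" is down.
d = Dispatcher(machines=["M1", "M2", "M3"])
m = d.assign("milling", available={"M1", "M3"})
d.feedback("milling", m, processing_time=12.5)
```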