2022
DOI: 10.1016/j.eswa.2022.117380

A reinforcement learning based RMOEA/D for bi-objective fuzzy flexible job shop scheduling

Cited by 86 publications (16 citation statements)
References 54 publications

“…In order to better illustrate the performance of the proposed ESPEA, the performances of other excellent multi-objective algorithms are compared with that of the proposed algorithm. These comparison algorithms include ROMA/D [24], MOEA/D [25], SPEA2 [19], NSGAII [26] and NSGAIII [27], all of which have been proven to have an excellent performance. In order to fairly compare the solving ability of different algorithms, all algorithms used the same cooperative initialization strategy and genetic operators and all algorithms were run 10 independent times.…”
Section: Performance Analysis Via An Algorithm Comparison
confidence: 99%
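The statement above describes an experimental protocol rather than an algorithm, but its fairness ingredients are concrete: a shared cooperative initialization strategy, shared genetic operators, and 10 independent runs per algorithm. Below is a minimal Python sketch of that protocol under stated assumptions; cooperative_init, run_algorithm, and the dummy scores are placeholders, not the actual ESPEA or MOEA/D-family implementations.

```python
# Minimal sketch of the comparison protocol: every algorithm starts from the
# same cooperatively initialized population and is run 10 independent times.
# cooperative_init and run_algorithm are placeholders, not real optimizers.
import random

POP_SIZE = 50
N_RUNS = 10
ALGORITHMS = ["ESPEA", "MOEA/D", "SPEA2", "NSGA-II", "NSGA-III"]

def cooperative_init(pop_size, rng):
    """Hypothetical shared initialization: random permutations as stand-ins."""
    return [rng.sample(range(pop_size), pop_size) for _ in range(pop_size)]

def run_algorithm(name, population, rng):
    """Placeholder for one optimizer run; returns a dummy quality score."""
    return sum(ind[0] for ind in population) / len(population) + rng.random()

results = {name: [] for name in ALGORITHMS}
for run in range(N_RUNS):
    init_rng = random.Random(run)                      # same seed for the shared start
    shared_pop = cooperative_init(POP_SIZE, init_rng)  # identical population for all
    for name in ALGORITHMS:
        algo_rng = random.Random(f"{name}-{run}")      # independent stream per algorithm
        results[name].append(run_algorithm(name, list(shared_pop), algo_rng))

for name, scores in results.items():
    print(f"{name}: mean score over {N_RUNS} runs = {sum(scores) / len(scores):.3f}")
```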
“…This algorithm integrates a new multi-objective optimization algorithm model for environment selection. Li et al [35] proposed a fully active scheduling decoding to reduce energy consumption in the flexible job scheduling problem.…”
Section: Related Work
confidence: 99%
“…However, operator selection is critical to MA performance. Existing MA frameworks always cycle through global search followed by local search, and random selection [33], polling selection [35], and confidence-based selection [36,37] are used for operator selection. All of these studies inherently rely on confidence levels.…”
Section: Introduction
confidence: 99%
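The statement above names three operator-selection schemes used inside memetic-algorithm (MA) frameworks: random selection, polling (round-robin) selection, and confidence-based selection. The following Python sketch illustrates those three schemes under stated assumptions; the operator names, the confidence update rule, and the dummy local-search outcome are hypothetical and not taken from the cited works.

```python
# Hypothetical sketch of three operator-selection schemes for a memetic
# algorithm: random, polling (round robin), and confidence-based selection.
import random

rng = random.Random(0)
operators = ["swap", "insert", "reverse"]    # placeholder local-search operators
confidence = {op: 1.0 for op in operators}   # running success estimate per operator

def select_random():
    return rng.choice(operators)

def select_polling(step):
    return operators[step % len(operators)]  # cycle through operators in order

def select_confidence():
    # roulette-wheel selection proportional to each operator's confidence
    total = sum(confidence.values())
    r, acc = rng.uniform(0, total), 0.0
    for op in operators:
        acc += confidence[op]
        if r <= acc:
            return op
    return operators[-1]

def update_confidence(op, improved, decay=0.9):
    # reward operators whose local search improved the incumbent solution
    confidence[op] = decay * confidence[op] + (1.0 if improved else 0.0)

for step in range(20):                       # skeleton of the global/local search cycle
    op = select_confidence()                 # or select_random() / select_polling(step)
    improved = rng.random() < 0.5            # stand-in for the local-search outcome
    update_confidence(op, improved)

print(confidence)
```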
“…The two scheduling algorithms, named the Threshold based Task scheduling algorithm (TBTS) and the Service level agreement-based Load Balancing (SLA-LB) algorithm [30], aim at makespan, gain cost, and resource utilization, and support different configurations of VMs for task scheduling while also scheduling dynamically according to user requirements. The MOEA/D based on RL (RMOEA/D) algorithm [31] uses the classical MOEA/D algorithm combined with the Q-learning algorithm in RL for adaptive selection of the neighborhood parameter T, with makespan and machine load as objectives, increasing the diversity of the population.…”
Section: Related Work
confidence: 99%
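The statement above summarizes the indexed paper's core idea: MOEA/D combined with Q-learning so that the neighborhood parameter T is selected adaptively while optimizing makespan and machine load. A minimal Python sketch of that idea follows, assuming an epsilon-greedy Q-learning agent, a coarse two-valued search state, and a placeholder per-generation improvement signal; it does not reproduce the actual state, action, and reward design of RMOEA/D.

```python
# Minimal sketch: a Q-learning agent that adaptively picks the MOEA/D
# neighborhood size T. Candidate T values, the state encoding, and the reward
# (a dummy improvement signal) are illustrative assumptions only.
import random

rng = random.Random(1)
T_CANDIDATES = [5, 10, 20]            # candidate neighborhood sizes (assumed)
STATES = ["improving", "stagnating"]  # coarse search-state descriptor (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = {(s, t): 0.0 for s in STATES for t in T_CANDIDATES}

def choose_T(state):
    """Epsilon-greedy choice of the neighborhood size for the next generation."""
    if rng.random() < EPSILON:
        return rng.choice(T_CANDIDATES)
    return max(T_CANDIDATES, key=lambda t: Q[(state, t)])

def run_generation(T):
    """Placeholder for one MOEA/D generation; returns a dummy improvement signal."""
    return rng.gauss(0.1 * (20 - T) / 20, 0.05)   # stand-in, not a real objective

state = "stagnating"
for gen in range(100):
    T = choose_T(state)
    improvement = run_generation(T)               # e.g. drop in scalarized makespan/load
    reward = improvement
    next_state = "improving" if improvement > 0 else "stagnating"
    best_next = max(Q[(next_state, t)] for t in T_CANDIDATES)
    Q[(state, T)] += ALPHA * (reward + GAMMA * best_next - Q[(state, T)])
    state = next_state

print({k: round(v, 3) for k, v in Q.items()})
```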