1993
DOI: 10.1016/0377-2217(93)90182-m

Benchmarks for basic scheduling problems

Abstract: In this paper, we propose 260 scheduling problems whose sizes are greater than those of the rare examples published so far. Such sizes correspond to the real dimensions of industrial problems. The types of problems we propose are the permutation flow shop, the job shop and the open shop scheduling problems. We restrict ourselves to basic problems: the processing times are fixed, and there are neither set-up times nor due dates nor release dates. The objective is the minimization of the makespan.
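To make the makespan objective concrete, the following minimal Python sketch (not from the paper; the function name, the job sequence and the toy data are illustrative assumptions) computes the makespan of a permutation flow shop schedule, i.e. the completion time of the last job on the last machine for a given job permutation.

# Minimal sketch: makespan of a permutation flow shop schedule.
# processing_times[j][i] is the fixed processing time of job j on machine i;
# every job visits machines 0..m-1 in the same order.
def flow_shop_makespan(processing_times, sequence):
    num_machines = len(processing_times[0])
    # completion[i] = completion time of the last scheduled operation on machine i
    completion = [0] * num_machines
    for job in sequence:
        for machine in range(num_machines):
            if machine == 0:
                start = completion[0]
            else:
                # wait for both the machine and the job's preceding operation
                start = max(completion[machine], completion[machine - 1])
            completion[machine] = start + processing_times[job][machine]
    return completion[-1]

# Toy 3-job, 3-machine instance (illustrative, not a benchmark instance).
times = [[3, 2, 4],
         [2, 5, 1],
         [4, 1, 3]]
print(flow_shop_makespan(times, sequence=[0, 1, 2]))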

Cited by 1,995 publications (1,104 citation statements) | References 6 publications
“…In particular, we consider a subset of the problems proposed by Taillard [19], consisting of four families of problems of sizes 4 × 4, 5 × 5, 7 × 7 and 10 × 10 and where each family contains 10 problem instances. From each of these crisp problem instances, we generate 10 fuzzy instances, so we have 400 fuzzy problem instances in total.…”
Section: Results (mentioning)
confidence: 99%
“…The tuning benchmark set contains the instances with 20 machines from the tuning benchmark set proposed by [9]. The testing benchmark set is an adaptation of the benchmark set proposed by Taillard [18], following what is traditionally done in the multi-objective PFSP literature [9,17]. Once the algorithms are tuned, we run them 10 times on each test instance, compute the average hypervolume, and compare the results using rank sum analysis and parallel coordinate plots.…”
Section: Methods (mentioning)
confidence: 99%
“…For the algorithms considered, the overall performance is assessed using a set of benchmark problems totaling 120 in number proposed by Taillard [12], and 250 in number proposed by Ruben Ruiz (2009). The processing times vary from 1 to 99 units and are generated using a random number generator for a given seed.…”
Section: Heuristic Algorithms Analyzed (mentioning)
confidence: 99%
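The instance generation scheme mentioned in the last excerpt can be sketched as follows (an illustrative Python approximation, not Taillard's original generator; the function name and parameters are assumptions): each processing time is drawn uniformly from the integers 1 to 99 with a seeded random number generator so that instances are reproducible.

import random

def generate_instance(num_jobs, num_machines, seed):
    # Taillard-style sketch: uniform integer processing times in [1, 99],
    # reproducible from the given seed (an approximation, not the original
    # linear congruential generator used in the benchmark definition).
    rng = random.Random(seed)
    return [[rng.randint(1, 99) for _ in range(num_machines)]
            for _ in range(num_jobs)]

# Example: a 20-job, 5-machine instance (sizes chosen for illustration).
instance = generate_instance(num_jobs=20, num_machines=5, seed=42)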