2016
DOI: 10.1016/j.knosys.2016.01.031
A tensor based hyper-heuristic for nurse rostering

Abstract: Nurse rostering is a well-known, highly constrained scheduling problem requiring the assignment of shifts to nurses while satisfying a variety of constraints. Exact algorithms may fail to produce high-quality solutions, hence (meta)heuristics are commonly preferred as solution methods, which are often designed and tuned for specific (groups of) problem instances. Hyper-heuristics have emerged as general search methodologies that mix and manage a predefined set of low-level heuristics while solving computationally hard prob…


Cited by 39 publications (28 citation statements)
References 45 publications (54 reference statements)
“…Which method to use for performance comparison of hyper-heuristics across multiple problem domains (distributions/benchmarks), and how that comparison should be done, are still open issues in hyper-heuristic research. Currently, there are two commonly used metrics in the area: Formula 1 ranking ([9], [12]) and µ_norm ([23], [24]). In this work, we preferred the latter (details are in Section IV-B), which is the more informative metric, taking into account the mean performance of algorithms using normalised performance-indicator values over a given number of trials on instances from multiple problem domains/benchmarks.…”
Section: Background, A. Related Work on Multiobjective Selection Hyper-heuristics (mentioning)
confidence: 99%
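As a rough illustration of the µ_norm idea described in this quotation, the sketch below computes a normalised mean-performance score: objective values are min-max normalised per instance across all algorithms and trials, then averaged per algorithm. The function name, normalisation details, and data shapes are assumptions for illustration, not the exact definition from [23], [24].

```python
import numpy as np

def mu_norm(results):
    """Normalised mean performance per algorithm (minimisation assumed).

    results: dict mapping algorithm name -> array of shape
             (n_instances, n_trials) of objective values.
    Returns: dict mapping algorithm name -> mean normalised value in
             [0, 1], where lower is better.
    """
    algs = list(results)
    stacked = np.stack([np.asarray(results[a], float) for a in algs])
    # Per-instance best/worst over all algorithms and all trials.
    best = stacked.min(axis=(0, 2), keepdims=True)
    worst = stacked.max(axis=(0, 2), keepdims=True)
    span = np.where(worst > best, worst - best, 1.0)  # guard degenerate instances
    norm = (stacked - best) / span
    return {a: float(norm[i].mean()) for i, a in enumerate(algs)}

# Hypothetical usage: two hyper-heuristics, 3 instances, 4 trials each.
rng = np.random.default_rng(0)
scores = mu_norm({"HH-A": rng.random((3, 4)), "HH-B": rng.random((3, 4))})
```

Under this reading, a lower score indicates better average normalised performance across the benchmark set.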
“…The chosen heuristics logically form a chain, i.e., a heuristic sequence, as the search progresses. Although there are previous studies ([9], [48], [57], [58], [59]) using some notion of transition probabilities to keep track of the performance of heuristics invoked successively, none of them employed the same reinforcement learning scheme as the one we propose. More importantly, all the previously mentioned algorithms were tested on single-objective optimisation problems under a single-point-based search framework managing move operators rather than metaheuristics.…”
Section: B. Reinforcement Learning Scheme (mentioning)
confidence: 99%
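The transition-probability bookkeeping mentioned above can be pictured as a first-order scheme in which the score of invoking heuristic j right after heuristic i is reinforced when the move improves the solution. This is a generic, hypothetical sketch; it is neither the reinforcement learning scheme of the citing paper nor that of [9], [48], [57]-[59].

```python
import random

class MarkovHeuristicSelector:
    """Generic sketch: pick the next low-level heuristic from learned
    transition scores between successively invoked heuristics."""

    def __init__(self, n_heuristics, reward=1.0, penalty=0.5):
        self.n = n_heuristics
        # score[i][j]: how well heuristic j did when invoked right after i.
        self.score = [[1.0] * n_heuristics for _ in range(n_heuristics)]
        self.prev = random.randrange(n_heuristics)
        self.reward, self.penalty = reward, penalty

    def select(self):
        """Sample the next heuristic proportionally to the scores of
        transitions out of the previously applied heuristic."""
        row = self.score[self.prev]
        r, acc = random.uniform(0, sum(row)), 0.0
        for j, s in enumerate(row):
            acc += s
            if r <= acc:
                return j
        return self.n - 1

    def feedback(self, chosen, improved):
        """Reinforce a transition that improved the solution; otherwise
        decay it (floored so no transition dies out completely)."""
        if improved:
            self.score[self.prev][chosen] += self.reward
        else:
            self.score[self.prev][chosen] = max(
                0.1, self.score[self.prev][chosen] - self.penalty)
        self.prev = chosen
```

A search loop would call select(), apply the chosen heuristic, and report the outcome via feedback().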
“…While this approach relies on the C4.5 algorithm, in [98] it is improved by using a multilayer perceptron. Another approach is presented in [99], where a tensor-based online-learning selection hyper-heuristic is designed for nurse rostering. The proposed approach iterates over four consecutive stages: during the first and second stages, two tensors are constructed under different heuristic selection and move-acceptance methods; at the end of the second stage, each tensor is factorised and, using the information from both tensors, the heuristic space is partitioned; the third stage is a parameter-control phase for the heuristics; and the final stage performs the search, switching between heuristics periodically using appropriate heuristic parameter values.…”
Section: Global Hybridizations (mentioning)
confidence: 99%
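The four-stage cycle summarised above can be laid out as a skeleton. Everything here is hypothetical scaffolding: solve_phase is an assumed callback that runs the search and fills a 3-way log tensor, the tensor layout (invoked heuristic × preceding heuristic × improved/worsened) is only a plausible guess, and a rank-1 SVD approximation stands in for the factorisation actually used in [99].

```python
import numpy as np

def rank1_scores(tensor):
    """Stand-in for the factorisation step: rank-1 approximation via SVD
    of the heuristic-mode unfolding, yielding one score per heuristic.
    (Placeholder, not the factorisation method used in [99].)"""
    unfolded = tensor.reshape(tensor.shape[0], -1)
    u, s, _ = np.linalg.svd(unfolded, full_matrices=False)
    return np.abs(u[:, 0]) * s[0]

def tensor_hh_cycle(solve_phase, n_heuristics, n_cycles):
    """Hypothetical skeleton of the four-stage cycle described above.
    solve_phase(active_heuristics, params, log) is an assumed callback
    that runs the search and records invocation outcomes into `log`."""
    params = np.full(n_heuristics, 0.5)  # per-heuristic parameter values
    for _ in range(n_cycles):
        # Stages 1-2: build one tensor under each selection/acceptance setup.
        logs = [np.zeros((n_heuristics, n_heuristics, 2)) for _ in range(2)]
        for log in logs:
            solve_phase(list(range(n_heuristics)), params, log)
        # End of stage 2: factorise both tensors and partition the
        # heuristic space by comparing the per-heuristic scores.
        s0, s1 = (rank1_scores(t) for t in logs)
        partition = [np.flatnonzero(s0 >= s1), np.flatnonzero(s0 < s1)]
        # Stage 3: parameter control (an arbitrary placeholder update rule).
        params = np.clip(params + 0.05 * np.sign(s0 - s1), 0.1, 0.9)
        # Stage 4: search, switching periodically between the two subsets.
        for subset in partition:
            if subset.size:
                solve_phase(subset.tolist(), params,
                            np.zeros((n_heuristics, n_heuristics, 2)))
```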
“…To choose more reliable parameter values and make the optimisation process more automated, as shown in interaction 4, optimisation techniques have been introduced to improve a base optimisation algorithm, such as using a meta-optimisation algorithm at the high level to tune the parameters of a base algorithm [49], or embedding these parameters into the solution representation and adaptively altering them together with the solution [128]. From another perspective, ML techniques have also been introduced to tackle the same issues [11, 195], an approach that belongs to interaction 2. A typical optimisation process generates plenty of data, including solution states, associated objective values, etc.…”
Section: Introduction (mentioning)
confidence: 99%
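The "embedding these parameters into the solution representation" idea cited above ([128]) is classically illustrated by self-adaptation in evolutionary algorithms, where each individual carries its own mutation rate as an extra gene that evolves with the solution. The sketch below shows that generic pattern; it is not necessarily the exact scheme of [128].

```python
import random

def self_adaptive_mutate(individual):
    """One generic pattern for parameters embedded in the representation:
    the last gene is the individual's own mutation rate, perturbed first
    and then used to mutate the decision variables. Illustrative only."""
    *genes, rate = individual
    # Self-adapt the embedded rate (multiplicative perturbation, clamped).
    rate = min(0.9, max(0.01, rate * 2 ** random.uniform(-1.0, 1.0)))
    # Mutate decision variables with the freshly adapted rate.
    genes = [g + random.gauss(0.0, 1.0) if random.random() < rate else g
             for g in genes]
    return genes + [rate]

# Hypothetical usage: an individual is [x1, ..., xn, mutation_rate].
child = self_adaptive_mutate([0.3, -1.2, 4.0, 0.05])
```

Because the rate is perturbed before it is used, rates that produce improving offspring tend to survive selection along with the solutions they helped create.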