2020
DOI: 10.1609/aaai.v34i04.5721
Learning to Optimize Computational Resources: Frugal Training with Generalization Guarantees

Abstract: Algorithms typically come with tunable parameters that have a considerable impact on the computational resources they consume. Too often, practitioners must hand-tune the parameters, a tedious and error-prone task. A recent line of research provides algorithms that return nearly-optimal parameters from within a finite set. These algorithms can be used when the parameter space is infinite by providing as input a random sample of parameters. This data-independent discretization, however, might miss pockets of ne…
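As a rough sketch of the sampling-based approach the abstract describes (not the paper's actual procedure), the following Python illustrates tuning by data-independent discretization: sample candidate parameters uniformly at random, evaluate each on the training instances, and keep the empirically best one. The names run_algorithm, param_low, and param_high are hypothetical placeholders.

import random

def frugal_random_search(run_algorithm, param_low, param_high,
                         training_instances, num_samples=50, seed=0):
    """Sample parameters at random (a data-independent discretization)
    and return the one with the lowest average cost on the training set.

    run_algorithm(param, instance) is assumed to return a scalar cost,
    e.g. runtime; all names here are illustrative only.
    """
    rng = random.Random(seed)
    candidates = [rng.uniform(param_low, param_high) for _ in range(num_samples)]

    best_param, best_avg_cost = None, float("inf")
    for param in candidates:
        avg_cost = sum(run_algorithm(param, z) for z in training_instances) / len(training_instances)
        if avg_cost < best_avg_cost:
            best_param, best_avg_cost = param, avg_cost
    return best_param, best_avg_cost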


Cited by 11 publications (5 citation statements) · References 11 publications
“…We plot a similar curve for the test performance of the oracle algorithm selector, which always selects the optimal parameter setting from the portfolio. Specifically, for each portfolio size κ ∈ [10], let f*_κ be the oracle algorithm selector f*_κ(z) = argmin_{ρ ∈ {ρ_1, …, ρ_κ}} u_ρ(z). Given a test set S_t ∼ D^{N_t}, we define the average test performance of f*_κ as…”
Section: Methods
confidence: 99%
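To make the oracle selector from the excerpt above concrete, here is a minimal Python sketch, assuming a placeholder cost function u(rho, z) and a list-based portfolio; it illustrates the quoted definition and is not code from the cited paper.

def oracle_average_test_cost(u, portfolio, test_instances):
    """Average test cost of the oracle selector for one portfolio.

    For each test instance z, the oracle picks the parameter in the
    portfolio with the smallest cost u(rho, z); the per-instance minima
    are then averaged. u is an illustrative cost function.
    """
    total = 0.0
    for z in test_instances:
        total += min(u(rho, z) for rho in portfolio)
    return total / len(test_instances)

# Illustrative usage for growing portfolios of size 1..10, as in the excerpt:
# curve = [oracle_average_test_cost(u, portfolio[:k], test_set) for k in range(1, 11)]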
“…Our guarantees are configuration-procedure-agnostic: no matter how one tunes the parameters using the training set, we bound the difference between the resulting parameter setting's performance on average over the training set and its expected performance on unseen instances. A related line of research has provided learning-based algorithm configuration procedures with provable guarantees [12,27,28,44,45]. Unlike the results in this paper, their guarantees are not configuration-procedure-agnostic: they apply to the specific configuration procedures they propose.…”
Section: Introduction
confidence: 92%
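The quantity such a configuration-procedure-agnostic guarantee controls can be illustrated directly. The minimal sketch below, assuming a placeholder cost function u and a held-out sample standing in for unseen instances, measures the empirical version of that train-versus-unseen gap for whichever parameter setting the tuning procedure returned; the cited guarantees bound the expectation analogue of this gap regardless of how the parameter was chosen.

def empirical_generalization_gap(u, tuned_param, train_instances, holdout_instances):
    """Gap between average training cost and average held-out cost
    for a single tuned parameter setting. u(param, instance) is an
    illustrative cost function, not the paper's notation or code.
    """
    train_avg = sum(u(tuned_param, z) for z in train_instances) / len(train_instances)
    holdout_avg = sum(u(tuned_param, z) for z in holdout_instances) / len(holdout_instances)
    return abs(train_avg - holdout_avg)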
“…However, these methods are typically limited to specific CO problems in which a heuristic solution can be easily constructed, and scaling to large-size instances is an issue. On the other hand, since a wide range of constrained CO problems can be formulated into a MIP model, there has also been increasing interest in learning decision rules to improve MIP algorithms [7,[18][19][20][21][22]. While it is shown that this direction has a great potential to improve the state-of-the-art of MIP algorithms, convincing generalization performances, and transfer learning across instances have not been fully tackled yet.…”
Section: Representation Learning for CO
confidence: 99%