2016
DOI: 10.1007/978-3-662-53455-7_5
Regularized Cost-Model Oblivious Database Tuning with Reinforcement Learning

Cited by 18 publications (10 citation statements)
References 30 publications
“…Scholars have also applied reinforcement learning to other data-center task-scheduling problems. Basu et al. applied reinforcement learning to build cost models on standard online transaction processing datasets [22]. They modeled the execution of queries and updates as a Markov decision process whose states are database configurations, actions are configuration changes, and rewards are functions of the cost of the configuration change and of query and update evaluation.…”
Section: Reinforcement Learning-based Studies
confidence: 99%
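The MDP formulation described in that statement can be sketched with tabular Q-learning. This is a minimal, hypothetical illustration only: the state is a set of indexed columns, each action toggles one index, and the change cost and query cost are toy stand-ins, not the cost functions from the cited paper.

```python
import random

# Hypothetical sketch of the MDP in the citation above: states are
# database configurations (frozensets of indexed columns), each action
# toggles one index, and the reward is the negative sum of a stand-in
# configuration-change cost and a toy workload-evaluation cost.
COLUMNS = ("a", "b", "c")

def step(config, col):
    new_config = frozenset(config ^ {col})     # toggle one index
    change_cost = 1.0                          # stand-in build/drop cost
    query_cost = 10.0 - 2.0 * len(new_config)  # toy: indexes cut query cost
    return new_config, -(change_cost + query_cost)

def q_learning(episodes=200, horizon=10, alpha=0.5, gamma=0.9,
               eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {}  # maps (config, action) -> estimated value
    for _ in range(episodes):
        config = frozenset()
        for _ in range(horizon):
            if rng.random() < eps:             # epsilon-greedy exploration
                a = rng.choice(COLUMNS)
            else:
                a = max(COLUMNS, key=lambda c: q.get((config, c), 0.0))
            nxt, r = step(config, a)
            best_next = max(q.get((nxt, c), 0.0) for c in COLUMNS)
            old = q.get((config, a), 0.0)
            q[(config, a)] = old + alpha * (r + gamma * best_next - old)
            config = nxt
    return q
```

Because every reward is negative (there is always a change cost plus a residual query cost), all learned Q-values are negative; the agent simply learns which configurations minimize total cost.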
“…Past efforts have considered more general reinforcement learning (RL) for physical design tuning [25], [35]. Compared to most MAB approaches, deep RL invites over-parameterisation, which can slow convergence (see Figure 8), whereas MAB typically provides better convergence, simpler implementation, and safety guarantees via strategic exploration and knowledge transfer (see Section III).…”
Section: Why Not (General) Reinforcement Learning?
confidence: 99%
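The multi-armed-bandit view contrasted with deep RL above can be illustrated with UCB1. This is a sketch under assumed conditions: each arm stands for a candidate index, and the reward generators below are invented Gaussian speed-ups, not measurements from any cited system.

```python
import math
import random

# Hypothetical UCB1 sketch of MAB-style index tuning: each arm is a
# candidate index; the reward is an assumed, noisy workload speed-up.
def ucb1(arms, rounds=1000, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(arms)   # pulls per arm
    sums = [0.0] * len(arms)   # cumulative reward per arm
    for t in range(1, rounds + 1):
        if t <= len(arms):
            arm = t - 1        # play each arm once to initialize
        else:
            # pick the arm maximizing mean reward + exploration bonus
            arm = max(range(len(arms)),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        counts[arm] += 1
        sums[arm] += arms[arm](rng)
    return counts

# Toy arms: the third candidate index gives the best expected speed-up.
arms = [lambda r: r.gauss(0.2, 0.1),
        lambda r: r.gauss(0.5, 0.1),
        lambda r: r.gauss(0.8, 0.1)]
counts = ucb1(arms)
```

After enough rounds the best arm dominates the pull counts, which is the fast, low-risk convergence behaviour the citation attributes to MAB approaches relative to deep RL.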
“…Furthermore, this classifier incurs up to 10% of recommendation time, impacting recommendation cost in all cases, especially where recommendation cost already dominates the cost for PDTool (TPC-DS, IMDb). When it comes to tuning, the closest approaches employ variants of RL for index selection or partitioning [25], [35], [46] or configuration tuning [5]. [35] describes RL-based index selection, which depends solely on the recommendation tool for query-level recommendations and is affected by combinatorial explosion of decisions, both issues addressed in our work.…”
Section: Related Work
confidence: 99%
“…Specifically, unsupervised ML techniques can model the data distribution for cardinality estimation (CardEst) [14,39,41,42,46] and indexing [6,7,18,27]; supervised ML models can replace the cost estimator (CostEst) [25,34,35] and the execution scheduler [23,31]; and reinforcement learning methods solve decision-making problems such as configuration tuning [1,20,44] and join order selection (JoinSel) [12,22,24,29,43]. Motivation: despite these ML methods' promising results on each individual task, existing ML techniques in DBMSs do not explore the following transferabilities and will inevitably lead to impractical solutions and/or ineffective models.…”
Section: Introduction
confidence: 99%