Proceedings of the 2019 International Conference on Management of Data
DOI: 10.1145/3299869.3300085
An End-to-End Automatic Cloud Database Tuning System Using Deep Reinforcement Learning

Cited by 221 publications (180 citation statements)
References 34 publications
“…Therefore, Cao et al [19] develop two versions of an enhanced multi-objective simulated annealing approach to solve the configuration optimization problem with multiple hard constraints, while Wang et al [39] provide a control-theoretic approach to continuously tune a distributed application. Further, based on a reward function calculated from observations, an enhanced reinforcement learning approach is proposed in [20] to tune configurations online for web systems, while Zhang et al [21] design an end-to-end automatic cloud database tuning system using deep reinforcement learning. The most related work to our paper is [22].…”
Section: Black-box Methods (mentioning)
confidence: 99%
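The reward-driven tuning idea this passage describes — an agent perturbs configuration knobs, observes performance, and keeps changes that improve the reward — can be illustrated with a minimal hill-climbing sketch. The knob names and reward function below are hypothetical stand-ins, not the actual DRL agents of [20] or [21]:

```python
import random

def tune(initial_config, measure, steps=500, step_size=0.1, seed=0):
    """Hill-climbing stand-in for a reward-driven tuning agent:
    perturb one knob at a time, keep the change only if the
    measured reward improves."""
    rng = random.Random(seed)
    config = dict(initial_config)
    best_reward = measure(config)
    for _ in range(steps):
        knob = rng.choice(list(config))
        candidate = dict(config)
        candidate[knob] *= 1 + rng.uniform(-step_size, step_size)
        reward = measure(candidate)
        if reward > best_reward:
            config, best_reward = candidate, reward
    return config, best_reward

# Hypothetical reward: performance peaks when both knobs hit target values.
def reward(cfg):
    return -((cfg["buffer_pool_mb"] - 512) ** 2 + (cfg["io_threads"] - 8) ** 2)

best_cfg, best_r = tune({"buffer_pool_mb": 128.0, "io_threads": 2.0}, reward)
```

A real DRL tuner replaces the blind perturbation with a learned policy that maps observed database state to knob adjustments, so good moves transfer across workloads instead of being rediscovered each time.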
“…These baseline algorithms are also widely used to tune hyper-parameters for machine learning models, and they have been widely adopted as baseline black-box optimization algorithms in previous work such as [17], [21], [30]. In the following, we provide a brief introduction to each algorithm and, where necessary, give a short description of its hyper-parameter settings in our experiments.…”
Section: Baseline Algorithms (mentioning)
confidence: 99%
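The simplest such black-box baseline is random search: treat the system as a black box, sample configurations uniformly within per-parameter bounds, and keep the best one observed. A minimal sketch, where the objective is a toy stand-in for a measured performance metric rather than any benchmark from the cited work:

```python
import random

def random_search(objective, bounds, n_trials=200, seed=1):
    """Baseline black-box optimizer: sample configurations uniformly
    within per-parameter bounds and keep the best one seen."""
    rng = random.Random(seed)
    best_x, best_y = None, float("-inf")
    for _ in range(n_trials):
        x = {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        y = objective(x)
        if y > best_y:
            best_x, best_y = x, y
    return best_x, best_y

# Toy objective with its optimum at a = 3, b = -1.
obj = lambda x: -(x["a"] - 3.0) ** 2 - (x["b"] + 1.0) ** 2
best_x, best_y = random_search(obj, {"a": (-10, 10), "b": (-10, 10)})
```

Despite its simplicity, random search is a standard reference point: any model-based tuner is expected to find comparable configurations with far fewer objective evaluations.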
“…Table 1 compares the most relevant state-of-the-art approaches. Alvaro [16], OtterTune [1], and CDBTune [42] use several regressors to tune a set of parameters. However, they train the models to maximize a single objective, i.e., predicting the local optimal performance.…”
Section: Motivation and Challenges (mentioning)
confidence: 99%
“…In contrast, Ernest [35] is designed to predict any unknown parameter in the topology, but in practice it is capable of predicting only the vcore parameter. Yet, these regression models either suffer from accuracy deterioration due to overfitting and require many samples for model building (Gaussian Process [1] and multi-layer Neural Network [42]), or are simple but achieve unsatisfactory accuracy (Linear regression [16], Ernest [35]). We contribute a novel accurate model and develop an adaptive sampling scheme to mitigate the training cost, as follows.…”
Section: Motivation and Challenges (mentioning)
confidence: 99%
“…Finally, it proposes only local actions, since it generates tuning actions considering only parameters, and no other kind of fine-tuning action is considered during the action generation and selection process. This same strategy has also been extended to predict parameters in virtualized environments [56].…”
Section: Machine Learning Techniques (unclassified)