2019 45th Euromicro Conference on Software Engineering and Advanced Applications (SEAA)
DOI: 10.1109/seaa.2019.00032
Exploratory Performance Testing Using Reinforcement Learning

Abstract: Performance bottlenecks resulting in high response times and low throughput of software systems can ruin the reputation of the companies that rely on them. Almost two-thirds of performance bottlenecks are triggered on specific input values. However, finding the input values for performance test cases that can identify performance bottlenecks in a large-scale complex system within a reasonable amount of time is a cumbersome, cost-intensive, and time-consuming task. The reason is that there can be numerous combi…

Cited by 16 publications (30 citation statements) · References 18 publications (24 reference statements)
“…In our previous work [9], the selection of actions was restricted to a finite discrete action space, which means that at every time step t, the agent could either increase or decrease the value of a single input in the current state by a fixed amount in order to get the next state. Therefore, the agent had to go through numerous unrewarding states in order to get to the rewarding regions of the input space, which resulted in decreasing the overall bottleneck detection rate of the approach.…”
Section: Action Space
Confidence: 99%
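The finite discrete action space described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the step size, and the list-based state representation are all assumptions made for the example.

```python
# Sketch of a finite discrete action space: at each time step the agent
# picks exactly one input and either increases or decreases it by a
# fixed amount to produce the next state.

FIXED_STEP = 1  # assumed fixed increment; the paper does not specify its value


def enumerate_actions(num_inputs):
    """Each action is (input_index, delta); 2 * num_inputs actions in total."""
    return [(i, d) for i in range(num_inputs) for d in (+FIXED_STEP, -FIXED_STEP)]


def apply_action(state, action):
    """Produce the next state by changing a single input value."""
    index, delta = action
    next_state = list(state)
    next_state[index] += delta
    return next_state


state = [10, 5, 3]
actions = enumerate_actions(len(state))
print(len(actions))                      # 6 actions for a 3-input state
print(apply_action(state, actions[0]))   # [11, 5, 3]
```

With only one input changing by one fixed step per transition, many transitions are needed to cross a large input space, which illustrates why the agent passes through numerous unrewarding states before reaching rewarding regions.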
“…Table 1 lists several examples of the possible actions that the agent can select and how they modify the current state in order to produce the next state. In our previous work [9], we addressed only integer input parameters, but iPerfXRL can be applied to float inputs without any modification. Furthermore, iPerfXRL can easily be extended to support other types of inputs (e.g., string, categorical) by modifying the action and the input space accordingly.…”
Section: Action Space
Confidence: 99%
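The claim that the same actions apply to float inputs, and that other input types can be supported by adjusting the action space, can be illustrated with a minimal sketch. Everything here is hypothetical (the dict-based state, the step size, and the categorical "cycle" action are assumptions, not iPerfXRL's actual API):

```python
# Illustrative sketch: numeric increase/decrease actions work unchanged
# for float inputs, and a categorical input can be handled by adding an
# action that steps to the next allowed value.

FIXED_STEP = 0.5  # assumed step; addition works for both int and float inputs


def apply_numeric_action(state, key, delta):
    """Increase or decrease one numeric input by a fixed amount."""
    next_state = dict(state)
    next_state[key] += delta
    return next_state


def apply_categorical_action(state, key, choices):
    """Hypothetical extension: cycle a categorical input to its next value."""
    next_state = dict(state)
    next_state[key] = choices[(choices.index(state[key]) + 1) % len(choices)]
    return next_state


s = {"timeout": 2.5, "mode": "fast"}
print(apply_numeric_action(s, "timeout", +FIXED_STEP))       # timeout becomes 3.0
print(apply_categorical_action(s, "mode", ["fast", "safe"]))  # mode becomes "safe"
```

The design point is that the transition function is agnostic to the numeric type, so only non-numeric inputs require new action definitions.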