Proceedings of the 27th International Conference on Software Engineering (ICSE 2005), 2005.
DOI: 10.1109/icse.2005.1553572
Main effects screening: a distributed continuous quality assurance process for monitoring performance degradation in evolving software systems

Cited by 22 publications (24 citation statements). References 11 publications (3 reference statements).
“…We evaluated the main effects screening process via several industrial strength feasibility studies on ACE+TAO. Our results indicate that main effects screening can reliably and accurately detect key sources of performance degradation in large-scale systems with significantly less effort than conventional techniques [Skoll05].…”
Section: Ensuring Coherency and Reducing Redundancy in QA Activities
confidence: 87%
“…For example, our configuration model essentially defines a combinatorial object against which a wide variety of statistical tools can be applied. In another case study [Skoll05], we leveraged this feature to develop a DCQA process called main effects screening for monitoring performance degradations in evolving systems.…”
Section: Ensuring Coherency and Reducing Redundancy in QA Activities
confidence: 99%
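
Main effects screening treats each configuration option as a factor in a designed experiment and estimates how much flipping that option alone shifts the measured response. Below is a minimal Python sketch of that idea; the option names and the benchmark hook are hypothetical placeholders, not taken from the cited work.

```python
import itertools
import statistics

# Hypothetical binary configuration options; the real ACE+TAO space is far larger.
OPTIONS = ["inlining", "reactor_type", "mutex_impl", "logging"]

def run_benchmark(config):
    """Placeholder for a real latency benchmark of one configuration.

    `config` maps each option name to 0 (off/low) or 1 (on/high).
    """
    raise NotImplementedError("hook up a real benchmark here")

def main_effects(observations):
    """Estimate each option's main effect from (config, latency) pairs.

    An option's main effect is the mean response at its high setting
    minus the mean response at its low setting.
    """
    effects = {}
    for opt in OPTIONS:
        high = [y for cfg, y in observations if cfg[opt] == 1]
        low = [y for cfg, y in observations if cfg[opt] == 0]
        effects[opt] = statistics.mean(high) - statistics.mean(low)
    return effects

# Full factorial shown for clarity; a screening design (e.g. a fractional
# factorial) would cover the space with far fewer runs.
designs = [dict(zip(OPTIONS, levels))
           for levels in itertools.product([0, 1], repeat=len(OPTIONS))]
# observations = [(cfg, run_benchmark(cfg)) for cfg in designs]
# ranked = sorted(main_effects(observations).items(),
#                 key=lambda kv: abs(kv[1]), reverse=True)
```

Options with the largest absolute effects are the ones worth monitoring continuously; the rest can be held at defaults.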
“…The results indicated that (1) this process cheaply and correctly identifies the subset of options that are most important to system performance, (2) monitoring only these selected options can detect performance degradation quickly with an acceptable level of effort, and (3) alternative strategies with equivalent effort yield less reliable results. See [32,33] for more details. An interesting aspect of this approach is that by computing the key effects before changes occur, we cut down total benchmarking time from 2 days to 5 minutes, which is fast enough to make this part of the source code check-in process.…”
Section: Summary of Prior Work
confidence: 99%
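
The check-in step only needs to re-benchmark the cross-product of the few screened options, holding everything else at defaults, which is what makes a reduction from 2 days to 5 minutes plausible. A hedged sketch of that loop follows; the function and parameter names are illustrative, not from the paper.

```python
import itertools

def checkin_screen(run_benchmark, top_options, defaults, baseline, tolerance=0.05):
    """Benchmark only configurations that vary the screened options.

    `run_benchmark` times one configuration; `top_options` are the few
    options identified as performance-critical; all other options stay
    at `defaults`. `baseline` maps each tuple of top-option levels to
    its previously recorded latency. A run exceeding its baseline by
    more than `tolerance` flags a regression.
    """
    regressions = []
    for levels in itertools.product([0, 1], repeat=len(top_options)):
        cfg = dict(defaults)
        cfg.update(zip(top_options, levels))
        latency = run_benchmark(cfg)
        if latency > baseline[levels] * (1 + tolerance):
            regressions.append((cfg, latency))
    return regressions
```

With three screened options this is only eight benchmark runs per check-in, versus exhaustively covering the full configuration space.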
“…In prior work [13,24,34], we developed a prototype DCQA environment called Skoll [27] that improves upon earlier system approaches described in Section 1.1. In particular, Skoll provides an Intelligent Steering Agent (ISA) that guides the QA process across large configuration spaces by decomposing QA analyses (such as anomaly detection, QoS evaluation, and integration testing QA processes) into multiple tasks and then distributing/executing these tasks continuously across a grid of computing resources contributed by end-users and distributed developers around the world.…”
Section: Summary of Prior Work
confidence: 99%
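
The ISA's dispatch role can be pictured as a queue of QA tasks (here, configurations to exercise) drained by whatever compute resources are currently free. The sketch below is a single-process thread analogue under that assumption; the real Skoll grid spans volunteer machines worldwide and also adapts which tasks to schedule next based on results already returned.

```python
import queue
import threading

def steer(run_task, configs, n_workers=4):
    """Toy analogue of the ISA dispatch loop.

    Pending configurations sit in a queue; each worker repeatedly pulls
    one, runs the QA task, and records the outcome until the queue is
    empty.
    """
    tasks = queue.Queue()
    results = []
    lock = threading.Lock()
    for cfg in configs:
        tasks.put(cfg)

    def worker():
        while True:
            try:
                cfg = tasks.get_nowait()
            except queue.Empty:
                return
            outcome = run_task(cfg)
            with lock:
                results.append((cfg, outcome))

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```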
“…It could contain other sub-components, but such architectural information would not be used in this focus area. In our own research we have started by developing a system and approach currently called QUASI (Quality as a Service Infrastructure) [11,5,6,9]. QUASI's analytical cornerstone is a model of the design space that implicitly captures all configurations on which test jobs might run.…”
Section: Testing Individual Components
confidence: 99%
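
One way to read "implicitly captures all configurations" is that the model stores options and inter-option constraints rather than the exponentially large set of configurations itself, enumerating valid points only on demand. This is a speculative sketch of that reading; QUASI's actual model is not specified here and the example options are invented.

```python
import itertools

class DesignSpace:
    """Implicit configuration-space model: options with legal settings
    plus constraints over full configurations. Valid configurations are
    generated on demand, so the model stays small even when the space
    is huge.
    """
    def __init__(self, options, constraints):
        self.options = options          # {name: [legal settings]}
        self.constraints = constraints  # predicates over a full config

    def configurations(self):
        names = list(self.options)
        for values in itertools.product(*(self.options[n] for n in names)):
            cfg = dict(zip(names, values))
            if all(ok(cfg) for ok in self.constraints):
                yield cfg

# Example: two invented options with one compatibility constraint.
space = DesignSpace(
    {"threading": ["single", "pooled"], "reactor": ["select", "wfmo"]},
    [lambda c: not (c["threading"] == "single" and c["reactor"] == "wfmo")],
)
# for cfg in space.configurations(): print(cfg)
```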