2004 Australian Software Engineering Conference, Proceedings
DOI: 10.1109/aswec.2004.1290484

A framework for classifying and comparing software architecture evaluation methods

Abstract: Software architecture evaluation has been proposed as a means to achieve quality attributes such as maintainability and reliability in a system. The objective of the evaluation is to assess whether or not the architecture will lead to the desired quality attributes. Recently, there have been a number of evaluation methods proposed. There is, however, little consensus on the technical and non-technical issues that a method should comprehensively address and which of the existing methods is most suitable for a p…


Cited by 156 publications (121 citation statements)
References 32 publications
“…Early detection of problems in architectural design through evaluation techniques reduces development costs and improves the quality of systems [5,8,14]. Thus, improving the effectiveness of evaluation techniques (in terms of problem detection) for architectural design is important.…”
Section: Independent Software Architecture Review (ISAR)
Citation type: mentioning
Confidence: 99%
“…These questions⁴ helped the participants to focus on understanding the patterns and metrics and allowed us to control their comprehension of the problem. These questions did not influence the experiment execution and results; their purpose was solely to control the comprehension of the patterns and metrics.…”
(Footnote 4: “The control questions are included in the booklet, which is available at http://users.dsic.upv.es/~jagonzalez/IST/family.html, and should be answered by the participants after …”)
Section: The Second Experiment (UPV2)
Citation type: mentioning
Confidence: 99%
“…Control questions: We included a set of control questions⁴ in the experimental material in order to analyze the comprehension of the patterns and the metrics being applied. These questions⁴ helped the participants to focus on understanding the patterns and metrics and allowed us to control their comprehension of the problem.…”
Section: The Second Experiment (UPV2)
Citation type: mentioning
Confidence: 99%
“…Scenario-based SA evaluation methods such as ATAM, SAAM, and ALMA are considered relatively mature and established, as they have been widely applied and more rigorously validated in various domains [7]. Figure 1 shows a generic process of scenario-based SA evaluation.…”
Section: Quality Attributes and Software Architecture Evaluation
Citation type: mentioning
Confidence: 99%
“…A number of methods, such as the Architecture Tradeoff Analysis Method (ATAM) [4], the Software Architecture Analysis Method (SAAM) [5], and Architecture-Level Maintainability Analysis (ALMA) [6], have been developed to evaluate quality-related issues at the SA level. Most of these methods are structurally the same, but there are a number of differences among their activities and techniques [7]. The accuracy of the results of these methods depends largely on the quality of the scenarios used for the SA evaluation, as these are scenario-based methods [8].…”
Section: Introduction
Citation type: mentioning
Confidence: 99%