2012
DOI: 10.1007/bf03323525
Assessing the Quality of Model Differencing Engines

Abstract: In recent years, many tools and algorithms for model comparison and differencing have been proposed. Typically, the main focus of this research lay on being able to compute a difference in the first place; only very few papers have sufficiently addressed the quality of the delivered differences. This is a general shortcoming in the state of the art: there are currently no established community standards for how to assess the quality of differences, and it is neither possible to compare the quality of different algor…
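The abstract argues that computed model differences need quality assessment. A common way to make that concrete, sketched below under assumptions not taken from the paper, is to score an engine's edit script against a gold-standard diff using precision, recall, and F1. The function name, the tuple encoding of edit operations, and the example data are all illustrative.

```python
# Hypothetical sketch: score a computed set of edit operations against a
# reference ("gold standard") diff. Edits are modeled as hashable tuples
# such as ("add", "ClassA"); a real engine would use richer objects.

def diff_quality(computed, reference):
    """Return (precision, recall, f1) for a computed edit set
    measured against a reference edit set."""
    computed, reference = set(computed), set(reference)
    tp = len(computed & reference)  # correctly reported edits
    precision = tp / len(computed) if computed else 1.0
    recall = tp / len(reference) if reference else 1.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: the reference diff has three edits; the engine finds two of
# them and additionally reports one spurious edit.
ref = {("add", "ClassA"), ("delete", "attrX"), ("rename", "opFoo")}
got = {("add", "ClassA"), ("delete", "attrX"), ("add", "ClassB")}
p, r, f = diff_quality(got, ref)  # precision = recall = f1 = 2/3
```

A set-based score like this only works once both diffs are expressed in a comparable representation, which is exactly the kind of community standard the paper notes is missing.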

Cited by 4 publications (4 citation statements)
References 5 publications
“…The concept of quality in model comparison algorithms is relative to the model management activity these are integrated into, and the evaluation criteria used to assess their results depend on the specific use case [31]. For example, low-level difference representations might result convenient in semi-automated workflows, e.g.…”
Section: Evaluation of Comparison Quality
confidence: 99%
“…Benchmarking algorithms is considered to be an area where further research is required in this field [8,123]. Having standardized benchmark model datasets that can be used to evaluate algorithms is beneficial to researchers, as it is then possible to compare different algorithms objectively, regardless of their inner workings or the technology used to implement the algorithm.…”
Section: Evaluation Techniques (RQ2)
confidence: 99%
“…The question of how we can measure the quality of model differences can arise, i.e., what can be considered a "good" model difference [123]. There is also a clear need of standardized benchmark model sets [8,123], as we have discussed in Sect. 4.2. We examined the differences between the matching of structural and behavioral models in Sect.…”
Section: Open Questions (RQ3)
confidence: 99%
“…In addition, there are no test cases for testing different capabilities of these systems. There have been some proposals since then, but there is still research to be done in this area [74].…”
Section: Inconsistency Detection
confidence: 99%