“…This includes a detailed evaluation of all our instance models and queries using different complexity metrics and the description of our measurement process. The selection of metrics was motivated by earlier results of [22] where the values of different metrics are compared to the execution time of different queries.…”
Section: Measurement Context (mentioning, confidence: 99%)
“…It relies on synthetic models scalable to any model size, and defines both query and model manipulation steps to measure the real impact of query re-evaluation. [22] aimed to predict query evaluation performance based on metrics of both models and queries. In the current paper, we reused these metrics on real-world models to evaluate the query engine instead of synthetic models; while our results were largely similar, a further detailed comparison is required to analyze their usefulness.…”
Section: Software Analysis Using Generic Modeling Techniques (mentioning, confidence: 99%)
“…In the case of OCL expressions, we relied on our previous experience in comparing model query tools for [22], where OCL experts were asked to verify the developed queries. Visitors were executed on both model representations, while the graph patterns (both for local search-based and incremental queries) and the OCL queries were evaluated on the EMF representation.…”
Section: Measurement Process (mentioning, confidence: 99%)
“…For graph patterns, we rely on metrics defined in [22]: the number of query variables and parameters, the number of edge and attribute constraints, the number of subpattern calls, and the combined number of negative pattern calls and match counters (NEG). It is important to note that the metrics were not calculated from the Fig.…”
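The pattern metrics quoted above amount to simple counts over the structure of a query. As an illustration only, a minimal sketch in Python: the `GraphPattern` representation and the example query below are invented for this sketch and are not the actual data model or metric implementation of [22].

```python
# Hypothetical sketch: counting structural metrics of a graph pattern,
# in the spirit of the metrics listed above (variables, parameters,
# edge/attribute constraints, subpattern calls, NEG).
# The pattern representation is invented for illustration.
from dataclasses import dataclass

@dataclass
class GraphPattern:
    variables: list              # all query variables
    parameters: list             # subset of variables exposed as parameters
    edge_constraints: list       # (src, edge_type, dst) triples
    attribute_constraints: list  # (var, attribute, value) triples
    subpattern_calls: list       # names of positively called patterns
    negative_calls: list         # negative application conditions
    match_counters: list         # match-counting constraints

def pattern_metrics(p: GraphPattern) -> dict:
    """Collect the per-pattern metrics as a name -> value map."""
    return {
        "variables": len(p.variables),
        "parameters": len(p.parameters),
        "edge_constraints": len(p.edge_constraints),
        "attribute_constraints": len(p.attribute_constraints),
        "subpattern_calls": len(p.subpattern_calls),
        # NEG combines negative pattern calls and match counters
        "NEG": len(p.negative_calls) + len(p.match_counters),
    }

# Example: a tiny anti-pattern-style query with two variables,
# one edge constraint and one negative condition (all made up)
p = GraphPattern(
    variables=["cls", "method"],
    parameters=["cls"],
    edge_constraints=[("cls", "declares", "method")],
    attribute_constraints=[("method", "visibility", "public")],
    subpattern_calls=[],
    negative_calls=["hasOverride"],
    match_counters=[],
)
print(pattern_metrics(p))
```

Such counts can be tabulated per query and correlated against measured evaluation times, which is the kind of analysis the excerpt refers to.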
Abstract: Program queries play an important role in several software evolution tasks like program comprehension, impact analysis, or the automated identification of anti-patterns for complex refactoring operations. A central artifact of these tasks is the reverse engineered program model built up from the source code (usually an Abstract Semantic Graph, ASG), which is traditionally post-processed by dedicated, hand-coded queries. Objective: Our paper investigates the costs and benefits of using the popular industrial Eclipse Modeling Framework (EMF) as the underlying representation of program models processed by four different general-purpose model query techniques based on native Java code, OCL evaluation and (incremental) graph pattern matching. Method: We provide an in-depth comparison of these techniques on the source code of 28 Java projects using anti-pattern queries taken from refactoring operations in different usage profiles. Results: Our results show that general-purpose model queries can outperform hand-coded queries by 2-3 orders of magnitude, with the trade-off of an increase in memory consumption and model load time of up to an order of magnitude. Conclusion: The measurement results of usage profiles can be used as guidelines for selecting the appropriate query technologies in concrete scenarios.
“…Our research group investigated the correlation between model query performance and metrics describing the queries and the models [19]. The authors of [33] use metrics to understand the main characteristics of domain-specific metamodels and to study model transformations with respect to the corresponding metamodels, searching for correlations between them via analytical measures [34].…”
Custom generators of graph-based models are used in MDE for many purposes, such as functional testing and performance benchmarking of modeling environments, to ensure the correctness and scalability of tools. However, while existing generators can produce models of increasing size, these models tend to be simple and synthetic, which hinders their credibility for industrial and research benchmarking purposes. But how can a realistic model used in software and systems engineering be characterized? This question is investigated in the paper by collecting over 17 different widely used graph metrics taken from other disciplines (e.g. network theory) and evaluating them on 83 instance models originating from six modeling domains. Our preliminary results show that certain metrics are similar within a domain but differ greatly between domains, which makes them suitable input for future instance model generators to derive more realistic models.
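The graph metrics mentioned in the abstract above can be computed directly over an instance model viewed as a directed graph. A minimal sketch, assuming a plain edge-list representation; the metric selection here (node count, edge count, average out-degree) is illustrative and does not reproduce the paper's full set of 17+ metrics:

```python
# Hypothetical sketch: evaluating simple graph metrics on an instance
# model given as (source, target) edge pairs. The example model below
# is invented for illustration.
from collections import defaultdict

def graph_metrics(edges):
    """Compute node count, edge count and average out-degree
    of a directed graph given as (src, dst) pairs."""
    out_degree = defaultdict(int)
    nodes = set()
    for src, dst in edges:
        out_degree[src] += 1
        nodes.update((src, dst))
    n, m = len(nodes), len(edges)
    return {
        "nodes": n,
        "edges": m,
        "avg_out_degree": m / n if n else 0.0,
    }

# A tiny containment-style model: one root with three children,
# plus one cross-reference edge
model = [("root", "a"), ("root", "b"), ("root", "c"), ("a", "b")]
print(graph_metrics(model))
# → {'nodes': 4, 'edges': 4, 'avg_out_degree': 1.0}
```

Computing such metrics per domain and comparing their distributions is how one could check the abstract's claim that metrics are similar within a domain but differ between domains.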