A benefit of model-driven engineering is the automatic generation of artefacts from high-level models, through intermediary levels, using model transformations. In such a process, the input models must be well designed, and the model transformations must be trustworthy. Because of the specificities of models and transformations, classical software testing techniques have to be adapted. Among these techniques, mutation analysis has been ported, and a set of mutation operators has been defined. However, it currently requires considerable manual work and suffers from the cost of the test data set improvement activity, a difficult and time-consuming job that reduces the benefits of mutation analysis. This paper addresses the test data set improvement activity: model transformation traceability, in conjunction with a model of mutation operators and a dedicated algorithm, allows improved test models to be produced automatically or semi-automatically. The approach is validated and illustrated in two case studies written in Kermeta.

Mutation analysis creates mutated versions of the program under test, each containing one kind of fault that could be introduced by programmers. The efficiency of a given test data set is then measured by its ability to reveal the fault injected in each mutated version (killing these mutants). If the proportion of killed mutants [3] is considered too low, it is necessary to improve the test data set [4]. This activity corresponds to the modification of existing test data or the generation of new test data and is called test data set improvement. It is usually seen as the most time-consuming step. Experiments show that the test data set initially provided by the tester often already detects 50% to 70% of the mutants as faulty [5]. However, several works state that improving the test set until it reveals the errors in 95% of the mutants is difficult in most cases [6,7]. Indeed, each non-killed (i.e. alive) mutant must be analysed in order to understand why no test data reveals its injected fault and, consequently, how the test data set has to be improved.

This paper focuses on the test data set improvement step of the mutation analysis process, dedicated to the testing of model transformations. In this context, test data are models. Because of their intrinsic nature, model transformations rely on specific operations (e.g. data collection in a typed graph or collection filtering) that rarely occur in traditional programming. In addition, many different dedicated languages exist to implement model transformations. Thus, the mutation analysis techniques used for traditional programming cannot be directly applied to model transformations; model transformation testing raises new challenges [8]. A set of mutation operators dedicated to model transformations has been previously introduced [9]. This paper tackles the problem of test model set improvement by automatically taking mutation operators into account. Tools and heuristics are provided to assist the creation of new test models. The approach relies on a high-level representation of the mutation operators and a traceability mechanism.
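As a minimal sketch of the mutation-score loop described above (the transformation, the mutants, and the test models are stand-in callables, not the paper's tooling; only the bookkeeping mirrors the process in the text):

    def mutation_score(reference, mutants, test_models):
        """Run every test model on every mutant; a mutant is killed when
        at least one test model makes it produce an output different from
        the reference transformation's output."""
        alive = []
        for mutant in mutants:
            if not any(mutant(m) != reference(m) for m in test_models):
                alive.append(mutant)  # no test model reveals the injected fault
        score = (len(mutants) - len(alive)) / len(mutants)
        # The alive mutants drive the improvement activity: each one must
        # be analysed and either killed by a new/adapted test model or
        # set aside as an equivalent mutant that no test can kill.
        return score, alive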
Model transformations are intrinsically related to model-driven engineering. As standardised metamodels grow in size, large transformations need to be developed to cover them. Several approaches promote separation of concerns in this context, that is, the definition of small transformations in order to master the overall complexity. Unfortunately, the decomposition of transformations into smaller ones raises new issues: organising the increasing number of transformations and ensuring their composition (i.e. their chaining). In this paper, we propose to use feature models to classify the model transformations dedicated to a given business domain. Based on these feature models, automated techniques support the designer along two axes: (i) the definition of a valid set of model transformations and (ii) the generation of an executable chain of model transformations that accurately implements the designer's intention. This approach is validated on Gaspard2, a tool dedicated to the design of embedded systems.
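The two services the abstract attributes to feature models can be sketched as follows; the data structures (requires/excludes maps, transformation triples) are simplifying assumptions, not Gaspard2's actual representation:

    def is_valid_selection(selected, requires, excludes):
        """Check a feature selection against requires/excludes constraints."""
        for f in selected:
            if any(dep not in selected for dep in requires.get(f, ())):
                return False
            if any(x in selected for x in excludes.get(f, ())):
                return False
        return True

    def chain(transforms, source_mm, target_mm):
        """Greedily chain transformations (in_mm, out_mm, name) from the
        source metamodel to the target one; a real chainer would also
        backtrack, which this sketch omits."""
        order, current, pool = [], source_mm, list(transforms)
        while current != target_mm:
            step = next((t for t in pool if t[0] == current), None)
            if step is None:
                raise ValueError(f"no transformation accepts {current}")
            pool.remove(step)
            order.append(step[2])
            current = step[1]
        return order

    # Usage with hypothetical features and metamodels:
    requires = {"gaspard": {"rtl"}}
    excludes = {"opencl": {"vhdl"}}
    print(is_valid_selection({"gaspard", "rtl"}, requires, excludes))   # True
    steps = [("UML", "RTL", "uml2rtl"), ("RTL", "VHDL", "rtl2vhdl")]
    print(chain(steps, "UML", "VHDL"))   # ['uml2rtl', 'rtl2vhdl']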
Abstract. Model transformations cannot be tested directly using program testing techniques; these techniques have to be adapted to the characteristics of models. In this paper we focus on one test technique: mutation analysis. This technique qualifies a test data set by analysing the execution results of intentionally faulty program versions. If the degree of qualification is not satisfactory, the test data set has to be improved. In the context of models, this step is currently tedious and performed manually. We propose an approach based on traceability mechanisms to ease the improvement of the test model set in the mutation analysis process. Using a benchmark, we illustrate the fast, automatic identification of the input model to change; a new model is then created in order to raise the quality of the test data set.
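A hedged illustration of that identification step, assuming a simplified trace that merely records which transformation rules each test model exercises (the real traceability model is richer):

    def models_to_adapt(mutated_rule, test_models, trace):
        """Test models whose execution already covers the mutated rule:
        the most promising candidates to copy and adapt into a new,
        killing test model."""
        return [m for m in test_models if mutated_rule in trace.get(m, set())]

    # Hypothetical rule names and trace:
    trace = {"m1": {"Class2Table"}, "m2": {"Class2Table", "Attr2Column"}}
    print(models_to_adapt("Attr2Column", ["m1", "m2"], trace))   # ['m2']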
Model transformation is one of the key practices of Model-Driven Engineering. Building very large model transformations may benefit from the construction of small transformations, in order to manage complexity and enhance reusability, maintainability and modularity. The decomposition of transformations into smaller ones raises the issue of assuring the validity of a composition: if two or more transformations are chained together, are the results of executing the chain the expected ones? This paper addresses the challenge of determining whether two transformations conflict. Transformations can conflict in numerous ways, e.g., in terms of preconditions, postconditions, or the behaviour of individual rules. In this paper, we demonstrate a strong notion of conflict, via commutativity: two transformations do not conflict if they can be chained in either order and, in doing so, produce identical results. We propose an approach to detecting such potential conflicts based on static analysis, exploiting an intermediate representation of transformations that is independent of any concrete transformation language.
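The commutativity criterion itself is easy to state as a dynamic check on a single model, sketched below; the paper's contribution is a static analysis that detects potential conflicts without executing the transformations at all. Model equality is a placeholder here, since real model comparison requires structural matching:

    def commute(t1, t2, model, equal=lambda a, b: a == b):
        """True iff chaining t1;t2 and t2;t1 yields identical results
        on this model (a necessary, per-model witness of non-conflict)."""
        return equal(t2(t1(model)), t1(t2(model)))

    # Toy "models" as dicts; independent edits commute, coupled ones do not:
    add_id   = lambda m: {**m, "id": 0}
    set_name = lambda m: {**m, "name": "N"}
    print(commute(add_id, set_name, {}))            # True
    bump     = lambda m: {**m, "id": m.get("id", 0) + 1}
    double   = lambda m: {**m, "id": m.get("id", 0) * 2}
    print(commute(bump, double, {"id": 1}))         # False: (1+1)*2 != 1*2+1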
Context. Refining or altering existing behavior is the daily work of every developer, but such changes cannot always be anticipated, and software sometimes cannot be stopped. In such cases, unanticipated adaptation of running systems is of interest for many scenarios, ranging from functional upgrades to on-the-fly debugging or monitoring of critical applications.

Inquiry. A way of altering software at run time is using behavioral reflection, which is particularly well-suited for unanticipated adaptation of real-world systems. Partial behavioral reflection is not a new idea, and for years many efforts have been made to propose a practical way of expressing it. Many of these efforts resulted in practical solutions that, however, introduced a semantic gap between the code that requires adaptation and the expression of the partial behavior.

Approach. The idea of closing the gap between the code and the expression of the partial behavior led to the implementation of the Reflectivity framework. Using Reflectivity, developers annotate abstract syntax tree (AST) nodes with meta-behavior which is taken into account by the compiler to produce behavioral variations. Reflectivity is designed for dynamically typed systems which provide an AST representation of the program that is causally connected to the source code, and which support run-time recompilation. In this paper, we present Reflectivity, its API, its implementation in Pharo, its usage and limitations. We reflect on ten years of use of Reflectivity, and investigate in the literature how it has been used as a basic building block of many innovative ideas.

Knowledge. Reflectivity has been used by 21 projects in the last decade, to implement reflective libraries or language extensions, code instrumentation, dynamic software update, debugging tools and software analysis tools. Our investigation shows that developers needed powerful and customized sets of heterogeneous reflection features for their projects, which Reflectivity provided. Despite its limitations, Reflectivity has proven to be a practical way of working with fine-grained reflective operations (at the AST level), and enabled a powerful way of dynamically adding and modifying behavior. By instrumenting through AST annotations, Reflectivity provides a flexible means to bridge the gap between the expression of the meta-behavior and the source code.

Grounding. Reflectivity is actively used in research projects. During the past ten years, it served both as an implementation support and as a foundation for much research work, including PhD theses and papers at conferences, workshops and in journals. Reflectivity is now an important library of the Pharo language, and is integrated in the standard distribution of the platform.

Importance. Reflectivity exposes powerful abstractions to deal with behavioral adaptation, while providing a mature framework for unanticipated, non-intrusive, sub-method and partial behavioral reflection based on AST annotation. Finally, the AST annotation feature of Reflectivity opens new experimentation opportunities.
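Reflectivity itself is a Pharo framework operating on the language's real, causally connected AST; the following toy sketch (all names hypothetical, and not Reflectivity's API) only mimics its central idea: attaching meta-behavior to a single AST node so that it runs when that node is evaluated, leaving the rest of the program untouched:

    class Node:
        """A toy expression-AST node that can carry a 'metalink'."""
        def __init__(self, op, *children, value=None):
            self.op, self.children, self.value = op, children, value
            self.metalink = None          # optional attached meta-behavior

    def evaluate(node):
        if node.metalink:                 # before-control: run meta-behavior first
            node.metalink(node)
        if node.op == "const":
            return node.value
        left, right = (evaluate(c) for c in node.children)
        return left + right if node.op == "add" else left * right

    # Partial, fine-grained reflection: instrument one sub-node only,
    # without editing the base program.
    expr = Node("add", Node("const", value=2),
                       Node("mul", Node("const", value=3), Node("const", value=4)))
    expr.children[1].metalink = lambda n: print(f"about to evaluate {n.op}")
    print(evaluate(expr))                 # prints the trace line, then 14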
Model-Driven Engineering (MDE) promotes models as the main artifacts of the software development process. Each model represents a viewpoint of a system. MDE aims to automatically generate code from an abstract model through various intermediary models. Such a generation relies on successive model transformations that shift a source model to a target one. The resulting transformation sequence is the skeleton of an MDE-based approach, much as the compiler is in traditional ones. Transformations are executed many times, which justifies their development effort; but if they are faulty, they can widely spread errors across models. It is therefore indispensable to test them and, when needed, debug them. In this paper, we propose an error localization algorithm based on a traceability mechanism to ease the debugging of transformations. We illustrate this approach in the context of embedded systems development.
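A minimal sketch of trace-based localization, assuming each trace link records the rule that produced a target element from some source elements (a simplification of the paper's traceability model and algorithm):

    from collections import namedtuple

    Link = namedtuple("Link", "rule sources target")

    def suspect_rules(faulty_targets, trace):
        """Rules that contributed to at least one faulty target element:
        following the links backwards narrows debugging to these rules."""
        return {l.rule for l in trace if l.target in faulty_targets}

    # Hypothetical trace: a faulty signal 's1' implicates only its producer.
    trace = [Link("Task2Thread", ["t1"], "th1"),
             Link("Port2Signal", ["p1"], "s1")]
    print(suspect_rules({"s1"}, trace))   # {'Port2Signal'}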
Debugging is one of the most important and time-consuming activities in software maintenance, yet mainstream debuggers are not well adapted to several debugging scenarios. This has led to research on new techniques covering specific families of complex bugs. Notably, recent research proposes to empower developers with scripting DSLs, plugin-based, and moldable debuggers. However, these solutions are tailored to specific use cases, or too costly for one-time-use scenarios.

In this paper we argue that exposing a debugging scripting interface in mainstream debuggers helps solve many challenging debugging scenarios. For this purpose, we present Sindarin, a scripting API that eases the expression and automation of the different strategies developers pursue during their debugging sessions. Sindarin provides a GDB-like API, augmented with AST-bytecode-source-code mappings and object-centric capabilities. To demonstrate the versatility of Sindarin, we reproduce several advanced breakpoints and non-trivial debugging mechanisms from the literature.
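Sindarin is a Pharo API, so the following Python sketch is only an illustration of one capability the abstract mentions: an object-centric breakpoint that fires for a single watched instance. The sys.settrace scheme is an assumption for the sketch, not Sindarin's mechanism:

    import sys

    def object_centric_break(target, on_hit):
        """Install a trace hook that fires whenever a method runs with
        `self is target`, i.e. a breakpoint scoped to one object."""
        def tracer(frame, event, arg):
            if event == "call" and frame.f_locals.get("self") is target:
                on_hit(frame.f_code.co_name)
            return tracer
        sys.settrace(tracer)

    class Account:
        def deposit(self, amount): pass

    a, b = Account(), Account()
    object_centric_break(a, lambda name: print(f"hit: {name} on watched object"))
    b.deposit(1)   # silent: not the watched instance
    a.deposit(1)   # prints "hit: deposit on watched object"
    sys.settrace(None)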