Design is often cited in the literature as important for attaining various properties and characteristics of a software system. At least for open-source projects, however, it can be hard to find evidence of ongoing design work in the technical artifacts produced during development. Although developers usually do not produce dedicated design documents, they do communicate about design in other ways. In this paper, we provide quantitative evidence that developers address design through discussions in commits, issues, and pull requests. To achieve this, we built a discussion classifier and automatically labeled 102,122 discussions from 77 projects. Based on this data, we make four observations about the projects: i) on average, 25% of the discussions in a project are about design; ii) on average, 26% of developers contribute to at least one design discussion; iii) only 1% of the developers contribute to more than 15% of the discussions in a project; and iv) the few developers who contribute to a broad range of design discussions are also the top committers in a project.
Assuring that a program conforms to its specification is a key concern in software quality assurance. Although there is substantial tool support for checking whether an implementation complies with its functional requirements, checking whether it conforms to its design remains an almost completely manual activity. In this paper, we present the concept of design tests: test-like programs that automatically check whether an implementation conforms to a specific design rule. Design rules are implemented directly in the target programming language in the form of tests. As a proof of concept, we present DesignWizard, an API developed to support design tests for Java programs as JUnit test cases. We applied design tests in two case studies and observed that our approach is suitable for checking conformance automatically. Moreover, we observed that designers and programmers appreciate design tests as executable documentation that can easily be kept up to date.

Overview and Motivation

Checking conformance between implementation and design rules is an important activity for guaranteeing the quality of source code. One process widely used by software companies is Design Review [9], a mechanism for finding errors in the design and its representation by splitting the team into groups and having them analyze each other's code. The problem with Design Review is that, being a manual process, it is error-prone; moreover, it does not scale, since analyzing a large number of classes may take several hours.

There have been several attempts to check conformance between code and design rules automatically. ArchJava [1], for example, proposes an extension of the Java programming language that aims at ensuring conformance between implementation and architectural constraints specified using the concepts of ports and components introduced by the authors.

Fairbanks et al. [4] use the term design fragment to refer to patterns that describe how a program interacts with frameworks. Design fragments describe what programmers should build to accomplish certain goals and which parts of the framework the programmer's code will interact with. The authors have created a catalog of design fragments, as well as mechanisms (implemented using XML) to check conformance between an application and those design fragments.

Another related work [8] describes a software reflexion model technique that helps engineers perform various software engineering tasks by exploiting the drift between design and implementation. This approach is interactive: the user specifies the expected model, extracts the actual model from the source code using static analysis, and then describes a mapping between the extracted source model and the stated high-level structural model.

Despite the existence of several academic approaches to checking conformance between code and design rules, the gap between the state of the art and the state of the practice has become most apparent. We believe one of the factors responsible f...
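To make the idea concrete, here is a minimal sketch of what a design test could look like as a JUnit test case. The DesignWizard-style calls shown here (the DesignWizard constructor taking a path to compiled code, getClass, getCallerClasses, getPackage) follow the general style described in the paper, but the exact package, type, and method names are assumptions for illustration, not the verified API.

import static org.junit.Assert.assertTrue;

import java.util.Set;
import org.junit.Test;

// Hypothetical imports mirroring a DesignWizard-like API;
// the actual package and type names may differ.
import org.designwizard.api.DesignWizard;
import org.designwizard.design.ClassNode;

public class LayeringDesignTest {

    // Design rule: only classes in the facade layer may call the DAO layer.
    @Test
    public void onlyFacadeMayCallDao() throws Exception {
        // Extract design facts from the compiled code under analysis.
        DesignWizard dw = new DesignWizard("target/classes");

        ClassNode dao = dw.getClass("myapp.dao.UserDAO");
        Set<ClassNode> callers = dao.getCallerClasses();

        // The rule fails, with a descriptive message, as soon as any
        // class outside the facade package accesses the DAO.
        for (ClassNode caller : callers) {
            assertTrue("Illegal access to DAO layer from " + caller.getName(),
                       caller.getPackage().getName().equals("myapp.facade"));
        }
    }
}

Because the rule is expressed as an ordinary JUnit test, it can run in the same build and continuous-integration pipeline as the functional test suite, which is what allows design conformance to be checked automatically rather than by manual review.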
Over time, source code tends to drift from the intended software architecture, often resulting in the loss of desired software qualities. To help keep code aligned with the intended architecture, the developers of core parts of the open-source Eclipse platform introduced a tool to express and check architectural rules. We analyze five years of Eclipse architecture-checking reports produced by the tool. We describe the kinds of rules the developers found helpful to check, how code diverges from the intended architecture, and how developers deal with architectural violations over time.
Verifying whether a software system meets its functional requirements plays an important role in software development. However, this activity is necessary but not sufficient to assure software quality. It is also important to check whether the code meets its design specification. Although there is substantial tool support for assuring that software does what it is supposed to do, verifying whether it conforms to its design remains an almost completely manual activity. In previous work, we proposed design tests: test-like programs that automatically check implementations against design rules. A design test is an application of the concept of testing to design conformance checking. To support design tests for Java projects, we developed DesignWizard, an API that allows developers to write and execute design tests using the popular JUnit testing framework. In this work, we present a study on the usability and scalability of DesignWizard for supporting structural conformance checking through design tests. We conducted a qualitative usability evaluation of DesignWizard using the Think Aloud Protocol for APIs. In the experiment, we challenged eleven developers to compose design tests for an open-source software project. We observed that the API meets most developers' expectations and that they had no difficulty coding design rules as design tests. To assess scalability, we evaluated DesignWizard's CPU time and memory consumption. The study indicates that both grow linearly with the size of the software under verification.
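The kind of rule that study participants could express is illustrated by the sketch below: a structural convention checked over every class in the code base. As in the earlier example, the DesignWizard-style calls (getAllClasses, getName, getPackage) are assumptions made for illustration rather than the verified API.

import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Hypothetical DesignWizard-style imports; actual names are assumptions.
import org.designwizard.api.DesignWizard;
import org.designwizard.design.ClassNode;

public class NamingConventionDesignTest {

    // Design rule: every class whose name ends with "DAO"
    // must reside in the myapp.dao package.
    @Test
    public void daoClassesStayInDaoPackage() throws Exception {
        DesignWizard dw = new DesignWizard("target/classes");

        // Iterate over all classes extracted from the compiled code
        // and check the convention on each matching class.
        for (ClassNode c : dw.getAllClasses()) {
            if (c.getName().endsWith("DAO")) {
                assertTrue("DAO class outside dao package: " + c.getName(),
                           c.getPackage().getName().equals("myapp.dao"));
            }
        }
    }
}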