Due to license restrictions and installation issues, it is often not feasible to experiment with software without making substantial investments. Especially in the case of legacy tools, even free software often turns out to be too costly (i.e., time-consuming) to install for evaluating the quality of a research contribution. After organizing a series of events related to software modeling, we have constructed (and started to use) SHARE, a system for sharing practically any type of software artifact with reviewers and other participants who have very limited time available. The system relies on cloud-computing technologies to provide online access to interactive environments containing all the tools, documentation, and input and output models needed to reproduce claimed research results. The system also enables one to clone such an environment and add further models or tools in order to extend a contribution or pinpoint a problem. In retrospect, we observe that the approach is not limited to software modeling, and SHARE is in fact already gaining acceptance in other fields.
Evaluating software-related research: a call to arms

The number of research contributions that rely on software is increasing. Especially when the contribution itself consists of an algorithm or information system, the results should at least be available for peer review and ideally even for reproduction by the complete research community. Section 1.1 describes how reproducibility problems in the graph transformation community triggered our work on SHARE. Section 1.2 outlines the different levels of reproducibility that can be observed in practice, independently of that research domain. The subsequent sections introduce our key solution to the underlying problems. More specifically, they describe why we are applying cloud-computing technologies, how we are integrating them, and how others can use our supportive information system, SHARE (Sharing Hosted Autonomous Research Environments [49]).