Evaluation of design artefacts and design theories is a key activity in Design Science Research (DSR), as it provides feedback for further development and (if done correctly) assures the rigour of the research. However, the extant DSR literature provides insufficient guidance on evaluation to enable design science researchers to effectively design and incorporate evaluation activities into a DSR project in a way that achieves DSR goals and objectives. To address this research gap, this paper develops, explicates, and provides evidence for the utility of a Framework for Evaluation in Design Science (FEDS), together with a process to guide design science researchers in developing a strategy for evaluating the artefacts they develop within a DSR project. A FEDS strategy considers why, when, how, and what to evaluate. FEDS includes a two-dimensional characterisation of DSR evaluation episodes (particular evaluations), with one dimension being the functional purpose of the evaluation (formative or summative) and the other being the paradigm of the evaluation (artificial or naturalistic). The FEDS evaluation design process comprises four steps: (1) explicate the goals of the evaluation, (2) choose the evaluation strategy or strategies, (3) determine the properties to evaluate, and (4) design the individual evaluation episode(s). The paper illustrates the framework with two examples and provides evidence of its utility via a naturalistic, summative evaluation through its use on an actual DSR project.
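The two FEDS dimensions and the four-step design process described above can be sketched as a small data model. This is purely an illustrative sketch in Python; the class and field names (`Purpose`, `Paradigm`, `EvaluationEpisode`) are our own labels for the framework's concepts, not part of FEDS itself.

```python
from dataclasses import dataclass, field
from enum import Enum

class Purpose(Enum):
    # Functional purpose of an evaluation episode
    FORMATIVE = "formative"    # feedback to improve the evolving artefact
    SUMMATIVE = "summative"    # judgement of the finished artefact

class Paradigm(Enum):
    # Paradigm of an evaluation episode
    ARTIFICIAL = "artificial"      # e.g. laboratory experiment, simulation
    NATURALISTIC = "naturalistic"  # e.g. field use with real users and tasks

@dataclass
class EvaluationEpisode:
    """One particular evaluation, positioned on the two FEDS dimensions."""
    purpose: Purpose
    paradigm: Paradigm
    properties: list = field(default_factory=list)  # artefact properties under evaluation

# The four FEDS evaluation design steps, in order:
FEDS_STEPS = [
    "Explicate the goals of the evaluation",
    "Choose the evaluation strategy or strategies",
    "Determine the properties to evaluate",
    "Design the individual evaluation episode(s)",
]

# The paper's own validation of FEDS, expressed in these terms:
episode = EvaluationEpisode(
    purpose=Purpose.SUMMATIVE,
    paradigm=Paradigm.NATURALISTIC,
    properties=["utility"],
)
```

The point of the sketch is that every evaluation episode, however designed, occupies one cell of the formative/summative by artificial/naturalistic grid, and the four steps are what position it there.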
Evaluation is a central and essential activity in conducting rigorous Design Science Research (DSR), yet there is surprisingly little guidance about designing the DSR evaluation activity beyond suggesting possible methods that could be used for evaluation. This paper extends the notable exception of the existing framework of Pries-Heje et al. [11] to address this problem. The paper proposes an extended DSR evaluation framework together with a DSR evaluation design method that can guide DSR researchers in choosing an appropriate strategy for evaluating the design artifacts and design theories that form the output of DSR. The extended DSR evaluation framework asks the DSR researcher to consider (as input to the choice of the DSR evaluation strategy) contextual factors of goals, conditions, and constraints on the DSR evaluation, e.g. the type and level of desired rigor, the type of artifact, the need to support formative development of the designed artifacts, the properties of the artifact to be evaluated, and the constraints on available resources such as time, labor, facilities, expertise, and access to research subjects. The framework and method support matching these in the first instance to one or more DSR evaluation strategies, including the choice of ex ante (prior to artifact construction) versus ex post (after artifact construction) evaluation, and naturalistic (e.g., field setting) versus artificial (e.g., laboratory setting) evaluation. Based on the recommended evaluation strategy or strategies, guidance is provided concerning which methodologies might be appropriate within the chosen strategy or strategies.
This paper proposes and evaluates a soft systems approach to design science research. Soft Design Science provides an approach to developing new ways to improve human organizations, especially with consideration for social aspects, through the activities of design, development, instantiation, evaluation, and evolution of a technological artifact. The Soft Design Science approach merges the common design science research process (design, build-artifact, evaluation) with the iterative soft systems methodology. The design-build-artifact-evaluation process is iterated until the specific requirements are met. The generalized requirements are adjusted as the process continues to keep them aligned with the specific requirements. In the end, the artifact represents a general solution to a class of problems, shown to operate in one instance of that class. The proposed methodology is evaluated by an analysis of how it differs from, and could have informed and improved, a published design science study that used a design-oriented action research method.
Over the past decade design science research (DSR) has re-emerged as an important research paradigm in information systems. However, the current recommended approaches to conducting design science research do not consider ethics. Hence the purpose of this paper is to begin a debate about the need for ethical principles in DSR in IS. In order to start this debate we suggest a set of ethical principles for DSR in IS. While the interpretation and application of the principles might not always be straightforward, our argument is that all design science researchers in IS should give some consideration to ethics.