Software development in highly dynamic environments poses high risks to development organizations. One such risk is that the developed software may be of little or no value to customers, wasting the invested development effort. Continuous experimentation, as an experiment-driven development approach, may reduce such development risks by iteratively testing product and service assumptions that are critical to the success of the software. Although several experiment-driven development approaches are available, there is little guidance on how to introduce continuous experimentation into an organization. This article presents a multiple-case study that aims at better understanding the process of introducing continuous experimentation into an organization with an already established development process. The results from the study show that companies are open to adopting such an approach and learn throughout the introduction process. Several benefits were obtained, such as reduced development efforts, deeper customer insights, and better support for development decisions. Challenges included complex stakeholder structures, difficulties in defining success criteria, and building experimentation skills. Our findings indicate that organizational factors may limit the benefits of experimentation. Moreover, introducing continuous experimentation requires fundamental changes in how companies operate, and a systematic introduction process can increase the chances of a successful start.
Processes and practices used in data science projects have been reshaped, especially over the last decade, and differ from their software engineering counterparts. However, data science relies to a large extent on software, and, once put to use, the results of a data science project are often embedded in a software context. Hence, seeking synergy between software engineering and data science might open promising avenues. Yet, while there are various studies on data science workflows and data science project teams, there have been no attempts to combine these two closely interlinked aspects. Furthermore, existing studies usually focus on practices within a single company. Our study fills these gaps with a multi-company case study, concentrating both on the roles found in data science project teams and on the process. In this paper, we have studied a number of practicing data scientists to understand a typical process flow for a data science project. In addition, we studied the involved roles and the teamwork that takes place in the data science context. Our analysis revealed three main elements of data science projects: Experimentation, Development Approach, and Multidisciplinary team(work). These key concepts are further broken down into 13 sub-themes in total. The identified themes pinpoint critical elements and challenges of data science projects, which are still often carried out in an ad-hoc fashion. Finally, we compare the results with modern software development to analyse how well the two match.
Abstract. [Context and motivation] In order to build successful software products and services, customer involvement and an understanding of customers' requirements and behaviours during the development process are essential. [Question/Problem] Although continuous deployment is gaining attention in the software industry as an approach for continuously learning from customers, there is no common overview of the topic yet. [Principal ideas/results] To provide a common overview, we conduct a secondary study that explores the state of reported evidence on customer input during continuous deployment in software engineering, including the potential benefits, challenges, methods and tools of the field. [Contribution] We report on a systematic literature review covering 25 primary studies. Our analysis of these studies reveals that although customer involvement in continuous deployment is highly relevant in the software industry today, it has been relatively unexplored in academic research. The field is seen as beneficial, but there are a number of challenges related to it, such as misperceptions among customers. In addition to providing a comprehensive overview of the research field, we clarify the gaps in knowledge that need to be studied further.
Software companies need capabilities to evaluate the user value and the success of their products. This is especially crucial in highly competitive markets, such as the mobile game industry, where thousands of new games are introduced every month. Game companies often run continuous experiments as an integrated part of the overall development process. This paper presents a game company's journey with experimentation, describing how experiments are used at different stages of the development cycle to produce reliable, meaningful data for developers, and how to balance between different data collection methods. Our study indicates that experiments are important, in different forms, at all stages of development. In early development stages, experiments can be run with proxy users due to the lack of real users, whereas later in development, Key Performance Indicator (KPI) metrics play the most important role in experiments. Establishing concrete goals for the experiments, balancing qualitative and quantitative data collection, and experimenting throughout the development process under the guidance of effective leadership appear to be the keys to success.