Abstract. Most enterprises operate within a complex and ever-changing context. To ensure that requirements keep pace with this changing context, users' feedback is advocated as a means of refreshing requirements knowledge so that it reflects the degree to which the system meets its design objectives. The traditional approach to users' feedback, based on data mining and text analysis, is often limited, partly due to the ad-hoc nature of the feedback and partly due to the methods used to acquire it. To maximize the expressiveness of users' feedback while still being able to analyse it efficiently, we propose that feedback acquisition should be designed with that goal in mind. This paper contributes to that aim by presenting an empirical study that investigates users' perspectives on the constituents of feedback and how they could be structured. The results provide a baseline for modelling and customizing feedback for enterprise systems in order to maintain and evolve their requirements.
Crowdsourcing is an emerging online paradigm for problem solving that involves a large number of people, often recruited on a voluntary basis and rewarded with tangible or intangible incentives. It harnesses the power of the crowd to minimize costs and to solve problems that inherently require a large, decentralized and diverse crowd. In this paper, we advocate the potential of crowdsourcing for software evaluation, especially for complex and highly variable software systems that operate in diverse, even unpredictable, contexts. Through iterative feedback, the crowd can enrich developers' knowledge about software evaluation and keep it current. Although this seems promising, crowdsourcing evaluation introduces a new range of challenges, mainly concerning how to organize the crowd and how to provide the right platforms to obtain and process their input. We focus on the activity of obtaining evaluation feedback from the crowd and conduct two focus groups to understand the various aspects of this activity. We conclude by reporting a set of challenges that must be addressed to realize correct and efficient crowdsourcing mechanisms for software evaluation.