Leveraging crowdsourcing in software development has received growing attention in research and practice. Crowd feedback offers a scalable and flexible way to evaluate software design solutions, and the potential of crowd-feedback systems has been demonstrated in different contexts by existing studies. However, previous research lacks a deep understanding of how the individual design features of crowd-feedback systems affect feedback quality and quantity. Additionally, existing studies have primarily focused on the requirements of feedback requesters and have not fully explored the qualitative perspectives of crowd-based feedback providers. In this paper, we address these research gaps with two studies. In the first study, we conducted a feature analysis (N=10) and concluded that, from a user perspective, a crowd-feedback system should offer five core features: scenario, speech-to-text, markers, categories, and star rating. In the second study, we analyzed the effects of these design features on crowdworkers' perceptions and feedback outcomes (N=210). We found that providing feedback providers with scenarios describing the context of use is perceived as most important. Regarding feedback quality, we discovered that more features are not always better, as overwhelming feedback providers can decrease feedback quality. Offering feedback providers categories as inspiration can increase feedback quantity. With our work, we contribute to research on crowd-feedback systems by aligning crowdworker perspectives with feedback outcomes, thereby making software evaluation not only more scalable but also more human-centered.
Figure 1: Our interactive coding system enables a crowd of non-experts to code semi-structured qualitative data. (1) First, the workers code the primary topics of all interview answers. (2) Then, the workers code each interview answer with the respective specific codes in separate tasks for each primary topic. (3) Finally, the code most mentioned by the workers is set as the final label. The agreement among workers indicates the crowd's consistency, and the agreement between the workers' and the experts' final labels shows the crowd's accuracy.
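The aggregation step described in the caption (majority voting over worker codes, plus agreement rates for consistency and accuracy) can be illustrated with a minimal Python sketch. This is not the paper's implementation; the function names and example codes are hypothetical.

```python
from collections import Counter

def majority_label(worker_codes):
    """Return the code mentioned most often by the workers
    for one item; this becomes the final crowd label."""
    return Counter(worker_codes).most_common(1)[0][0]

def agreement(labels_a, labels_b):
    """Fraction of items on which two label sequences match,
    e.g. crowd final labels vs. expert labels (accuracy)."""
    assert len(labels_a) == len(labels_b)
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# Hypothetical example: three workers code one interview answer.
codes = ["usability", "usability", "performance"]
final = majority_label(codes)                      # "usability"
print(agreement([final], ["usability"]))           # 1.0 vs. expert label
```

Under this reading, inter-worker agreement on the same items plays the role of consistency, while agreement of the aggregated labels with the expert labels plays the role of accuracy.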