Systematic literature reviews (SLRs) are a major tool for supporting evidence-based software engineering. Adapting the procedures involved in such a review to meet the needs of software engineering and its literature remains an ongoing process. As part of this refinement process, we undertook two case studies that aimed 1) to compare the use of targeted manual searches with broad automated searches and 2) to compare different methods of reaching a consensus on quality. For Case 1, we compared a tertiary study of systematic literature reviews, which used a manual search of selected journals and conferences, with a replication of that study based on a broad automated search. We found that broad automated searches find more studies than restricted manual searches, but many of the additional studies found may be of poor quality. Researchers undertaking SLRs may be justified in using targeted manual searches if they intend to omit low-quality papers, or if they are assessing research trends in research methodologies. For Case 2, we analyzed the process used to evaluate the quality of SLRs. We conclude that if quality evaluation of primary studies is a critical component of a specific SLR, assessments should be based on three independent evaluators and incorporate at least two rounds of discussion.