A consequence of the growing number of empirical studies in software engineering is the need for systematic approaches to assessing and aggregating research outcomes, so as to provide a balanced and objective summary of the research evidence on a particular topic. This paper reports experiences with applying one such approach, the systematic literature review, to published studies relevant to topics within the software engineering domain. The systematic literature review process is summarised, a number of reviews undertaken by the authors and others are described, and some lessons about the applicability of this practice to software engineering are drawn. The basic systematic literature review process appears appropriate to software engineering, and the preparation and validation of a review protocol in advance of a review activity is especially valuable. The paper highlights areas where the process needs some adaptation to the domain-specific characteristics of software engineering, as well as areas where improvements to current software engineering infrastructure and practices would enhance its applicability. In particular, the infrastructure support provided by software engineering indexing databases is inadequate. The quality of abstracts is also poor: it is usually not possible to judge the relevance of a study from the abstract alone.
The Operational Multiscale Environment Model with Grid Adaptivity (OMEGA) and its embedded Atmospheric Dispersion Model is a new atmospheric simulation system for real-time hazard prediction, conceived out of a need to advance the state of the art in numerical weather prediction in order to improve the capability to predict the transport and diffusion of hazardous releases. OMEGA is based upon an unstructured grid that makes possible a continuously varying horizontal grid resolution ranging from 100 km down to 1 km, and a vertical resolution from a few tens of meters in the boundary layer to 1 km in the free atmosphere. OMEGA is also naturally scale spanning because its unstructured grid permits the addition of grid elements at any point in space and time. In particular, unstructured grid cells in the horizontal dimension can increase local resolution to better capture topography or the important physical features of the atmospheric circulation and cloud dynamics. This means that OMEGA can readily adapt its grid to stationary surface or terrain features, or to dynamic features in the evolving weather pattern. While adaptive numerical techniques have yet to be extensively applied in atmospheric modelling, OMEGA is the first model to exploit the adaptive nature of an unstructured gridding technique for atmospheric simulation and hence real-time hazard prediction. The purpose of this paper is to provide a detailed description of the OMEGA model and system, together with a comparison of OMEGA forecast results with data.
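As an illustration of the adaptive-gridding idea described above, the following minimal sketch (in Python; the function, cell representation, and error indicator are hypothetical illustrations, not part of OMEGA) splits cells wherever an indicator exceeds a threshold, locally increasing resolution the way an unstructured mesh adds elements near terrain or flow features:

```python
def refine(cells, indicator, threshold):
    """Split any cell whose indicator value exceeds the threshold.

    cells: list of ((x, y), size) tuples; indicator: fn(center) -> float.
    A refined cell is replaced by four children of half the size,
    mimicking how an adaptive unstructured grid raises local resolution.
    (Illustrative only; OMEGA uses triangular unstructured elements.)
    """
    out = []
    for (x, y), size in cells:
        if indicator((x, y)) > threshold:
            h = size / 2.0
            # four quadrant children at half resolution
            for dx in (-h / 2, h / 2):
                for dy in (-h / 2, h / 2):
                    out.append(((x + dx, y + dy), h))
        else:
            out.append(((x, y), size))
    return out

# Example: refine near a hypothetical "steep terrain" feature at the origin.
grid = [((x + 0.5, y + 0.5), 1.0) for x in range(4) for y in range(4)]
steep = lambda c: 1.0 / (0.1 + c[0] ** 2 + c[1] ** 2)
refined = refine(grid, steep, 1.0)
```

Repeating the call on successive outputs would continue concentrating cells around the feature, which is the essence of the continuously varying resolution the abstract describes.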
Systematic literature reviews (SLRs) are a major tool for supporting evidence-based software engineering. Adapting the procedures involved in such a review to the needs of software engineering and its literature remains an ongoing process. As part of this refinement, we undertook two case studies that aimed 1) to compare targeted manual searches with broad automated searches and 2) to compare different methods of reaching a consensus on quality. For Case 1, we compared a tertiary study of systematic literature reviews, which used a manual search of selected journals and conferences, with a replication of that study based on a broad automated search. We found that broad automated searches find more studies than restricted manual searches, but many of the additional studies may be of poor quality. Researchers undertaking SLRs may be justified in using targeted manual searches if they intend to omit low-quality papers, or if they are assessing trends in research methodologies. For Case 2, we analyzed the process used to evaluate the quality of SLRs. We conclude that if quality evaluation of primary studies is a critical component of a specific SLR, assessments should be based on three independent evaluators and incorporate at least two rounds of discussion.
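The Case 2 recommendation (three independent evaluators, at least two rounds of discussion) can be sketched as a simple consensus loop. The scoring scale and the convergence rule below are illustrative assumptions, not the protocol from the study:

```python
def consensus(scores, rounds=2):
    """Return (final_score, resolved) for a panel of evaluators.

    scores: list of integer quality scores (e.g. 1-5), one per evaluator.
    In each discussion round, evaluators within one point of the median
    adopt it; larger disagreements persist and may leave the panel
    unresolved after the allotted rounds. (Hypothetical model of the
    discussion process, for illustration only.)
    """
    for _ in range(rounds):
        if len(set(scores)) == 1:
            return scores[0], True
        median = sorted(scores)[len(scores) // 2]
        scores = [median if abs(s - median) <= 1 else s for s in scores]
    return sorted(scores)[len(scores) // 2], len(set(scores)) == 1
```

A near-agreement such as `[2, 3, 3]` converges in one round, whereas a wide split such as `[1, 3, 5]` stays unresolved, which is the kind of case the recommended second discussion round is meant to surface.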