Objective: The goal of this systematic review was to examine the reporting quality of the method sections of quantitative systematic reviews and meta-analyses published from 2009 to 2016 in the field of industrial and organizational psychology, using the Meta-Analysis Reporting Standards (MARS), and to update previous research such as the studies of Aytug et al. (2012) and Dieckmann et al. (2009).
Methods: A systematic search for quantitative systematic reviews and meta-analyses was conducted in the top 10 journals in the field of industrial and organizational psychology between January 2009 and April 2016. Data were extracted on study characteristics and on the items of the method section of MARS. A cross-classified multilevel model was fitted to test whether publication year and journal impact factor (JIF) were associated with the reporting quality scores of articles.
Results: Compliance with MARS in the method section was generally inadequate in the random sample of 120 articles, and reporting varied across items. Neither publication year nor JIF had a significant effect on the reporting quality scores of articles.
Conclusions: The reporting quality of the method sections of systematic reviews and meta-analyses is still insufficient; we therefore recommend that researchers improve the reporting in their articles by using reporting standards such as MARS.
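The sketch below illustrates the kind of cross-classified multilevel model the abstract describes: articles classified simultaneously by journal and by publication year, with crossed random intercepts for both and fixed effects for the (centered) year trend and JIF. It is a minimal illustration, not the authors' analysis; the simulated data, variable names, and model specification are assumptions for demonstration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per article (the real study coded 120 articles)
rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "quality": rng.normal(50, 10, n),                      # reporting quality score
    "year": rng.integers(2009, 2017, n),                   # publication year, 2009-2016
    "jif": rng.uniform(1, 8, n),                           # journal impact factor
    "journal": rng.choice([f"J{i}" for i in range(10)], n) # top-10 journals
})
df["year_c"] = df["year"] - df["year"].mean()  # center the linear year trend
df["group"] = 1  # single group, so journal and year enter as crossed variance components

# Crossed random intercepts for journal and publication year;
# fixed effects test the associations of year and JIF with quality.
model = smf.mixedlm(
    "quality ~ year_c + jif",
    data=df,
    groups="group",
    vc_formula={"journal": "0 + C(journal)", "year": "0 + C(year)"},
)
result = model.fit()
print(result.summary())
```

With null simulated data like this, the fixed-effect estimates for year_c and jif should hover near zero, mirroring the abstract's finding of no significant effects.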
Experts' beliefs embody a present state of knowledge, and it is desirable to take this knowledge into account when conducting analyses or making decisions. Yet ranking experts based on the merit of their beliefs is a difficult task. In this paper we show how experts can be ranked based on their knowledge and their level of (un)certainty. By letting experts specify their knowledge in the form of a probability distribution, we can assess how accurately they predict new data and how appropriate their level of (un)certainty is. The expert's specified probability distribution can be seen as a prior in a Bayesian statistical setting. By extending an existing prior-data conflict measure to evaluate multiple priors, i.e., multiple experts' beliefs, we can compare experts with each other and with the data to evaluate their appropriateness. Using this method, new research questions can be asked and answered, for instance: Which expert predicts the new data best? Is there agreement between my experts and the data? Which expert's representation is more valid or useful? Can we reach convergence between expert judgement and data? We provide an empirical example ranking (regional) directors of a large financial institution based on their predictions of turnover.
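To make the idea concrete, here is a minimal sketch of a prior-data agreement ranking in the spirit of Bousquet's (2008) data agreement criterion (DAC), which this paper extends to multiple experts. It assumes each expert's belief about mean turnover is a normal prior, the data standard deviation is known, and a vague proper normal prior serves as the benchmark; the expert names, numbers, and conjugate-normal setup are illustrative assumptions, not the paper's actual data or code.

```python
import numpy as np

def kl_normal(mu1, sd1, mu2, sd2):
    """Closed-form KL divergence KL( N(mu1, sd1^2) || N(mu2, sd2^2) )."""
    return np.log(sd2 / sd1) + (sd1**2 + (mu1 - mu2)**2) / (2 * sd2**2) - 0.5

def posterior_normal(prior_mu, prior_sd, data, sigma):
    """Conjugate update for a normal mean with known data SD `sigma`."""
    n, xbar = len(data), np.mean(data)
    post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
    post_mu = post_var * (prior_mu / prior_sd**2 + n * xbar / sigma**2)
    return post_mu, np.sqrt(post_var)

# Hypothetical expert beliefs about mean turnover (%): (prior mean, prior SD)
experts = {"expert_A": (10.0, 1.0), "expert_B": (14.0, 4.0), "expert_C": (8.0, 0.5)}
bench_mu, bench_sd = 0.0, 100.0          # vague benchmark prior
sigma = 3.0                              # assumed known data SD
rng = np.random.default_rng(1)
data = rng.normal(11.0, sigma, size=50)  # stand-in for observed turnover data

# The posterior under the benchmark prior is the common yardstick.
post_mu, post_sd = posterior_normal(bench_mu, bench_sd, data, sigma)
denom = kl_normal(post_mu, post_sd, bench_mu, bench_sd)

# DAC per expert: KL(benchmark posterior || expert prior), normalized.
scores = {name: kl_normal(post_mu, post_sd, mu, sd) / denom
          for name, (mu, sd) in experts.items()}
for name, dac in sorted(scores.items(), key=lambda kv: kv[1]):
    flag = "conflict" if dac > 1 else "agreement"
    print(f"{name}: DAC = {dac:.2f} ({flag})")
```

Experts are ranked by ascending DAC: values below 1 indicate the expert's prior agrees with the data at least as well as the vague benchmark, while values above 1 flag prior-data conflict (here, the overconfident expert_C).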
Due to a coding error, the marginal likelihoods were not correctly calculated for the empirical example, and the Bayes factors following from these marginal likelihoods are therefore incorrect. The required corrections occur in Section 3.2 and in two paragraphs of the discussion in which the results are referred to. The corrections have limited consequences for the paper, and the main conclusions hold. Additionally, typos in the equations and an error in the numbering of the equations have been remedied.