This study explores how researchers in the organizational sciences use and cite methodological 'best practice' (BP) articles. Namely, are scholars adhering fully to the prescribed practices they cite, or are they cherry-picking from recommended practices without disclosing it? Or, worse yet, are scholars inaccurately following the methodological best practices they cite? To answer these questions, we selected three seminal and highly cited best practice articles published in Organizational Research Methods (ORM) within the past ten years. These articles offer clear and specific methodological recommendations for researchers as they make decisions regarding the design, measurement, and interpretation of empirical studies. We then gathered all articles that have cited these best practice pieces. Using comprehensive coding forms, we evaluated how authors use and cite best practice articles (e.g., whether they appropriately follow the recommended practices). Our results revealed substantial variation in how authors cited best practice articles, with 17.4% citing appropriately, 47.7% citing with minor inaccuracies, and 34.5% citing BP articles inappropriately. These findings shed light on the use (and misuse) of methodological recommendations, offering insight into how we can improve our digestion and implementation of best practices as we design and test research and theory. Key implications and recommendations for editors, reviewers, and authors are discussed.
Over the last couple of decades, studies using the experience sampling methodology (ESM) have appeared with increasing frequency within the management-related sciences, as the method allows researchers the opportunity to investigate questions involving ongoing, dynamic, intra-individual processes. Given the longitudinal nature of the methodology and the resulting multilevel data structure, there are sample- and measurement-related issues that make ESM studies different from other methods commonly used in management research. Consequently, ESM studies have demands for reporting sample- and measurement-related information that differ from those of more commonly used methods. In the current paper, we review the conceptual foundations of sample and measurement issues in ESM studies and report the findings of a survey of ESM studies to identify current reporting practices. We then offer clear, easy-to-implement recommendations for reporting sample- and measurement-related aspects of ESM studies. We hope that these recommendations will improve the reporting of ESM studies and allow readers the opportunity to more fully and comprehensively evaluate the research presented.

1. There are several variants of the ESM design, including daily diary studies and the event-contingent recording method. We have chosen to use the term ESM in this paper, though our comments and recommendations are relevant to the other designs as well.
The focal article by Köhler et al. (2020) develops a highly compelling competency framework for excellent reviewing. It is certainly a key piece of a robust and open science ecosystem (Banks et al., 2019). The framework can and should be used as a guide for novice and seasoned reviewers "in defining what they should pay attention to, upon what their review should touch, how they should word their review, what kind of advice they need to provide, and how they should overall go about peer reviewing" (Köhler et al., 2020, p. 17). Creating specific reviewer training around the competencies takes this framework to the next level by actively teaching reviewers how to implement the proposed guidelines. Developing these skills is a critical component for bettering the publication process by enabling reviewers to provide feedback to colleagues that is more organized, reliable, and helpful. The focal article, by design, has an individual focus. Thus, this commentary will serve as an extension to the focal article, pushing the call for better reviewing further by making the case for results-blind reviews (RBR) as an environmental mechanism that better supports the framework and training of competencies discussed in the focal article.

A conducive environment

Literature on transfer of training explores under what conditions knowledge and skills learned in training are applied in action (Baldwin & Ford, 1988). This area of research emphasizes the importance of situational cues and environmental factors, noting that these largely determine whether or not learned competencies are actually applied in the workplace (Blume, Ford, Baldwin, & Huang, 2010; Colquitt, Lepine, & Noe, 2000). It is our contention that the training of competencies alone may not be enough for successful transfer of reviewing skills, as situation and context matter to which behaviors individuals exhibit (Dalal, Alaybek, Sheng, Holland, & Tomassetti, in press).
Furthermore, the training focuses on explicit behaviors of reviewers, but does not target the implicit or unconscious biases that may exist in the system itself.

Bias in the dominant peer-review system

The current dominant peer-review system in the fields of industrial and organizational psychology (I-O), organizational behavior (OB), and human resources (HR) requires authors to submit full manuscripts, including results, to a journal. There is good reason to believe that manuscript results can bias reviewers' judgments toward the rest of the article. This, in turn, can undermine the efficacy of the focal article's framework. Research suggests that favoritism toward significant and novel findings in the organizational sciences is widespread.