Past research purporting to study employee resilience suffers from a lack of conceptual clarity about the resilience construct and from methodological designs that examine resilience without ensuring that significant adversity has actually occurred. The overall goal of this article is to address our contemporary understanding of employee resilience and identify pathways for the future advancement of resilience research in the workplace. We first address conceptual definitions of resilience both inside and outside of industrial and organizational psychology and make the case that researchers have generally failed to document the experience of significant adversity when studying resilience in working populations. Next, we discuss methods used to examine resilience, with an emphasis on distinguishing the capacity for resilience from the demonstration of resilience. Representative research is then reviewed by examining self-reports of resilience or resilience-related traits, along with research on resilient and nonresilient trajectories following significant adversity. We then briefly address the issues involved in selecting resilient employees and building resilience in employees. The article concludes with recommendations for future research studying resilience in the workplace, including documenting significant adversity among employees, assessing multiple outcomes, using longitudinal designs with theoretically supported time lags, broadening the study of resilience to people in occupations outside the military who may face significant adversity, and addressing the potential dark side of an emphasis on resilience.
In employee selection and academic admission decisions, holistic (clinical) data combination methods continue to be relied upon and preferred by practitioners in our field. This meta-analysis examined and compared the relative predictive power of mechanical methods versus holistic methods in predicting multiple work (advancement, supervisory ratings of performance, and training performance) and academic (grade point average) criteria. There was consistent and substantial loss of validity when data were combined holistically (even by experts who are knowledgeable about the jobs and organizations in question) across multiple criteria in work and academic settings. In predicting job performance, the difference between the validity of mechanical and holistic data combination methods translated into an improvement in prediction of more than 50%. Implications for evidence-based practice are discussed.
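The contrast the abstract draws can be made concrete: a mechanical method applies a fixed, explicit rule to combine predictor scores, whereas a holistic judgment weighs them impressionistically. The sketch below is illustrative only and is not drawn from the study; the predictor names and weights are hypothetical.

```python
# Illustrative sketch (not from the meta-analysis): a mechanical data
# combination rule forms a fixed, explicit composite of standardized
# predictor scores. Predictor names and weights are hypothetical.

def mechanical_composite(scores, weights):
    """Combine standardized predictor scores using fixed weights."""
    return sum(weights[k] * scores[k] for k in weights)

# One hypothetical applicant's standardized scores
applicant = {"cognitive_test": 1.2, "structured_interview": 0.5, "gpa": 0.8}
# Fixed weights, e.g. derived from a prior regression on the criterion
weights = {"cognitive_test": 0.5, "structured_interview": 0.3, "gpa": 0.2}

print(round(mechanical_composite(applicant, weights), 3))  # 0.91
```

Because the rule is explicit, every applicant is scored identically and the composite's validity can be checked against outcome data, which is the property the meta-analysis credits for the mechanical approach's advantage.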
There are currently 89 pharmacy programs in the United States, and each is confronted with evaluating a large number of applicants each year. Given the importance of producing effective professionals for the health and wellbeing of the public, selecting top-quality students who will master their training is of critical importance. The Pharmacy College Admission Test (PCAT) is a standardized test used by pharmacy programs to select students. The PCAT is considered by most pharmacy programs, and in 2003 it was required by 51 pharmacy programs as a piece of information for making admissions decisions. 1 The PCAT has been used since 1974, but not without controversy. Opinions are mixed about its effectiveness, and scholars have variously argued either in favor of or against the use of the PCAT. 2 Positions against the PCAT run counter to the stance of the American Association of Colleges of Pharmacy (AACP), which endorses the use of PCAT scores as a part of pharmacy admissions decisions. 3 This mix of opinions is understandable given the range of validity study findings reported in the literature. Correlations between PCAT scores and GPA have ranged from a low of r = -0.09 to a high of r = 0.68. 4 Unfortunately, many of the validity studies have employed small samples from programs with highly selective admissions policies. Of critical importance is the predictive validity of the PCAT, the validity of alternative predictors (ie, prepharmacy grades and the SAT), and the investigation of the sources of correlation variability across studies. Addressing all of these issues is the objective of this study. The PCAT was first used on a national level in 1975. 5 In the fall of 2004, some of its content and structure was altered. The PCAT now includes an essay portion. The verbal section now contains sentence completion items and no longer specifically tests one's knowledge of antonyms.
The biology section now includes items intended to assess knowledge of microbiology, and the quantitative section now includes precalculus and calculus. The number of verbal, biology, and reading comprehension items has been increased, while the quantitative and chemistry sections have been reduced. Overall, the total number of multiple-choice items has decreased from approximately 300 to 280. These changes notwithstanding, the PCAT continues to be a measure of ability and knowledge, with multiple-choice items spread across the following 5 domains: verbal ability, biology, chemistry, reading comprehension, and quantitative ability.
Objectives. To compare the validity of the Pharmacy College Admission Test (PCAT) and prepharmacy grade point average (GPA) in predicting performance in pharmacy school and on professional licensing examinations.
Methods. The Hunter and Schmidt psychometric meta-analytic method was used to quantitatively aggregate results across previous studies of the validity of the PCAT. Relevant research articles were gathered from multiple databases. Correlations between the PCAT and GPAs or individual course grades were the most commonly presented data.
Results. The PCAT and prepharmacy GPA were...
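The Hunter and Schmidt approach named in the Methods section begins with a "bare-bones" step: computing a sample-size-weighted mean correlation and asking how much of the between-study variability is mere sampling error. The sketch below illustrates that step only; the study values are invented, and the full method adds further corrections (e.g., for unreliability and range restriction) not shown here.

```python
# Minimal sketch of the "bare-bones" step of a Hunter-Schmidt
# psychometric meta-analysis. Study (N, r) pairs are hypothetical.

def bare_bones(studies):
    """studies: list of (sample_size, correlation) pairs.

    Returns the N-weighted mean correlation, the N-weighted observed
    variance of correlations, and the expected sampling-error variance.
    """
    total_n = sum(n for n, _ in studies)
    r_bar = sum(n * r for n, r in studies) / total_n
    var_obs = sum(n * (r - r_bar) ** 2 for n, r in studies) / total_n
    # Expected variance due to sampling error alone, using average N
    avg_n = total_n / len(studies)
    var_err = (1 - r_bar ** 2) ** 2 / (avg_n - 1)
    return r_bar, var_obs, var_err

studies = [(50, 0.10), (120, 0.35), (80, 0.45), (200, 0.30)]
r_bar, var_obs, var_err = bare_bones(studies)
print(f"weighted mean r = {r_bar:.3f}")
print(f"share of observed variance from sampling error = {var_err / var_obs:.2f}")
```

If sampling error accounts for most of the observed variance, the validity coefficient is treated as generalizing across settings, which is exactly the question the abstract raises about variability in PCAT findings across small, selective samples.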
Given the serious consequences of making ill-fated admissions and funding decisions for applicants to graduate and professional school, it is important to rely on sound evidence to optimize such judgments. Previous meta-analytic research has demonstrated the generalizable validity of the GRE® General Test for predicting academic achievement. That research does not address predictive validity for specific populations and situations, or the predictive validity of the GRE Analytical Writing section introduced in October 2002. Furthermore, much of the past GRE predictive validity research relies primarily on correlational, univariate approaches. Stakeholders familiar with GRE predictive validity mainly in the form of zero-order correlation coefficients might automatically interpret the usefulness of the GRE solely through the prism of Cohen's (1988) guidelines for judging effect sizes, without regard to the larger context. However, by using innovative and multivariate approaches to conceptualize and measure GRE predictive validity within the larger context, our investigation reveals the substantial value of the GRE General Test, including its Analytical Writing section, for predicting graduate school grades.
Unlike previous research that found small differences between population standard deviations and applicant pool standard deviations (P. R. Sackett & D. J. Ostgaard, 1994; D. S. Ones & C. Viswesvaran, 2003), this study revealed a 23% disparity between the standard deviation of Law School Admission Test (LSAT) scores among all LSAT test takers and that among LSAT test takers who applied to law school. This study also illustrated robust applicant self-selection behavior across different law school ranks. These findings are important because predictor scores of applicants who know their scores in advance and perceive small selection ratios necessitate substantially smaller range restriction corrections than those that would be required by population standard deviations. Furthermore, these findings more generally reveal that applicants who know their scores in advance behave quite differently from applicants who do not.
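The practical stake of that 23% disparity can be illustrated with the standard direct range restriction correction (Thorndike's Case II), which inflates a restricted-sample correlation by the ratio u of the unrestricted to the restricted standard deviation. The numbers below are invented for illustration and are not taken from the study.

```python
# Hypothetical illustration of why the choice of reference standard
# deviation matters for range restriction corrections. Values invented.

def correct_range_restriction(r_restricted, u):
    """Thorndike Case II correction for direct range restriction.

    u = SD_unrestricted / SD_restricted (u > 1 when range is restricted).
    """
    return (u * r_restricted) / (1 + (u ** 2 - 1) * r_restricted ** 2) ** 0.5

r = 0.30  # hypothetical correlation observed in an enrolled (restricted) sample

# Correcting with the SD of all test takers (larger u) yields a larger
# corrected validity than correcting with the applicant-pool SD, which
# per the study is about 23% smaller.
r_all_takers = correct_range_restriction(r, u=1.5)
r_applicants = correct_range_restriction(r, u=1.5 * 0.77)
print(f"corrected r (all test takers' SD): {r_all_takers:.3f}")
print(f"corrected r (applicant-pool SD):   {r_applicants:.3f}")
```

The gap between the two corrected values shows why using population standard deviations when applicants self-select would overcorrect, which is the study's core caution.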