Employees face a variety of work demands that place a premium on personal attributes, such as the degree to which they can be depended on to work independently, deal with stress, and interact positively with coworkers and customers. We examine evidence for the importance of these personality attributes using research strategies intended to answer three fundamental questions: (a) How well does employees' standing on these attributes predict job performance? (b) What types of attributes do employers seek to evaluate in interviews when considering applicants? (c) What types of attributes are rated as important for performance in a broad sampling of occupations across the U.S. economy? We summarize and integrate results from these three strategies using the Big Five personality dimensions as our organizing framework. Our findings indicate that personal attributes related to Conscientiousness and Agreeableness are important for success across many jobs, spanning low to high levels of job complexity, training, and experience necessary to qualify for employment. The strategies lead to differing conclusions about the relative importance of Emotional Stability and Extraversion. We note implications for job seekers, for interventions aimed at changing standing on these attributes, and for employers.
One of the typical roles of industrial–organizational (I-O) psychologists working as practitioners is administering employee surveys measuring job satisfaction/engagement. Traditionally, this work has involved developing (or choosing) the items for the survey, administering the items to employees, analyzing the data, and providing stakeholders with summary results (e.g., percentages of positive responses, item means). In recent years, I-O psychologists have moved into uncharted territory via the use of survey key driver analysis (SKDA), which aims to identify the most critical items in a survey for action planning purposes. Typically, this analysis involves correlating (or regressing) a self-report criterion item (e.g., "considering everything, how satisfied are you with your job") with (or on) each of the remaining survey items in an attempt to identify which items are "driving" job satisfaction/engagement. It is also possible to use an index score (i.e., a scale score formed from several items) as the criterion instead of a single item. Because the criterion measure (whether a single item or an index) is internal to the survey from which the predictors are drawn, this practice is distinct from linkage research. This methodology is not widely covered in survey methodology coursework, and there are few peer-reviewed articles on it. Yet, a number of practitioners are marketing this service to their clients. In this focal article, we, a group of practitioners with extensive applied survey research experience, identify several methodological issues with SKDA. Data from a large multiorganizational survey are used to support claims about these issues. One issue is that SKDA ignores the psychometric reality that item standard deviations affect which items are chosen as drivers. Another issue is that the analysis ignores the factor structure of survey item responses.
Furthermore, conducting this analysis each time a survey is administered conflicts with the lack of situational and temporal specificity in survey results. Additionally, it is problematic to imply causal relationships from the correlational data seen in most surveys. Most surprisingly, randomly choosing items out of a hat yields validities similar to those obtained from conducting the analysis. These issues, in concert with the scarcity of literature examining the practice, make a rigorous evaluation of SKDA a timely inquiry. We therefore recommend that survey providers stop conducting SKDA until they can produce science that supports the practice.
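The core SKDA procedure described above can be sketched in a few lines. The following is a minimal illustration using simulated data (all values and the single-factor structure are hypothetical, chosen only to show the mechanics): each remaining item is correlated with a criterion item, and the items with the largest absolute correlations are nominated as "drivers." When items share a common factor, the correlations cluster tightly, so the ranking is driven largely by sampling error and item standard deviations, which is one of the abstract's critiques.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated survey: 500 respondents, 10 items on a 1-5 scale that all
# load on a single common factor (hypothetical data, for illustration).
n, k = 500, 10
factor = rng.normal(size=n)
items = np.clip(np.round(3 + factor[:, None] + rng.normal(size=(n, k))), 1, 5)

criterion = items[:, 0]    # e.g., an "overall satisfaction" item
predictors = items[:, 1:]  # the remaining survey items

# SKDA in its simplest form: correlate the criterion with each item
# and rank items by |r| to nominate "key drivers."
r = np.array([np.corrcoef(criterion, predictors[:, j])[0, 1]
              for j in range(predictors.shape[1])])
ranked = np.argsort(-np.abs(r))  # item indices, strongest correlation first

print("item-criterion correlations:", np.round(r, 2))
print("nominated 'drivers' (item indices):", ranked[:3])
```

Because all items here measure the same underlying factor, the correlations are nearly interchangeable, and a different random sample would nominate a different set of "drivers."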
Little is known about the reliability of college grades relative to how prominently they are used in educational research, and the results to date tend to be based on small sample studies or are decades old. This study uses two large databases (N > 800,000) from over 200 educational institutions spanning 13 years and finds that both first-year and overall college GPA can be expected to be highly reliable measures of academic performance, with reliability estimated at .86 for first-year GPA and .93 for overall GPA. Additionally, reliabilities vary moderately by academic discipline, and within-school grade intercorrelations are highly stable over time. These findings are consistent with a hierarchical structure of academic ability. Practical implications for decision making and measurement using GPA are discussed.
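The abstract does not detail its estimation method, but one standard way to see why a GPA built from many courses can be highly reliable even when individual course grades correlate only modestly is the Spearman-Brown stepped-up reliability of a composite. The sketch below is illustrative only; the mean intercorrelation and course counts are hypothetical, not figures from the study.

```python
def composite_reliability(mean_r: float, k: int) -> float:
    """Spearman-Brown stepped-up reliability of a k-component composite
    (e.g., a GPA averaged over k course grades), given the mean
    correlation among the components."""
    return (k * mean_r) / (1 + (k - 1) * mean_r)

# Hypothetical values: with a modest mean intercorrelation among course
# grades, reliability rises as more courses enter the composite.
print(round(composite_reliability(0.30, 10), 2))  # a first-year-sized GPA
print(round(composite_reliability(0.30, 40), 2))  # an overall-GPA-sized composite
```

This aggregation logic is consistent with the abstract's pattern of overall GPA (more courses) being more reliable than first-year GPA.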
We examine 123 data sets from validation studies of a single five‐factor model‐based occupational personality measure for evidence of curvilinear relationships with job performance. Research has produced discrepant findings about whether and when to expect curvilinear relationships between normal range personality measures and job performance. Previous studies have relied on small and unsystematic sampling, a variety of noncomparable performance criteria, the use of personality inventories for which construct validity evidence is not immediately available, and a focus on only one or two of the Big Five personality factors. We report minimal evidence of curvilinearity, suggesting that these effects are unlikely to undermine typical uses of personality test scores in decision making. Any expected declines in performance at high ends of the predictor range were very small on average and would be highly unlikely to produce scenarios in which those passing a realistic cut score would underperform those screened out. Indices of job complexity and the importance of the personality trait did not moderate the forms of each personality–performance relationship. The results are useful for evaluating whether curvilinearity is likely to be an issue when self‐report personality assessments are used to make decisions with tangible employment consequences.
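A common way to test for the curvilinearity discussed above is hierarchical polynomial regression: fit a linear model of performance on the personality score, add a squared term, and check how much the fit improves. The sketch below uses simulated data with a purely linear relationship (all parameters are hypothetical); the near-zero R-squared gain from the quadratic term mirrors the abstract's finding of minimal curvilinearity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: personality score x and job performance y with a
# purely linear relationship plus noise.
n = 1000
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n)

# Design matrices for the linear and quadratic (curvilinear) models.
X_lin = np.column_stack([np.ones(n), x])
X_quad = np.column_stack([np.ones(n), x, x**2])

def r2(X, y):
    """R-squared from an ordinary least-squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Incremental variance explained by the squared term.
delta_r2 = r2(X_quad, y) - r2(X_lin, y)
print(f"R^2 gain from the quadratic term: {delta_r2:.4f}")
```

A meaningful curvilinear effect would show up as a substantively large, replicable gain here; a trivial gain of the kind this simulation produces is what the abstract reports for most personality-performance relationships.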