Personnel selection involves exchanges of information between job market actors (applicants and organizations). These actors do not have an incentive to exchange accurate information about their ability and commitment to the employment relationship unless it is to their advantage. This state of affairs explains numerous phenomena in personnel selection (e.g., faking). Signaling theory describes a mechanism by which parties with partly conflicting interests (and thus an incentive for deception) can nevertheless exchange accurate information. We apply signaling theory to personnel selection, distinguishing between adaptive relationships between applicants and organizations, among applicants, and among organizations. In each case, repeated adaptations and counteradaptations between actors can lead to situations of equilibrium or escalation (arms races). We show that viewing personnel selection as a network of adaptive relationships among job market actors enables an understanding of both classic and underexplored micro- and macro-level selection phenomena and their dynamic interactions.
Applicant use of impression management (IM) tactics plays a central role in employment interviews. IM includes behaviors intended to create an impression of competence and likability and to avoid negative impressions. Applicants can influence interviewers’ impressions using both honest and deceptive IM, but measurement of IM has yet to distinguish these two constructs. The goal of the present research was to develop a self‐report Honest Interview Impression Management (HIIM) measure and use it to investigate differential antecedents and consequences of honest and deceptive IM. We report the results of five independent studies (total N = 1,470 interviewees). Studies 1–3 detail the creation of a self‐report measure of honest IM. Studies 4 and 5 utilize this measure to understand the relations between honest and deceptive IM, and their antecedents and consequences. Results demonstrate that honest and deceptive IM are positively related but distinct constructs that have unique antecedents (i.e., age, individual differences, attitudes, and situational and target characteristics) and differentially impact interview outcomes and ratings. Finally, we present a short measure of honest and deceptive IM to be used for time‐sensitive data collection.
In recent years, several authors have proposed theoretical models of faking in selection. Although these models have greatly improved our understanding of applicant faking, they mostly offer static approaches. In contrast, we propose a model of applicant faking derived from signaling theory, which describes faking as a dynamic process driven by applicants' and organizations' adaptations in a competitive environment. We argue that faking depends on applicants' motivation and capacity to fake, which are determined by individual differences in skills, abilities, and stable attitudes, as well as by perceptions of the competition, but also on applicants' perceived opportunities versus risks to fake, which are contingent upon organizations' measures to increase the costs of faking. We further explain how selection outcomes can trigger adaptations by applicants, such as faking in subsequent selection encounters, and by organizations, such as changes in measures making faking costly for applicants in the long term.
Various surveys suggest LinkedIn is used as a screening and selection tool by many hiring managers. Despite this widespread use, fairly little is known about whether LinkedIn meets established selection criteria, such as reliability, validity, and legality (i.e., no adverse impact). We examine the properties of LinkedIn‐based assessments in two studies. Study 1 shows that raters reach acceptable levels of consistency in their assessments of applicant skills, personality, and cognitive ability. Initial ratings also correlate with subsequent ratings made 1 year later (i.e., demonstrating temporal stability), with slightly higher correlations when profile updates are taken into account. Initial LinkedIn‐based ratings correlate with self‐reports for more visible skills (leadership, communication, and planning) and personality traits (Extraversion), and for cognitive ability. LinkedIn‐based hiring recommendations are positively associated with indicators of career success. Potential adverse impact is also limited. Profiles that are longer, include a picture, and have more connections are rated more positively. Some of those features are valid cues to applicants’ characteristics (e.g., applicants high on Conscientiousness have longer profiles). In Study 2, we show that an itemized LinkedIn assessment is more effective than a global assessment. Implications of these findings for selection and future research are discussed.
Applicants use honest and deceptive impression management (IM) in employment interviews. Deceptive IM is especially problematic because it can lead organizations to hire less competent but deceptive applicants if interviewers are not able to identify the deception. We investigated interviewers’ capacity to detect IM in five experimental studies using real‐time video coding of IM (N = 246 professional interviewers and 270 novice interviewers). Interviewers’ attempts to detect applicants’ IM were often unsuccessful. Interviewers were better at detecting honest than deceptive IM. Interview question type affected IM detection, but interviewers’ experience did not. Finally, interviewers’ perceptions of IM use by applicants were related to their evaluations of applicants’ performance in the interview. Interviewers’ attempts to adjust their evaluations of applicants they perceive to use deceptive IM may fail because they cannot correctly identify when applicants actually engage in various IM tactics. Helping interviewers to better identify deceptive IM tactics used by applicants may increase the validity of employment interviews.