Although self-rated or self-scored selection measures are commonly used in selection contexts, they are potentially susceptible to applicant response distortion or faking. The response elaboration technique (RET), which requires job applicants to provide supporting information to justify their responses, has been identified as a potential way to minimize applicant response distortion. In a large-scale, high-stakes selection context (N = 16,304), we investigate the extent to which RET affects responding on a biodata test, as well as the underlying reasons for any potential effect. We find that asking job applicants to elaborate on their responses leads to overall lower scores on a biodata test. Item verifiability affects the extent to which RET decreases faking, which we suggest is due to increased accountability. In addition, verbal ability was more strongly related to biodata item scores when items required elaboration, although the effect of verbal ability was small. The implications of these findings for reducing faking in personnel selection are delineated.

There are a variety of methods that can be used to collect information about job applicants in order to make selection decisions. Some measures (e.g., cognitive ability measures, job knowledge tests) have objectively correct answers and ask applicants to demonstrate what is being measured. Other measures (e.g., job interviews, assessment centers) have answers judged subjectively by third parties (such as interviewers or assessors), in which applicants are asked to either describe or demonstrate what is being measured. Another class of measures (e.g., personality tests, biodata measures) has answers rated subjectively by the applicants themselves (i.e., self-ratings), in which applicants are asked to self-assess, self-rate, or self-score on what is being measured.