The creation of abridged versions of research tools is a common and justifiable practice, but it is often carried out without due methodological care or regard for the consequences. Smith and colleagues (2000) have already described the mistakes that can be made, but their article has had little practical impact. Researchers commonly make two main mistakes: assuming that validity and reliability transfer from the full version to the shortened version, and applying less stringent criteria when assessing the validity and reliability of short forms. These two problems manifest as nine sins committed during the construction of short forms. Here we present procedures designed to help researchers avoid these mistakes and create abridged versions of research tools that are as reliable as possible, and to assess the costs of the various methods of abridging questionnaires. To this end, we determine the expected length of the tool and weigh the benefits of reduced questionnaire completion time against the loss of reliability. We also estimate the shared variance of the full and short versions and the classification accuracy of the new, short version. We compared the quality of short forms obtained using the three most common statistical techniques for abridging questionnaires. We analysed data from a sample of 519 persons; 309 (59.5%) completed the paper version of the Self-Narrative Inclination Questionnaire (IAN-R), and 210 (40.5%) participated in online tests. Abridgements based on factor loadings and on Cronbach's α were similarly effective, and these two methods had a slight advantage over a method based on item response theory-based analyses of difficulty and discriminatory power.
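One of the abridgement techniques compared above, item selection based on Cronbach's α, can be illustrated with a minimal sketch. The function names and the greedy "drop the item whose removal costs the least α" procedure below are illustrative assumptions for exposition, not the authors' exact algorithm.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def alpha_if_deleted(items: np.ndarray) -> list[float]:
    """Alpha of the scale with each item removed in turn.

    Items whose removal raises (or barely lowers) alpha are
    candidates for dropping when building a short form.
    """
    k = items.shape[1]
    return [cronbach_alpha(np.delete(items, j, axis=1)) for j in range(k)]

def shorten_scale(items: np.ndarray, target_len: int) -> list[int]:
    """Greedily drop items until only target_len remain.

    At each step, remove the item whose deletion leaves the
    highest alpha, i.e. the item contributing least to reliability.
    Returns the column indices of the retained items.
    """
    keep = list(range(items.shape[1]))
    while len(keep) > target_len:
        alphas = alpha_if_deleted(items[:, keep])
        keep.pop(int(np.argmax(alphas)))
    return keep

# Illustrative use on synthetic unidimensional data (not the IAN-R data):
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                      # one latent trait
scores = latent + rng.normal(scale=0.8, size=(200, 10)) # 10 noisy indicators
kept = shorten_scale(scores, target_len=5)
print(cronbach_alpha(scores), cronbach_alpha(scores[:, kept]))
```

Comparing the α of the full and shortened scales, as in the final line, is one way to quantify the reliability cost of abridgement discussed above; the same loop could instead rank items by factor loadings or IRT discrimination parameters.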