The adequate measurement of abstract constructs is perhaps the greatest challenge to understanding the behavior of people in organizations. Problems with the reliability and validity of measures used on survey questionnaires continue to lead to difficulties in interpreting the results of field research. Price and Mueller suggest that measurement problems may be due to the lack of a well-established framework to guide researchers through the various stages of scale development. This article provides a conceptual framework and a straightforward guide for the development of scales in accordance with established psychometric principles for use in field studies.
Questionnaires are the most commonly used method of data collection in field research (Stone, 1978). Problems with the reliability and validity of measures used on questionnaires have often led to difficulties in interpreting the results of field research (Cook, Hepworth, Wall & Warr, 1981; Schriesheim, Powers, Scandura, Gardiner & Lankau, 1993). This article reviews scale development procedures for 277 measures used in 75 articles published in leading academic journals from 1989 to 1994. It points out some of the problems encountered and provides examples of what could be considered “best practices” in scale development and reporting. Based on the review, recommendations are made to improve the scale development process.
The purpose of this paper is to describe the process for developing reliable and valid measurement instruments that can be used in any hospitality industry field research setting. Many instances exist in which the researcher cannot find an adequate or appropriate existing scale to measure an important construct. In these situations it is necessary to create a new scale. Failure to carefully develop a measurement instrument can result in invalid and uninterpretable data. Hence, a systematic seven-step process is outlined here to assist researchers in devising usable scales. Examples from the authors' own research are used to illustrate some of the steps in the process.
Although procedures for assessing content validity have been widely publicized for many years, Hinkin noted that there continue to be problems with the content validity of measures used in organizational research. Anderson and Gerbing, and Schriesheim, Powers, Scandura, Gardiner, and Lankau discussed the problems associated with typical content validity assessment and presented techniques that can be used to assess the empirical distinctiveness of a set of survey items. This article reviews these techniques and presents an analysis of variance procedure that can provide a higher degree of confidence in determining item integrity and scale content validity. The utility of this technique is demonstrated by using two samples and two different measures.
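The analysis of variance procedure described above can be sketched in code. In this kind of content-adequacy task, raters score how well each candidate item matches each of several construct definitions, and a one-way ANOVA tests whether an item is rated significantly higher on its intended construct than on rival constructs. The sketch below is a minimal, hypothetical illustration of that logic; the function name, the two-definition setup, and all rating data are illustrative assumptions, not material from the articles themselves.

```python
# Hypothetical sketch of an ANOVA-based content-adequacy check:
# raters score how well one item matches each construct definition
# (e.g. 1 = not at all, 5 = completely), and a one-way ANOVA across
# definitions tests whether the item is rated significantly higher
# on its intended construct. All names and data here are illustrative.

def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over lists of ratings."""
    k = len(groups)                               # number of definitions
    n = sum(len(g) for g in groups)               # total ratings
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-groups sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: spread of ratings around their group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Ratings of one candidate item against two construct definitions
# (hypothetical data for ten raters).
intended = [5, 4, 5, 4, 5, 4, 5, 5, 4, 5]    # fit to the intended construct
alternate = [2, 1, 2, 2, 1, 3, 2, 1, 2, 2]   # fit to a rival construct

f_stat = one_way_anova_f([intended, alternate])
print(round(f_stat, 1))  # a large F suggests the item discriminates well
```

A large F statistic (checked against the appropriate F distribution) indicates that the item's ratings differ reliably across definitions; items rated highest on their intended construct would be retained, which is the sense in which the procedure supports judgments of item integrity.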