The assessment of expertise is vital both in practical situations that call for expert judgment and in theoretical research on the psychology of experts. It can be difficult, however, to determine whether a judge is in fact performing expertly. Our goal was to develop an empirical measure of expert judgment. We argue that two necessary characteristics of expertise are discrimination of the various stimuli in the domain and consistent treatment of similar stimuli. We combine measures of these characteristics to form a ratio we call the Cochran-Weiss-Shanteau (CWS) index of expertise. The proposed index was demonstrated using two studies that distinguished experts from nonexperts based on their judgmental performance. The index provides new insights into expertise and offers a partial definition of expertise that may be useful in a variety of theoretical and applied settings. Potential applications of this research include selection, training, and evaluation of experts and of expert-machine systems.
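The abstract describes CWS as a ratio combining discrimination (treating different stimuli differently) and consistency (treating repeated stimuli the same). A minimal sketch of that ratio, assuming discrimination is measured as variance of mean judgments across distinct stimuli and inconsistency as average variance across repeated presentations of the same stimulus; the function name and data layout here are illustrative, not taken from the paper:

```python
import statistics

def cws_index(judgments):
    """CWS-style expertise ratio: discrimination / inconsistency.

    `judgments` maps each stimulus to the list of ratings a judge gave
    on repeated presentations of that stimulus.
    """
    means = [statistics.mean(ratings) for ratings in judgments.values()]
    # Discrimination: variance of the mean judgment across distinct stimuli.
    discrimination = statistics.pvariance(means)
    # Inconsistency: average within-stimulus variance over repeats.
    inconsistency = statistics.mean(
        statistics.pvariance(ratings) for ratings in judgments.values()
    )
    return discrimination / inconsistency
```

Under this sketch, a judge who rates two stimuli far apart and nearly identically on repeats scores high, while a judge whose repeat ratings scatter as much as the stimuli differ scores near zero.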
To estimate frequencies of behaviors not carried out in public view, researchers generally must rely on self‐report data. We explored 2 factors expected to influence the decision to reveal: (a) privacy (anonymity vs. confidentiality) and (b) normalization (providing information so that a behavior is reputedly commonplace or rare). We administered a questionnaire to 155 undergraduates. For 79 respondents, we had corroborative information regarding a negative behavior: cheating. The privacy variable had an enormous impact; of those who had cheated, 25% acknowledged having done so under confidentiality, but 74% admitted the behavior under anonymity. Normalization had no effect. There were also dramatic differences between anonymity and confidentiality on some of our other questions, for which we did not have validation.

This study demonstrated, through multiple levels of analysis, that little transcriptional similarity exists between rat MIA- and human OA-derived cartilage. As disease-modulatory activities of potential therapeutic agents often do not translate from animal models to human disease, this and similar studies may provide a basis for understanding the discrepancies.
The Ss estimated the average of several lengths presented serially one at a time. In Exp. I, the judgment was made only at the end of the sequence. In Exp. II, S estimated a cumulative average as each new length was presented. The main phases of these experiments used sequences of six lengths. For the most part, each S's data could be described by a subjective averaging model as tested in single-S analyses. There was a general recency effect, the later lengths in the sequence having greater influence. Recency was fairly uniform across Ss with the end-responding procedure of Exp. I, but large individual differences in the serial position curves appeared with the continuous-responding procedure of Exp. II. In Exp. III, two hypotheses about the cause of recency were tested, but received little support. Functional measurement technique showed that subjective length differed from objective length, apparently by a constant error for each S. It was noted that the present methods could be applied to psychophysical scaling of other stimulus dimensions.
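The averaging model described above, with serial-position weights that grow toward the end of the sequence, can be sketched as follows. This is an illustrative weighted-mean formulation, not the paper's fitted model; the weight values are assumptions chosen only to show a recency effect:

```python
def weighted_average_judgment(lengths, weights):
    """Serial averaging model: the judged average is a weighted mean of
    the presented values, with one weight per serial position.
    Larger weights at later positions produce a recency effect."""
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, lengths)) / total

# Equal weights recover the ordinary mean; increasing weights pull the
# judgment toward the later (more recent) lengths in the sequence.
lengths = [10, 20, 30]
equal = weighted_average_judgment(lengths, [1, 1, 1])
recency = weighted_average_judgment(lengths, [1, 2, 3])
```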
Experts who judge people usually provide opinions. It can be challenging to evaluate the professional performance of those experts, because for many domains there is no applicable external standard against which to verify the opinions. We review traditional methods for assessment and propose the purely empirical CWS approach as an alternative. Expert judgment entails discriminating among the various stimuli within the domain as well as being consistent when judging similar stimuli. We combine observed measures of these two components to form a ratio that we call the CWS index of expertise. We demonstrate the value of the index in an analysis of prioritization judgments made by occupational therapy students before and after they received specific training. The students' CWS scores improved considerably after training. The promise of the index as a selection tool is supported by the positive correlation of pre-training scores with both post-training scores and course grades.