Data from the California Learning Assessment System are used to examine certain characteristics of tests composed of items of different modes: rater severity, test information, and the definition of the latent variable. Three assessment modes (multiple-choice, open-ended, and investigation items; the latter two are referred to as performance-based modes) were combined in a test across three different test forms. Rater severity was investigated by incorporating a parameter for each rater into the item response model used to analyze the data. Some rater severities were found to be quite extreme, and the impact of this variation on both total scores and trait level estimates was examined. Within-rater variation in severity was also examined and found to be significant. The information contributions of the three modes were compared: performance-based items provided more information than multiple-choice items and offered the greatest precision at higher levels of the latent variable. A projection-like method was applied to investigate the effects of assessment mode on the definition of the latent variable. The multiple-choice items added information to the performance-based variable, and the results of the projection-like method did not differ practically from those obtained when the latent trait was defined jointly by the multiple-choice and performance-based items. Index terms: equating, linking, multiple assessment modes, polytomous item response models, rater effects.

Multiple-choice (MC) items have been used widely in psychological and educational testing for many years. Administrative convenience and computerized scoring make them very attractive. However, MC items have been criticized as inadequate for fully assessing examinees' abilities. Moreover, test-wiseness may seriously contaminate the measurement.
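The rater-severity approach described in the abstract, adding a rater parameter to an item response model, can be illustrated with a minimal sketch. This assumes a dichotomous facets-style Rasch formulation, which is a common way to model rater severity; the function names and parameter values below are hypothetical and not the paper's exact specification:

```python
import math

def rasch_rater_prob(theta, delta, rho):
    """Probability of a positive score under a dichotomous
    facets-style Rasch model: theta is examinee ability,
    delta is item difficulty, and rho is rater severity
    (a more severe rater has a larger rho, which lowers
    the probability of awarding the score)."""
    return 1.0 / (1.0 + math.exp(-(theta - delta - rho)))

def item_information(theta, delta, rho):
    """Fisher information of the item-rater combination at
    ability theta; for a dichotomous Rasch item this is
    p * (1 - p), maximal where p = 0.5."""
    p = rasch_rater_prob(theta, delta, rho)
    return p * (1.0 - p)

# Two hypothetical raters scoring the same examinee (theta = 0.5)
# on the same item (delta = 0.0):
severe = rasch_rater_prob(0.5, 0.0, 1.0)    # severe rater, rho = 1.0
lenient = rasch_rater_prob(0.5, 0.0, -1.0)  # lenient rater, rho = -1.0
```

Under this formulation, a severe and a lenient rater assign different expected scores to the same examinee on the same item, which is exactly why unmodeled variation in rater severity distorts total scores and trait estimates.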
Recently, there has been increased interest in performance-based (PB) items (or constructed-response items) as an alternative to MC items. A PB item is any item format that requires the examinee to generate a response rather than select from a short list of alternative answers as in MC items (Pollack, Rock, & Jenkins, 1992). The different response formats, such as MC items and the many types of PB items, are referred to here as different assessment modes. The main advantages of PB items are that: (1) they provide a more direct representation of content specifications (face validity and content validity), (2) their responses provide more diagnostic information about examinees' learning difficulties, (3) examinees prefer them to MC items, and (4) the format may stimulate the teaching of important skills, such as problem solving and essay writing (Grima & Liang, 1992). However, MC items are more economical to score and have well-established patterns of reliability. PB items are more difficult to score objectively and re...