Reading comprehension items are valid to the extent that they measure what subjects have understood of the stimulus material. This article reports an empirical analysis of two administrations of two reading tests: the first time, without the reading passages, and the second time, with the passages. Data from the two administrations were used to calculate the passage dependency of each test, that is, the extent to which correctly answering the questions depends on reading the texts upon which they are based. The two tests in this research, the Davis Reading Test (Davis and Davis 1956) and the Cooperative English Tests (Educational Testing Service 1960), exhibited little passage dependency. The stability of item types across the two presentation conditions is discussed, and a hierarchy of item-type difficulty is established using difficulty logits from latent trait measurement. The results of this study suggest that classroom teachers should examine commercially available tests carefully for passage dependency. Furthermore, in constructing reading tests, teachers should avoid writing items that test general knowledge. Instead, teachers should strive to write items that test memory organization and that reveal whether inferences have been drawn.