“…When systematic evaluations are reported, the measures of quality are usually based on fairly crude error counts, in much the same way as OPAC studies have often defined search success in terms of numbers of hits (regardless of precision and recall ratios).1,2 The two most commonly used measures of cataloging quality are "level," that is, the amount of data contained in records, or claimed to be contained, and error rate, that is, the number of errors, or types of error, found per record. Omissions, and partial omissions, are usually counted as errors, so that the error rate can then be determined against a particular standard: for example, the data that should be present in a "full level" record for a particular resource. "Full level" is defined ultimately by the cataloging agency, but usually based on external standards, such as those set out by a cooperative cataloging network.…”
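The error-rate measure described in the passage can be made concrete with a short sketch. This is an illustration only, not any agency's actual procedure: the field names and the "full level" standard below are hypothetical, and both full omissions (missing fields) and partial omissions (empty fields) are counted as errors, as the passage describes.

```python
# Hypothetical "full level" standard: fields a record should contain.
FULL_LEVEL_STANDARD = {"title", "author", "publisher", "date", "subject"}

def record_errors(record: dict, standard=FULL_LEVEL_STANDARD) -> int:
    """Count errors in one record against the standard.

    Omissions (fields absent) and partial omissions (fields present
    but empty) are both counted as errors, per the measure above.
    """
    omissions = sum(1 for field in standard if field not in record)
    partial = sum(
        1 for field in standard
        if field in record and not str(record[field]).strip()
    )
    return omissions + partial

def error_rate(records: list[dict]) -> float:
    """Errors found per record, averaged over a sample of records."""
    return sum(record_errors(r) for r in records) / len(records)

# Two hypothetical records: the first is missing "subject" and has an
# empty "publisher" (2 errors); the second meets the standard (0 errors).
records = [
    {"title": "Example", "author": "Doe, J.", "publisher": "", "date": "1999"},
    {"title": "Another", "author": "Roe, R.", "publisher": "Pub",
     "date": "2001", "subject": "Cataloging"},
]
print(error_rate(records))  # → 1.0 errors per record
```

The same framework accommodates the "level" measure by comparing which fields a record claims against which it actually contains; only the reference standard changes.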