The indicativity of a type of catalog information (or catalog field) is intended as a measure of how well the information in the field conveys the contents of the document it represents. In the experiments reported here, indicativity is measured for several catalog fields by comparing users' evaluations of the relevance of documents on the basis of the information in a given field with their judgments on the basis of full text. A small but statistically significant increase in indicativity is found as the length of a catalog field (measured by the number of different content-word stems) increases. The title field is found to have an indicativity of 0.64; matching subjects, 0.67; subjects, 0.70; abstract, 0.73. Despite the relatively small gain in indicativity for the longer fields, users value the longer fields highly for determining relevance, judging by the amount of time they spend on them. Support is developed for the hypothesis that the indicativity measure does not fully reflect the value of the fields. Thus, the question of the cost effectiveness of the longer fields remains unresolved. Other aspects of catalog field utility studied in the Project Intrex experiments are also reported.
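The comparison described above can be sketched in code. This is a minimal illustration, assuming (as one plausible reading of the abstract, not a confirmed definition) that indicativity is the proportion of relevance judgments made from a catalog field that agree with judgments made from the full text of the same documents; the function name and sample data are hypothetical.

```python
def indicativity(field_judgments, fulltext_judgments):
    """Fraction of field-based relevance judgments that agree with
    the corresponding full-text relevance judgments.

    Assumes both lists hold booleans (True = judged relevant) for the
    same documents in the same order. This formula is an assumption
    about how the measure might be computed, not the paper's own code.
    """
    if len(field_judgments) != len(fulltext_judgments):
        raise ValueError("judgment lists must be the same length")
    agree = sum(f == t for f, t in zip(field_judgments, fulltext_judgments))
    return agree / len(fulltext_judgments)

# Illustrative (invented) judgments for ten documents.
fulltext = [True, True, False, True, False, False, True, False, True, True]
title    = [True, False, False, True, True, False, False, False, True, True]

print(indicativity(title, fulltext))  # → 0.7
```

Under this reading, the reported values (0.64 for titles up to 0.73 for abstracts) would be agreement proportions of exactly this kind, averaged over users and documents.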
An experimental computer intermediary system, CONIT, that assists users in accessing and searching heterogeneous retrieval systems has been enhanced with various search aids. Controlled experiments have been conducted to compare the effectiveness of the enhanced CONIT intermediary with that of human expert intermediary search specialists. Some 16 end users, none of whom had previously operated either CONIT or any of the four connected retrieval systems, performed searches on 20 different topics using CONIT with no assistance other than that provided by CONIT itself (except to recover from computer/software bugs). These same users also performed searches on the same topics with the help of human expert intermediaries who searched using the retrieval systems directly. On some searches CONIT was clearly superior, and on others the human expert was, in terms of such parameters as recall and search time. In general, however, users searching alone with CONIT achieved somewhat higher online recall at the expense of longer session times. We conclude that advanced experimental intermediary techniques are now capable of providing search assistance whose effectiveness at least approximates that of human intermediaries in some contexts. Also analyzed is the cost effectiveness of current intermediary systems. Finally, consideration is given to the prospects for much more advanced systems that would perform such functions as automatic database selection and the simulation of human experts, and thereby make information retrieval more effective for all classes of users.
The need for a network of heterogeneous interactive bibliographic information retrieval systems is projected from two observations: the wide acceptance of, and growing demand for, these systems, and the limits on their use imposed by restricted online database size.