This paper describes a project in which user queries addressed to seven libraries managing archives of widely varying still and moving image material were analysed. The sampling procedure is described, in which queries obtained from each library were broadly categorised as relating to image content, identification or accessibility. Attention is focused on the image content requests, for which a categorisation based on facet analysis is developed. The analytical tool used for this purpose is based on a schema already well established for the analysis of levels of meaning in images. The project demonstrates the possibility of formulating a general categorisation of requests which seek widely differing still and moving image material. The paper concludes with observations on the potential value of embedding such a schema within the user interface of unmediated-query visual information retrieval systems.
This paper surveys theoretical and practical issues associated with a particular type of information retrieval problem, namely that where the information need is pictorial. The paper is contextualised by the notion of a visually stimulated society, in which the ease of record creation and transmission in the visual medium is contrasted with the difficulty of gaining effective subject access to the world's stores of such records. The technological developments which, in casting the visual image in electronic form, have contributed so significantly to its availability are reviewed briefly, as a prelude to the main thrust of the paper. Concentrating on still and moving pictorial forms of the visual image, the paper dwells on issues related to the subject indexing of pictorial material and discusses four models of pictorial information retrieval corresponding with permutations of the verbal and visual modes for the representation of picture content and of information need.
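The four models referred to above arise from crossing the mode in which picture content is represented with the mode in which the information need is expressed. A minimal enumeration of those permutations, purely for illustration (the field names are our own, not the paper's terminology):

```python
from itertools import product

# The two modes the abstract identifies for representing both picture
# content and information need.
MODES = ("verbal", "visual")

# Each of the four retrieval models pairs a content-representation mode
# with a need-representation mode.
models = [
    {"content_representation": c, "need_representation": n}
    for c, n in product(MODES, MODES)
]
```

For example, conventional keyword indexing searched with a text query is the verbal/verbal cell, while query-by-visual-example against image features is the visual/visual cell.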
This paper attempts to review and characterise the problem of the semantic gap in image retrieval and the attempts being made to bridge it. In particular, we draw on our own experience with user queries, automatic annotation and ontological techniques. The first section of the paper characterises the semantic gap as a hierarchy between the raw media and full semantic understanding of the media's content. The second section discusses real users' queries with respect to the semantic gap. The final sections describe our own attempts to bridge the semantic gap. In particular, we discuss our work on auto-annotation and semantic-space models of image retrieval, which bridge the gap from the bottom up, and the use of ontologies, which capture more semantics than keyword object labels alone, as a technique for bridging the gap from the top down.
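The bottom-up direction described above can be illustrated with a toy auto-annotation scheme in which an unseen image inherits keywords from its nearest annotated neighbours in a low-level feature space. This is a minimal sketch, not the authors' actual method; the feature vectors, labels and the `auto_annotate` helper are all invented for illustration.

```python
import math

# Hypothetical annotated collection: low-level feature vectors paired with
# keyword labels. Values and labels are invented for illustration only.
ANNOTATED = [
    ([0.9, 0.1, 0.2], {"sky", "beach"}),
    ([0.2, 0.8, 0.1], {"grass", "field"}),
    ([0.8, 0.2, 0.3], {"sky", "sea"}),
]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def auto_annotate(features, k=2):
    """Label an unseen image by pooling the keywords of its k nearest
    annotated neighbours in feature space."""
    ranked = sorted(ANNOTATED, key=lambda item: cosine(features, item[0]),
                    reverse=True)
    labels = set()
    for _, keywords in ranked[:k]:
        labels |= keywords
    return labels
```

Real auto-annotation systems learn a statistical association between features and terms rather than copying neighbours' labels verbatim, but the sketch captures the bottom-up idea: semantics are attached starting from the raw media representation.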
Commercial users of picture collections continue to depend heavily on a concept-based image retrieval paradigm in which the query is verbalised by the client and resolved as a metadata text-matching operation. The practical and philosophical challenges posed by the indexing aspect of image metadata construction are significant and frequently expressed. Nevertheless, it has taken image digitisation to bring this particular information retrieval problem to prominence in the research agenda. Metamorphosed into a binary data structure, the digital image offers some enticing processing opportunities which content-based image retrieval techniques are exploiting with growing success. Drawing on studies of user need, this paper seeks to explain why archival image collections will continue to rely on concept-based rather than content-based image retrieval techniques. By contrast, the promise that content-based techniques hold for a growing clientele with less traditional visual information needs is also considered. The paper concludes that, while both concept-based and content-based approaches suffer from operational limitations, the further development of a hybrid image retrieval paradigm combining the two approaches would make a valuable contribution to the research agenda for visual image retrieval.
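The hybrid paradigm advocated above can be sketched as a weighted fusion of a concept-based score (query terms matched against metadata keywords) and a content-based score (feature-vector similarity). The collection records, feature values and the `hybrid_rank` helper are hypothetical, invented to illustrate the combination; real systems differ in both their features and their fusion strategy.

```python
# Hypothetical collection: each record carries textual metadata keywords
# (concept-based access) and a low-level feature vector (content-based access).
COLLECTION = [
    {"id": "img1", "keywords": {"harbour", "boats"}, "features": [0.7, 0.3]},
    {"id": "img2", "keywords": {"portrait", "studio"}, "features": [0.1, 0.9]},
    {"id": "img3", "keywords": {"harbour", "sunset"}, "features": [0.5, 0.5]},
]

def concept_score(query_terms, record):
    """Fraction of query terms found in the record's metadata keywords."""
    if not query_terms:
        return 0.0
    return len(query_terms & record["keywords"]) / len(query_terms)

def content_score(query_features, record):
    """Feature similarity as 1 minus the mean absolute difference."""
    dist = sum(abs(a - b) for a, b in zip(query_features, record["features"]))
    return 1.0 - dist / len(query_features)

def hybrid_rank(query_terms, query_features, weight=0.5):
    """Rank record ids by a weighted sum of the two scores; weight=1.0 is
    purely concept-based, weight=0.0 purely content-based."""
    scored = [
        (weight * concept_score(query_terms, r)
         + (1 - weight) * content_score(query_features, r), r["id"])
        for r in COLLECTION
    ]
    return [rid for _, rid in sorted(scored, reverse=True)]
```

Setting `weight` at either extreme recovers one of the two pure paradigms, which makes the sketch a convenient way to see the hybrid as a continuum rather than a binary choice.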