Personalization of information retrieval tailors search towards individual users to meet their particular information needs, taking into account information about users and their contexts, often through implicit sources of evidence such as user behaviors. Task types have been shown to influence search behaviors, including usefulness judgments. This paper reports on an investigation of user behaviors associated with different task types. Twenty-two undergraduate journalism students participated in a controlled lab experiment, each searching on four tasks which varied on four dimensions: complexity, task product, task goal and task level. Results indicate consistent differences associated with different task characteristics in several search behaviors, including task completion time, decision time (the time taken to decide whether a document is useful or not), and eye fixations. We suggest these behaviors can be used as implicit indicators of the user's task type.
We report on an investigation into people's behaviors on information search tasks, specifically the relation between eye movement patterns and task characteristics. We conducted two independent user studies (n = 32 and n = 40), one with journalism tasks and the other with genomics tasks. The tasks were constructed to represent the information needs of these two different user groups and to vary in several dimensions according to a task classification scheme. For each participant we classified eye gaze data to construct models of their reading patterns. The reading models were analyzed with respect to the effect of task types and Web page types on reading eye movement patterns. We report on relationships between tasks and individual reading behaviors at the task and page levels. Specifically, we show that transitions between scanning and reading behavior in eye movement patterns, and the amount of text processed, may be implicit indicators of the facets of the current task type. This may help in building user and task models for personalization of information systems, addressing design demands driven by increasingly complex user interactions with information systems. One contribution of this research is a new methodology to model information search behavior and investigate information acquisition and cognitive processing in interactive information tasks.
Self-assessment of topic/task knowledge is a human metacognitive capacity that impacts information behavior, for example through selection of learning and search strategies. It is often used as a measure in experiments for evaluation of results, and those measurements are taken to be generally reliable. We conducted a user study (n = 40) to test this by constructing a concept-based topic knowledge representation for each participant and then comparing it with each participant's judgment of their own topic knowledge, elicited with Likert-scale questions. The tasks were in the genomics domain, and knowledge representations were constructed from the MeSH thesaurus terms that indexed relevant documents for five topics. The participants rated their familiarity with the topic, the anticipated task difficulty, the amount of learning gained during the task, and made other knowledge-related judgments associated with the task. Although there is considerable variability over individuals, the results provide evidence that these self-assessed topic knowledge measures are correlated in the expected way with the independently-constructed topic knowledge measure. We argue the results provide evidence for the general validity of topic knowledge self-assessment and discuss ways to further explore knowledge self-assessment and its reliability for prediction of individual knowledge levels.
In this demonstration, we will show a context-aware information system intended for mobile users. The demonstration involves special-purpose hardware devices, called 'context tags', which can work with mobile devices such as mobile phones, to provide ambient information to users on the move. Key to the framework is special support for content service providers, through software that allows existing content to be delivered seamlessly to mobile devices, as and when it is needed by users. The demonstration will show how these components work together to provide an effective ambient information system for mobile users.
We present a generic model for multimodal information retrieval, leveraging different information sources to improve the effectiveness of a retrieval system. The proposed method takes into account both explicit and latent semantics present in the data and can be used to answer complex queries that are not currently answerable by either document retrieval systems or semantic web systems. By providing a hybrid approach combining IR and structured search techniques, we develop a framework applicable to multimodal data collections. To test its effectiveness, we instantiate the model for an image retrieval task.
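One common way to realize a hybrid of IR and structured search is late fusion: rank candidates by a weighted combination of a keyword-retrieval score and a structured/semantic match score. The sketch below is a hypothetical illustration of that general idea, not the paper's actual model; the function name, weight, and scores are all assumptions:

```python
# Hedged sketch of late-fusion hybrid ranking (illustrative only, not the
# paper's model). Both component scores are assumed normalized to [0, 1].

def hybrid_score(text_score, semantic_score, lam=0.6):
    """Linear combination of a keyword-retrieval score and a
    structured/semantic match score; lam weights the text component."""
    return lam * text_score + (1 - lam) * semantic_score


# Hypothetical candidates: (doc id, text score, semantic score).
candidates = [("d1", 0.9, 0.2), ("d2", 0.5, 0.9), ("d3", 0.3, 0.3)]
ranked = sorted(candidates, key=lambda d: hybrid_score(d[1], d[2]), reverse=True)
print([d[0] for d in ranked])
```

Here "d2" outranks "d1" because its strong structured match offsets a weaker keyword score, which is the kind of complex query a pure document-retrieval ranking would miss.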