This paper discusses an approach to the problem of annotating multimedia content. Our approach provides annotation as metadata for indexing, retrieval and semantic processing, as well as for content enrichment. We use an underlying model for structured multimedia descriptions and annotations that allows the establishment of spatial, temporal and linking relationships. We discuss aspects related to documents and annotations that guided the design of an application allowing annotations to be made through pen-based interaction on Tablet PCs. As a result, a video stream can be annotated at the same time it is captured. Moreover, the annotation can be edited, extended or played back synchronously afterwards.
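The abstract mentions a model in which annotations carry spatial, temporal and linking relationships to the captured video. As an illustrative sketch only (the class and field names below are assumptions, not the paper's actual model), such an annotation record and its synchronous playback lookup might look like:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SpatialAnchor:
    """Region of the video frame an ink annotation refers to (normalized coords)."""
    x: float
    y: float
    width: float
    height: float

@dataclass
class Annotation:
    """A pen-based annotation anchored to a captured video stream."""
    stream_id: str
    time_ms: int                       # temporal anchor: offset from capture start
    region: Optional[SpatialAnchor]    # spatial anchor, if any
    ink_strokes: list = field(default_factory=list)
    links: list = field(default_factory=list)  # linking relationships to other documents

def annotations_at(annotations, t_ms, window_ms=500):
    """Return annotations whose temporal anchor falls near playback time t_ms,
    so ink can be replayed in sync with the video."""
    return [a for a in annotations if abs(a.time_ms - t_ms) <= window_ms]
```

A playback component could call `annotations_at` on each frame tick to decide which strokes to render over the video.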
Live experiences such as meetings and lectures can be captured in instrumented environments to produce hyperdocuments corresponding to the information presented. Given that a captured presentation is usually related to many others, users can exploit linking facilities to identify associated content. We propose that searching and recommending operations be integrated into instrumented environments to support the identification of links among the contents of captured sessions during a live session, when the user's attention is focused on the underlying content. Moreover, the user should be able to decide when relevant results should be attached as annotations to the document corresponding to the live session. We present the model and associated implementation that support linking everyday presentations.
A relevant issue in linking services based on information retrieval techniques is how to define scopes that delimit homogeneous information so as to obtain good results. An interesting way to achieve this is to provide such scope delimitation by means of context information. The linking process can then be tailored according to contextual constraints explicitly provided by users. We propose linking services enhanced with context information captured from everyday presentations, and present the LinkDigger Context Service, which creates hyperlinks based on information obtained from users. As a result, different hypertexts can be defined over the same information.
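The idea of delimiting a retrieval scope with context constraints before ranking link candidates can be sketched as follows. This is a minimal illustration, not the LinkDigger implementation: the document shape, the `context` dictionary, and the term-overlap scoring are all assumptions standing in for a real IR ranking.

```python
def suggest_links(query_terms, documents, context=None):
    """Rank candidate documents for linking. `context` (e.g. {"course": "HCI"})
    delimits the scope before a retrieval-style scoring step."""
    # Scope delimitation: keep only documents satisfying every context constraint.
    scoped = [d for d in documents
              if context is None
              or all(d.get("context", {}).get(k) == v for k, v in context.items())]
    # Simple term-overlap score stands in for a full IR ranking function.
    q = set(query_terms)
    ranked = sorted(scoped, key=lambda d: len(q & set(d["terms"])), reverse=True)
    return [d["id"] for d in ranked if q & set(d["terms"])]
```

Because the scope filter runs first, the same query with different context constraints yields different link sets, which is how different hypertexts can arise over the same information.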