This paper discusses the use of computer vision in the interpretation of human gestures. Hand gestures are an intuitive and natural way of exchanging information with other people in a virtual space, guiding robots to perform tasks in hostile environments, or interacting with computers. Hand gestures fall into two main categories: static gestures and dynamic gestures. In this paper, a novel dynamic hand gesture recognition technique is proposed, based on a 2D skeleton representation of the hand. For each gesture, the hand skeletons of the successive postures are superposed, producing a single image that serves as the dynamic signature of the gesture. Recognition is performed by comparing this signature with those of a gesture alphabet, using Baddeley's distance as a measure of dissimilarity between model parameters.
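The pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the brute-force distance transform, the cutoff `c`, and the exponent `p` are assumptions based on the standard form of Baddeley's delta metric for binary images.

```python
import numpy as np

def dist_to_set(mask):
    """Euclidean distance from every pixel to the nearest foreground pixel.

    Brute force for clarity; a real system would use a fast distance
    transform. `mask` must contain at least one foreground pixel.
    """
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    pts = np.stack([ys, xs], axis=1)                      # (N, 2)
    d = np.linalg.norm(grid[:, :, None, :] - pts[None, None, :, :], axis=-1)
    return d.min(axis=-1)                                 # (H, W)

def baddeley_distance(a, b, c=5.0, p=2.0):
    # Baddeley's delta metric between two binary images, with cutoff c:
    # mean over pixels of |min(d_a, c) - min(d_b, c)|^p, then the p-th root.
    wa = np.minimum(dist_to_set(a), c)
    wb = np.minimum(dist_to_set(b), c)
    return float(np.mean(np.abs(wa - wb) ** p) ** (1.0 / p))

def gesture_signature(skeleton_frames):
    # Superpose per-frame binary skeleton images into one signature image.
    return np.any(np.stack(skeleton_frames), axis=0)
```

Recognition would then amount to computing `baddeley_distance(signature, model)` against each signature in the gesture alphabet and picking the minimum.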
In this paper we present a video summarization method based on the study of spatio-temporal activity within the video. Visual activity is estimated by counting interest points detected jointly in the spatial and temporal domains. The proposed approach consists of five steps. First, image features are extracted using the spatio-temporal Hessian matrix. These features are then processed to retrieve candidate video segments for the summary (denoted clips). Two further steps are designed to detect redundant clips and to eliminate clapperboard images. The final step constructs the summary by retaining the clips showing the highest level of activity. The proposed approach was tested on the BBC Rushes Summarization task within the TRECVID 2008 campaign.
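The interest-point counting and clip-selection steps could be sketched roughly as below. This is an illustrative reconstruction, not the paper's method: the finite-difference Hessian, the detection threshold, and the fixed-length clip segmentation are all assumptions, and the redundancy and clapperboard filters are omitted.

```python
import numpy as np

def frame_activity(video, thresh=1e-3):
    """Count spatio-temporal interest points per frame.

    video: (T, H, W) grayscale float array. A pixel is counted as an
    interest point when the determinant of the 3x3 spatio-temporal
    Hessian exceeds `thresh` (the threshold value is an assumption).
    """
    gt, gy, gx = np.gradient(video.astype(float))
    Ltt = np.gradient(gt, axis=0)
    Lty = np.gradient(gt, axis=1)
    Ltx = np.gradient(gt, axis=2)
    Lyy = np.gradient(gy, axis=1)
    Lyx = np.gradient(gy, axis=2)
    Lxx = np.gradient(gx, axis=2)
    # determinant of the symmetric 3x3 Hessian [[Ltt,Lty,Ltx],[Lty,Lyy,Lyx],[Ltx,Lyx,Lxx]]
    det = (Ltt * (Lyy * Lxx - Lyx ** 2)
           - Lty * (Lty * Lxx - Lyx * Ltx)
           + Ltx * (Lty * Lyx - Lyy * Ltx))
    return (np.abs(det) > thresh).reshape(video.shape[0], -1).sum(axis=1)

def top_clips(activity, clip_len=25, k=3):
    # Split frames into fixed-length clips; keep indices of the k most active.
    n = len(activity) // clip_len
    scores = [activity[i * clip_len:(i + 1) * clip_len].mean() for i in range(n)]
    return sorted(np.argsort(scores)[::-1][:k])
```

A static shot yields zero activity everywhere, so clips with motion naturally dominate the selection.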
As of today, most movie recommendation services base their recommendations on collaborative filtering (CF) and/or content-based filtering (CBF) models that use metadata (e.g., genre or cast). In most video-on-demand and streaming services, however, new movies and TV series are continuously added. CF models are unable to make predictions in such a scenario, since the newly added videos lack interactions, a problem technically known as new item cold start (CS). Currently, the most common approach to this problem is to switch to a purely CBF method, usually by exploiting textual metadata. This approach is known to have lower accuracy than CF because it ignores useful collaborative information and relies on human-generated textual metadata, which are expensive to collect and often prone to errors. User-generated content, such as tags, can also be rare or absent in CS situations. In this paper, we introduce a new movie recommender system that addresses the new item problem in the movie domain by (i) integrating state-of-the-art audio and visual descriptors, which can be automatically extracted from video content and constitute what we call the movie genome; (ii) exploiting an effective data fusion method named canonical correlation analysis, successfully tested in our previous works (Deldjoo et al.), to better exploit complementary information between different modalities; (iii) proposing a two-step hybrid approach which trains a CF model on warm items (items with interactions) and leverages the learned model on the movie genome to recommend cold items (items without interactions). Experimental validation is carried out using a system-centric study on a large-scale, real-world movie recommendation dataset, both in an absolute cold-start setting and in a cold-to-warm transition, and a user-centric online experiment measuring different subjective aspects, such as satisfaction and diversity. Results show the benefits of this approach compared to existing approaches.
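The two-step hybrid idea in (iii) can be sketched as follows. This is a rough illustration under stated assumptions: the ridge-regression mapping from genome features to CF item factors is a common way to realize such a hybrid, but the function names are invented here, and the paper's actual CF model and the CCA fusion step are not reproduced.

```python
import numpy as np

def fit_feature_to_factor_map(F_warm, Q_warm, reg=1.0):
    """Ridge regression mapping content features to CF item factors.

    F_warm: (n_warm, d) movie-genome features of warm items.
    Q_warm: (n_warm, k) item factors learned by a CF model on warm items.
    Returns W of shape (d, k) minimizing ||F_warm W - Q_warm||^2 + reg ||W||^2.
    """
    d = F_warm.shape[1]
    return np.linalg.solve(F_warm.T @ F_warm + reg * np.eye(d), F_warm.T @ Q_warm)

def score_cold_items(P_users, F_cold, W):
    # Predict factors for cold items from their genome features,
    # then score every (user, cold item) pair with a dot product.
    return P_users @ (F_cold @ W).T   # (n_users, n_cold)
```

Because the mapping is trained on warm items where both interactions and content are available, cold items inherit collaborative structure through their genome features alone.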