Periodicities (repeating patterns) are observed in many human behaviors. Their strength may capture untapped patterns that incorporate sleep, sedentary, and active behaviors into a single metric indicative of better health. We present a framework to detect periodicities from longitudinal wrist-worn accelerometry data. GENEActiv accelerometer data were collected from 20 participants (17 men, 3 women, aged 35–65) continuously for 64.4 ± 26.2 (range: 13.9 to 102.0) consecutive days. Cardiometabolic risk biomarkers and health-related quality of life metrics were assessed at baseline. Periodograms were constructed to determine patterns emergent from the accelerometer data. Periodicity strength was calculated using circular autocorrelations for time-lagged windows. The most notable periodicity was at 24 h, indicating a circadian rest-activity cycle; however, its strength varied significantly across participants. Periodicity strength was most consistently associated with LDL-cholesterol (r's = 0.40–0.79, P's < 0.05) and triglycerides (r's = 0.68–0.86, P's < 0.05), but was also associated with hs-CRP and health-related quality of life, even after adjusting for demographics and self-rated physical activity and insomnia symptoms. Our framework demonstrates a new method for characterizing behavior patterns longitudinally, one that captures relationships between 24 h accelerometry data and health outcomes.
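The abstract's two analysis steps (a periodogram to find the dominant period, then a circular autocorrelation at that lag as a "periodicity strength" score) can be sketched as follows. This is an illustrative reconstruction on simulated minute-level activity counts, not the authors' code; the function names and the simulated signal are assumptions.

```python
import numpy as np

def periodogram_peak_hours(counts, samples_per_hour=60):
    """Return the period (in hours) carrying the largest spectral power."""
    x = counts - counts.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / samples_per_hour)  # cycles/hour
    k = np.argmax(power[1:]) + 1          # skip the zero-frequency term
    return 1.0 / freqs[k]                 # dominant period in hours

def circular_autocorr(counts, lag):
    """Circular autocorrelation at a given lag: a periodicity-strength score
    in [-1, 1], where values near 1 indicate a strong repeating pattern."""
    x = counts - counts.mean()
    return np.dot(x, np.roll(x, lag)) / np.dot(x, x)

# Simulate 14 days of minute-level activity counts with a 24 h
# rest-activity cycle plus noise (purely illustrative data).
rng = np.random.default_rng(0)
t = np.arange(14 * 24 * 60)
counts = 2 + np.sin(2 * np.pi * t / (24 * 60)) + 0.3 * rng.standard_normal(t.size)

print(round(periodogram_peak_hours(counts)))          # dominant period: 24
print(circular_autocorr(counts, lag=24 * 60) > 0.5)   # strong 24 h cycle: True
```

A participant with a weaker rest-activity rhythm would show a lower autocorrelation at the 24 h lag, which is the quantity the abstract relates to the biomarkers.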
The VideoCLEF track, introduced in 2008, aims to develop and evaluate tasks related to analysis of and access to multilingual multimedia content. In its first year, VideoCLEF piloted the Vid2RSS task, whose main subtask was the classification of dual-language video (Dutch-language television content featuring English-speaking experts and studio guests). The task offered two additional discretionary subtasks: feed translation and automatic keyframe extraction. Task participants were supplied with Dutch archival metadata, Dutch speech transcripts, English speech transcripts and 10 thematic category labels, which they were required to assign to the test set videos. The videos were grouped by class label into topic-based RSS feeds, displaying title, description and keyframe for each video. Five groups participated in the 2008 VideoCLEF track. Participants were required to collect their own training data; both Wikipedia and general web content were used. Groups deployed various classifiers (SVM, Naive Bayes and k-NN) or treated the problem as an information retrieval task. Both the Dutch speech transcripts and the archival metadata performed well as sources of indexing features, but no group succeeded in exploiting combinations of feature sources to significantly enhance performance. A small-scale fluency/adequacy evaluation of the translation task output revealed the translation to be of sufficient quality to make it valuable to a non-Dutch-speaking English speaker. For keyframe extraction, the strategy chosen was to select the keyframe from the shot with the most representative speech transcript content. The automatically selected shots were shown, in a small user study, to be competitive with manually selected shots. Future years of VideoCLEF will aim to expand the corpus and the class label list, as well as to extend the track to additional tasks.
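One family of approaches mentioned above trained a classifier (e.g. Naive Bayes) on self-collected text such as Wikipedia pages, then assigned thematic labels to speech transcripts. The following minimal multinomial Naive Bayes sketch illustrates the idea; the tiny training corpus and labels are invented for illustration and are not VideoCLEF data.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (label, text). Returns a per-label model of
    (log prior, per-word log likelihoods, unseen-word fallback)."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for label, text in docs:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = {w for c in word_counts.values() for w in c}
    model = {}
    for label in label_counts:
        total = sum(word_counts[label].values())
        model[label] = (
            math.log(label_counts[label] / len(docs)),
            {w: math.log((word_counts[label][w] + 1) / (total + len(vocab)))
             for w in vocab},                       # Laplace smoothing
            math.log(1 / (total + len(vocab))),     # unseen-word fallback
        )
    return model

def classify(model, text):
    def score(label):
        prior, logp, unseen = model[label]
        return prior + sum(logp.get(w, unseen) for w in text.lower().split())
    return max(model, key=score)

# Toy stand-ins for self-collected training pages, one per category label.
training = [
    ("music", "orchestra concert symphony violin composer"),
    ("science", "experiment laboratory physics researcher data"),
]
model = train_nb(training)
print(classify(model, "the composer wrote a new symphony"))  # music
```

In the actual track, the transcripts being classified were noisy ASR output, which is one reason the archival metadata proved a competitive feature source.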
VideoCLEF 2009 offered three tasks related to enriching video content for improved multimedia access in a multilingual environment. For each task, video data (Dutch-language television, predominantly documentaries) accompanied by speech recognition transcripts were provided. The Subject Classification Task involved automatic tagging of videos with subject theme labels. The best performance was achieved by approaching subject tagging as an information retrieval task and using both speech recognition transcripts and archival metadata. Alternatively, classifiers were trained using either the training data provided or data collected from Wikipedia or via general Web search. The Affect Task involved detecting narrative peaks, defined as points where viewers perceive heightened dramatic tension. The task was carried out on the "Beeldenstorm" collection containing 45 short-form documentaries on the visual arts. The best runs exploited affective vocabulary and audience-directed speech. Other approaches included using topic changes, elevated speaking pitch, increased speaking intensity and radical visual changes. The Linking Task, also called "Finding Related Resources Across Languages," involved linking video to material on the same subject in a different language. Participants were provided with a list of multimedia anchors (short video segments) in the Dutch-language "Beeldenstorm" collection and were expected to return target pages drawn from English-language Wikipedia. The best-performing methods used the transcript of the speech spoken during the multimedia anchor to build a query to search an index of the Dutch-language Wikipedia. The Dutch Wikipedia pages returned were used to identify related English pages. Participants also experimented with pseudo-relevance feedback, query translation and methods that targeted proper names.
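The core of the best-performing Linking Task strategy (use the anchor's speech transcript as a query against a page index and take the top-ranked page) can be sketched with a toy TF-IDF retrieval step. The two-page "index", the example transcript, and the scoring details are assumptions for illustration, not any participant's system.

```python
import math
from collections import Counter

pages = {  # toy stand-ins for indexed Wikipedia pages
    "Rembrandt": "rembrandt painter dutch golden age nightwatch painting",
    "Windmill": "windmill wind energy grain milling dutch landscape",
}

def tfidf_vectors(docs):
    """Build a TF-IDF weight vector per document."""
    df = Counter(w for text in docs.values() for w in set(text.split()))
    n = len(docs)
    return {
        name: {w: c * math.log(n / df[w]) if df[w] < n else 0.0
               for w, c in Counter(text.split()).items()}
        for name, text in docs.items()
    }

def best_match(query, vectors):
    """Rank pages by cosine-style overlap with the query terms."""
    q = Counter(query.lower().split())
    def score(v):
        dot = sum(q[w] * v[w] for w in q if w in v)
        norm = math.sqrt(sum(x * x for x in v.values())) or 1.0
        return dot / norm
    return max(vectors, key=lambda name: score(vectors[name]))

vectors = tfidf_vectors(pages)
anchor_transcript = "this painting by rembrandt shows the night watch"
print(best_match(anchor_transcript, vectors))  # Rembrandt
```

In the track itself this lookup was done against a Dutch-language Wikipedia index, with the retrieved Dutch pages then mapped to their English counterparts via cross-language links.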
Abstract: Lifelogging is the ambient, continuous digital recording of a person's everyday activities for a variety of possible applications. Much of the work to date in lifelogging has focused on developing sensors, capturing information, processing it into events and then supporting event-based access to the lifelog for applications like memory recall, behaviour analysis or similar. With the recent arrival of aggregating platforms such as Apple's HealthKit, Microsoft's HealthVault and Google's Fit, we are now able to collect and aggregate data from lifelog sensors, to centralize the management of data and in particular to search for and detect patterns of usage for individuals and across populations. In this paper, we present a framework that detects both low-level and high-level periodicity in lifelog data, detecting hidden patterns of which users would not otherwise be aware. We detect periodicities of time series using a combination of correlograms and periodograms, using various signal processing algorithms. Periodicity detection in lifelogs is particularly challenging because the lifelog data itself is not always continuous and can have gaps, as users may use their lifelog devices intermittently. To illustrate that periodicity can be detected from such data, we apply periodicity detection on three lifelog datasets with varying levels of completeness and accuracy.
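The abstract's point that periodicity survives gaps in intermittent lifelog data can be illustrated with a Lomb-Scargle periodogram, which unlike the FFT accepts unevenly sampled series. This is a self-contained sketch on simulated data, not the paper's implementation; the signal, gap rate, and candidate-period grid are assumptions.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.arange(0, 30 * 24, 0.25)          # 30 days at 15-min resolution (hours)
y = np.sin(2 * np.pi * t / 24) + 0.3 * rng.standard_normal(t.size)

# Simulate intermittent device use by dropping 40% of samples at random.
keep = rng.random(t.size) > 0.4
t_gappy, y_gappy = t[keep], y[keep] - y[keep].mean()

# Scan candidate periods; lombscargle expects angular frequencies.
periods = np.linspace(6, 48, 500)        # candidate periods in hours
power = lombscargle(t_gappy, y_gappy, 2 * np.pi / periods)
print(round(periods[np.argmax(power)]))  # dominant period: 24
```

The 24 h peak remains clearly detectable despite the missing samples, which mirrors the paper's claim that its three datasets with varying completeness still yield recoverable periodicities.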
This paper describes a multimedia multimodal information access sub-system (MIAS) for digital audio-visual documents, typically presented in streaming media format. The system is designed to provide both professional and general users with entry points into video documents that are relevant to their information needs. In this work, we focus on the information needs of multimedia specialists at a Dutch cultural heritage institution with a large multimedia archive. A quantitative and qualitative assessment is made of the efficiency of search operations using our multimodal system and it is demonstrated that MIAS significantly facilitates information retrieval operations when searching within a video document.