Background and Objectives: The choroid plexus has been shown to play a crucial role in CNS inflammation. Previous studies found a larger choroid plexus in patients with multiple sclerosis (MS) than in healthy controls. However, it is not clear whether the choroid plexus is similarly involved in MS and in neuromyelitis optica spectrum disorder (NMOSD). Thus, the aim of this study was to compare choroid plexus volume in MS and NMOSD. Methods: In this retrospective, cross-sectional study, patients were included by convenience sampling from 4 international MS centers. The choroid plexus of the lateral ventricles was segmented fully automatically on T1-weighted MRI sequences using a deep learning algorithm (Multi-Dimensional Gated Recurrent Units). Uni- and multivariable linear models were applied to investigate associations between choroid plexus volume, clinically meaningful disease characteristics, and MRI parameters. Results: We studied 180 patients with MS and 98 patients with NMOSD. In total, 94 healthy individuals and 47 patients with migraine served as controls. The choroid plexus volume was larger in MS (median 1,690 µL, interquartile range [IQR] 648 µL) than in NMOSD (median 1,403 µL, IQR 510 µL), healthy individuals (median 1,533 µL, IQR 570 µL), and patients with migraine (median 1,404 µL, IQR 524 µL; all p < 0.001), whereas there was no difference between NMOSD, migraine, and healthy controls. This was also true when adjusted for age, sex, and intracranial volume.
In contrast to NMOSD, the choroid plexus volume in MS was associated with the number of T2-weighted lesions in a linear model adjusted for age, sex, total intracranial volume, disease duration, relapses in the year before MRI, disease course, Expanded Disability Status Scale score, disease-modifying treatment, and treatment duration (beta 4.4; 95% CI 0.78–8.1; p = 0.018). Discussion: This study supports involvement of the choroid plexus in MS, in contrast to NMOSD, and provides clues toward a better understanding of the respective pathogeneses.
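To make the statistical approach concrete, the following is a minimal sketch (not the authors' code) of fitting a covariate-adjusted linear model by ordinary least squares. The data are synthetic and the variable names, values, and the reduced covariate set are illustrative assumptions only; the study's actual model included additional adjustment terms.

```python
import numpy as np

# Synthetic stand-in data: 180 patients, as in the MS cohort.
# All distributions below are assumptions for demonstration purposes.
rng = np.random.default_rng(0)
n = 180
t2_lesions = rng.integers(0, 50, n).astype(float)  # T2-weighted lesion count
age = rng.normal(40, 10, n)
sex = rng.integers(0, 2, n).astype(float)
tiv = rng.normal(1500, 100, n)                     # total intracranial volume (mL)

# Simulated choroid plexus volume (µL) with a positive lesion effect,
# mirroring the direction of the reported association (beta 4.4).
cp_volume = 1500 + 4.4 * t2_lesions + rng.normal(0, 200, n)

# Design matrix: intercept, predictor of interest, adjustment covariates.
X = np.column_stack([np.ones(n), t2_lesions, age, sex, tiv])
beta, *_ = np.linalg.lstsq(X, cp_volume, rcond=None)
print(beta[1])  # adjusted coefficient for the T2 lesion count
```

In the actual analysis the adjusted coefficient for T2 lesion count was 4.4 (95% CI 0.78–8.1); the sketch simply shows the mechanics of estimating such a coefficient while holding the other covariates fixed.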
The Video Browser Showdown addresses difficult video search challenges through an annual interactive evaluation campaign attracting research teams focusing on interactive video retrieval. The campaign aims to provide insights into the performance of participating interactive video retrieval systems, tested by selected search tasks on large video collections. For the first time in its ten-year history, the Video Browser Showdown 2021 was organized in a fully remote setting and hosted a record number of sixteen scoring systems. In this paper, we describe the competition setting, tasks, and results, and give an overview of state-of-the-art methods used by the competing systems. By looking at query result logs provided by ten systems, we analyze differences in retrieval model performance and browsing times before a correct submission. Through advances in data gathering methodology and tools, we provide a comprehensive analysis of ad-hoc video search tasks and discuss results, task design, and methodological challenges. We highlight that almost all top-performing systems utilize some sort of joint embedding for text-image retrieval and enable specification of temporal context in queries for known-item search. While a combination of these techniques drives the currently top-performing systems, we identify several future challenges for interactive video search engines and the Video Browser Showdown competition itself.
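The joint-embedding retrieval approach highlighted above can be sketched in a few lines: text and video frames are mapped into a shared vector space (by a CLIP-like model, stubbed here with random vectors as an assumption), and frames are ranked by cosine similarity to the query embedding. This is an illustrative sketch, not the implementation of any particular VBS system.

```python
import numpy as np

# Stand-in for precomputed frame embeddings from a joint text-image model.
rng = np.random.default_rng(42)
dim = 512
frame_embeddings = rng.normal(size=(1000, dim))

# Stand-in for a text-query embedding that lands near frame 123 in the
# shared space (a real system would encode the query text with the model).
query_embedding = frame_embeddings[123] + 0.1 * rng.normal(size=dim)

def cosine_rank(query, frames):
    """Rank frames by cosine similarity to the query, best first."""
    q = query / np.linalg.norm(query)
    f = frames / np.linalg.norm(frames, axis=1, keepdims=True)
    scores = f @ q
    return np.argsort(-scores)

ranking = cosine_rank(query_embedding, frame_embeddings)
print(ranking[0])  # index of the frame most similar to the query
```

Temporal-context queries, as mentioned in the abstract, extend this by scoring pairs of query embeddings against temporally ordered frame pairs rather than single frames.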
The multimodal nature of lifelog data collections poses unique challenges for multimedia management and retrieval systems. The Lifelog Search Challenge (LSC) offers an annual evaluation platform for such interactive retrieval systems, which compete against one another in finding items of interest within a set time frame. In this paper, we present the multimedia retrieval system vitrivr-VR, the latest addition to the vitrivr stack, which has participated in the LSC in recent years. vitrivr-VR leverages the 3D space in virtual reality (VR) to offer novel retrieval and user interaction models, which we describe with a special focus on design decisions taken for participation in the LSC.
In research on video retrieval systems, comparative assessments during dedicated retrieval competitions provide invaluable insights into the performance of individual systems. The scope and depth of such evaluations are unfortunately hard to improve due to the limitations imposed by the set-up costs, logistics, and organizational complexity of large events. We show that this easily impairs the statistical significance of the collected results and the reproducibility of the competition outcomes. In this article, we present a methodology for remote comparative evaluations of content-based video retrieval systems, demonstrate that such evaluations scale up to sizes that reliably produce statistically robust results, and propose additional measures that increase the replicability of the experiment. The proposed remote evaluation methodology forms a major contribution toward open science in interactive retrieval benchmarks. At the same time, the detailed evaluation reports form an interesting source of new observations about many subtle, previously inaccessible aspects of video retrieval.
The Lifelog Search Challenge (LSC) is an annual benchmarking competition for interactive multimedia retrieval systems, where participating systems compete in finding events based on textual descriptions containing hints about structured, semi-structured, and/or unstructured data. In this paper, we present the multimedia retrieval system vitrivr, a long-time participant in the LSC, with a focus on new functionality. Specifically, we introduce the image stabilisation module, added prior to feature extraction to reduce the image degradation caused by lifelogger movements, and discuss how geodata is used during query formulation, query execution, and result presentation.