We present a model to estimate motion from monocular visual and inertial measurements. We analyze the model and characterize the conditions under which its state is observable and its parameters are identifiable. These include the unknown gravity vector and the unknown transformation between the camera coordinate frame and the inertial unit. We show that it is possible to estimate both state and parameters as part of an on-line procedure, but only provided that the motion sequence is "rich enough," a condition that we characterize explicitly. We then describe an efficient implementation of a filter to estimate the state and parameters of this model, including gravity and camera-to-inertial calibration. It runs in real time on an embedded platform, and its performance has been tested extensively. We report experiments of continuous operation, without failures, re-initialization, or re-calibration, on paths of length up to 30 km. We also describe an integrated approach to "loop closure," that is, the recognition of previously seen locations and the topological re-adjustment of the traveled path. It represents visual features relative to the global orientation reference provided by the gravity vector estimated by the filter, and relative to the scale provided by their known position within the map; these features are organized into "locations" defined by visibility constraints and represented in a topological graph, where loop closure can be performed without the need to re-compute past trajectories or perform bundle adjustment. The software infrastructure as well as the embedded platform is described in detail in a technical report (Jones and Soatto, 2009).
Abstract. We address the problem of modeling the spatial and temporal second-order statistics of video sequences that exhibit both spatial and temporal regularity, in a statistical sense. We model such sequences as dynamic multiscale autoregressive models and introduce an efficient algorithm to learn the model parameters. We then show how the model can be used to synthesize novel sequences that extend the original ones in both space and time, and we illustrate the power, and the limitations, of the proposed models on a number of real image sequences.
The Mouse Atlas Project (MAP) aims to produce a framework for organizing and analyzing the large volumes of neuroscientific data produced by the proliferation of genetically modified animals. Atlases provide an invaluable aid in understanding the impact of genetic manipulations by providing a standard for comparison. We use a digital atlas as the hub of an informatics network, correlating imaging data, such as structural imaging and histology, with text-based data, such as nomenclature, connections, and references. We generated brain volumes using magnetic resonance microscopy (MRM), classical histology, and immunohistochemistry, and registered them into a common, defined coordinate system. Specially designed viewers were developed to visualize multiple datasets simultaneously and to coordinate textual and image data. Researchers can navigate through the brain interchangeably, in either a text-based or an image-based representation that automatically updates information as they move. The atlas also allows the independent entry of other types of data, the facile retrieval of information, and the straightforward display of images. In conjunction with centralized servers, image and text data can be kept current, decreasing the burden on individual researchers' computers. A comprehensive framework that encompasses many forms of information in the context of anatomic imaging holds tremendous promise for producing new insights. The atlas and associated tools can be found at http://www.loni.ucla.edu/MAP.