The theory of general relativity describes macroscopic phenomena driven by gravity, while quantum mechanics brilliantly accounts for microscopic effects. Despite their tremendous individual success, a complete unification of the fundamental interactions is missing and remains one of the most challenging and important quests of modern theoretical physics. The STE-QUEST satellite mission, proposed as a medium-size mission within the Cosmic Vision program of the European Space Agency (ESA), aims at testing general relativity with high precision in two experiments: a measurement of the gravitational redshift in the fields of the Sun and the Moon by comparing terrestrial clocks, and a test of the Universality of Free Fall of matter waves in the gravitational field of Earth by comparing the trajectories of two Bose-Einstein condensates of 85Rb and 87Rb. The two ultracold atom clouds are monitored very precisely by means of atom interferometry, which allows an uncertainty in the Eötvös parameter of 2 · 10⁻¹⁵ or better to be reached. In this paper, we report on the results of the phase A mission study of the atom interferometer instrument, covering the main payload elements, the atomic source concept, and the systematic error sources.
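The Eötvös parameter mentioned above quantifies the normalized differential free-fall acceleration of the two test masses. A minimal sketch of its definition (the acceleration values below are hypothetical, chosen only to illustrate the 2 · 10⁻¹⁵ scale):

```python
def eotvos_parameter(a1: float, a2: float) -> float:
    """Eotvos parameter: normalized differential free-fall acceleration
    of two test masses (here the 85Rb and 87Rb condensates)."""
    return 2.0 * abs(a1 - a2) / abs(a1 + a2)

# Hypothetical accelerations differing at the targeted 2e-15 level:
g = 9.81
eta = eotvos_parameter(g, g * (1.0 + 2e-15))
# eta is approximately 2e-15
```

A vanishing Eötvös parameter corresponds to the Universality of Free Fall holding exactly for the chosen pair of test masses.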
In the past decade, a large number of robots have been built that explicitly implement biological navigation behaviours. We review these biomimetic approaches using a framework that allows for a common description of biological and technical navigation behaviour. The review shows that biomimetic systems make significant contributions to two fields of research: first, they provide a real-world test of models of biological navigation behaviour; second, they make new navigation mechanisms available for technical applications, most notably in the field of indoor robot navigation. While simpler insect navigation behaviours have been implemented quite successfully, the more complicated way-finding capabilities of vertebrates still pose a challenge to current systems.
In homing tasks, the goal is often not marked by visible objects but must be inferred from its spatial relation to the visual cues in the surrounding scene. The exact computation of the goal direction would require knowledge of the distances to visible landmarks — information that is not directly available to passive vision systems. However, if prior assumptions about typical distance distributions are used, a snapshot taken at the goal suffices to compute the goal direction from the current view. We show that most existing approaches to scene-based homing implicitly assume an isotropic landmark distribution. As an alternative, we propose a homing scheme that uses parameterized displacement fields. These are obtained from an approximation that incorporates prior knowledge about perspective distortions of the visual environment. A mathematical analysis proves that neither approximation prevents the schemes from approaching the goal with arbitrary accuracy, but that they lead to different errors in the computed goal direction. Mobile robot experiments are used to test the theoretical predictions and to demonstrate the practical feasibility of the new approach.
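The simplest instance of scene-based homing — greedy descent of the image distance to the goal snapshot, which the parameterized displacement-field scheme above refines — can be sketched as follows (the move labels and views are illustrative, not the paper's method):

```python
def image_distance(view_a, view_b):
    """Root-mean-square pixel difference between two equally sized views."""
    assert len(view_a) == len(view_b)
    return (sum((a - b) ** 2 for a, b in zip(view_a, view_b)) / len(view_a)) ** 0.5

def pick_homing_step(snapshot, candidate_views):
    """Greedy homing step: among the views obtained by small test moves
    (dict mapping move label -> view), choose the move whose view best
    matches the snapshot taken at the goal."""
    return min(candidate_views,
               key=lambda move: image_distance(snapshot, candidate_views[move]))
```

Repeating such steps drives the agent toward the snapshot position as long as the image distance decreases monotonically toward the goal.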
The receptive field organization of a class of visual interneurons in the fly brain (vertical system, or VS neurons) shows a striking similarity to certain self-motion-induced optic flow fields. The present study compares the measured motion sensitivities of the VS neurons (Krapp et al. 1998) to a matched filter model for optic flow fields generated by rotation or translation. The model minimizes the variance of the filter output caused by noise and distance variability between different scenes. To that end, prior knowledge about distance and self-motion statistics is incorporated in the form of a "world model". We show that a special case of the matched filter model is able to predict the local motion sensitivities observed in some VS neurons. This suggests that their receptive field organization enables the VS neurons to maintain a consistent output when the same type of self-motion occurs in different situations.
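A toy one-dimensional illustration of the matched-filter idea — not the variance-minimizing model derived in the paper — treats the neuron's output as the inner product of the measured local motion estimates with its preferred-flow template. For pure yaw rotation the horizontal flow is uniform over azimuth, while translational flow varies sinusoidally, so a rotation-tuned filter responds to rotation and averages the translational component away:

```python
import math

N = 360
azimuths = [2 * math.pi * i / N for i in range(N)]

# Template: horizontal flow induced by pure yaw rotation is uniform over azimuth.
rotation_template = [1.0 for _ in azimuths]
# Horizontal component of translational flow varies sinusoidally with azimuth.
translation_flow = [math.sin(a) for a in azimuths]

def matched_filter_output(flow, template):
    """Wide-field response: normalized inner product of the measured local
    motion estimates with the neuron's receptive-field template."""
    return sum(f * t for f, t in zip(flow, template)) / len(flow)

# The rotation-tuned filter responds to (scaled) rotational flow ...
resp_rot = matched_filter_output([0.5 * t for t in rotation_template],
                                 rotation_template)
# ... but the translational flow averages out to (nearly) zero:
resp_trans = matched_filter_output(translation_flow, rotation_template)
```

This consistency of output across scenes is the property the receptive field organization of the VS neurons is argued to achieve.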
The human visual system is foveated; that is, outside the central visual field, resolution and acuity drop rapidly. Nonetheless, much of a visual scene is perceived after only a few saccadic eye movements, suggesting an effective strategy for selecting saccade targets. It has been known for some time that local image structure at saccade targets influences the selection process. However, the question of which visual features are most relevant is still under debate. Here we show that center-surround patterns emerge as the optimal solution for predicting saccade targets from their local image structure. The resulting model, a one-layer feed-forward network, is surprisingly simple compared to previously suggested models, which assume much more complex computations such as multi-scale processing and multiple feature channels. Nevertheless, our model is equally predictive. Furthermore, our findings are consistent with neurophysiological hardware in the superior colliculus. Bottom-up visual saliency may thus not be computed cortically, as has previously been thought.
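A minimal, hypothetical instance of such a one-layer center-surround model scores an image patch by its inner product with a difference-of-Gaussians kernel (the kernel widths below are illustrative, not fitted parameters from the study):

```python
import math

def dog_weight(dx, dy, sigma_c=1.0, sigma_s=2.0):
    """Difference-of-Gaussians weight: excitatory center, inhibitory surround."""
    r2 = dx * dx + dy * dy
    center = math.exp(-r2 / (2 * sigma_c ** 2)) / (2 * math.pi * sigma_c ** 2)
    surround = math.exp(-r2 / (2 * sigma_s ** 2)) / (2 * math.pi * sigma_s ** 2)
    return center - surround

def saliency(patch):
    """One-layer feed-forward response: inner product of a square image patch
    (2-D list of floats, odd side length) with the center-surround kernel."""
    k = len(patch) // 2
    return sum(dog_weight(x - k, y - k) * patch[y][x]
               for y in range(len(patch)) for x in range(len(patch)))
```

A bright spot at the patch center yields a positive response, while the same spot falling in the inhibitory surround yields a negative one — the structure that favors isolated local contrast as a saccade target.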
We present a purely vision-based scheme for learning a topological representation of an open environment. The system represents selected places by local views of the surrounding scene, and finds traversable paths between them. The set of recorded views and their connections are combined into a graph model of the environment. To navigate between views connected in the graph, we employ a homing strategy inspired by findings of insect ethology. In robot experiments, we demonstrate that complex visual exploration and navigation tasks can thus be performed without using metric information.
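The graph model described above can be sketched with a plain adjacency structure and breadth-first search for route planning; each hop of the returned route would then be executed by visual homing toward the next stored view (the identifiers below are illustrative):

```python
from collections import deque

def add_view(graph, view_id, neighbors):
    """Record a place (identified by its local view) and traversable links."""
    graph.setdefault(view_id, set()).update(neighbors)
    for n in neighbors:
        graph.setdefault(n, set()).add(view_id)

def plan_route(graph, start, goal):
    """Breadth-first search over the view graph; returns a list of view ids
    from start to goal, or None if the views are not connected."""
    parents, frontier = {start: None}, deque([start])
    while frontier:
        v = frontier.popleft()
        if v == goal:
            path = []
            while v is not None:
                path.append(v)
                v = parents[v]
            return path[::-1]
        for n in graph.get(v, ()):
            if n not in parents:
                parents[n] = v
                frontier.append(n)
    return None
```

No metric coordinates appear anywhere in this representation: places are views, and edges merely assert that a homing procedure can traverse between them.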
Let X be a "nice" space with an action of a torus T. We consider the Atiyah-Bredon sequence of equivariant cohomology modules arising from the filtration of X by orbit dimension. We show that a front piece of this sequence is exact if and only if the H^*(BT)-module H_T^*(X) is a certain syzygy. Moreover, we express the cohomology of that sequence as an Ext module involving a suitably defined equivariant homology of X. One consequence is that the GKM method for computing equivariant cohomology applies to a Poincaré duality space if and only if the equivariant Poincaré pairing is perfect.
Volterra and Wiener series are perhaps the best-understood nonlinear system representations in signal processing. Although both approaches have enjoyed a certain popularity in the past, their application has been limited to rather low-dimensional and weakly nonlinear systems due to the exponential growth of the number of terms that have to be estimated. We show that Volterra and Wiener series can be represented implicitly as elements of a reproducing kernel Hilbert space by using polynomial kernels. The estimation complexity of the implicit representation is linear in the input dimensionality and independent of the degree of nonlinearity. Experiments show performance advantages in terms of convergence, interpretability, and system sizes that can be handled.
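The implicit representation rests on the fact that an inhomogeneous polynomial kernel of degree p spans all Volterra monomials up to order p, so a Volterra-type system can be estimated by kernel ridge regression without ever enumerating the terms. A sketch under that assumption (the quadratic toy system below is hypothetical):

```python
import numpy as np

def poly_kernel(X, Y, degree=3):
    """Inhomogeneous polynomial kernel matrix (1 + X Y^T)^degree; its feature
    space implicitly contains every Volterra monomial up to `degree`."""
    return (1.0 + X @ Y.T) ** degree

def fit_volterra(X, y, degree=3, ridge=1e-8):
    """Kernel ridge regression: solve (K + ridge*I) alpha = y. The cost scales
    with the sample count and input dimensionality, not with the (exponential)
    number of explicit Volterra coefficients."""
    K = poly_kernel(X, X, degree)
    alpha = np.linalg.solve(K + ridge * np.eye(len(X)), y)
    return lambda X_new: poly_kernel(X_new, X, degree) @ alpha

# Hypothetical quadratic system: y = x0*x1 + x0**2 (a degree-2 Volterra system).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = X[:, 0] * X[:, 1] + X[:, 0] ** 2
predict = fit_volterra(X, y, degree=2)
```

Because the target lies in the span of the degree-2 monomials, the kernel estimate recovers it essentially exactly from far fewer samples than explicit coefficient estimation would need in higher dimensions.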