Animals relocating a target corner in a rectangular space often make rotational errors, searching not only at the target corner but also at the diagonally opposite one. The authors tested whether view-based navigation can explain rotational errors by recording panoramic snapshots at regularly spaced locations in a rectangular box. They calculated the global image difference between the image at each location and the image recorded at a target location in one of the corners, thus creating a two-dimensional map of image differences. The most pronounced minima of image differences occurred at the target corner and at the diagonally opposite corner, the very condition that favors rotational errors. The authors confirmed these results in virtual reality simulations and showed that the relative salience of different visual cues determines whether image differences are dominated by geometry or by features. The geometry of space is thus implicitly contained in panoramic images and does not require explicit computation by a dedicated module. A testable prediction is that animals making rotational errors in rectangular spaces are guided by remembered views.
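The image-difference map described above can be sketched in a few lines. The following is a minimal illustration, assuming snapshots are grayscale numpy arrays of equal size and that the "global image difference" is measured as a root-mean-square pixel difference (a common choice; the authors' exact metric may differ).

```python
import numpy as np

def global_image_difference(snapshot, reference):
    """Root-mean-square pixel difference between two equally sized panoramas."""
    a = snapshot.astype(float)
    b = reference.astype(float)
    return np.sqrt(np.mean((a - b) ** 2))

def image_difference_map(snapshots, reference):
    """snapshots: dict mapping (x, y) grid locations to panoramic images.
    Returns the 2D map of image differences relative to the target view."""
    return {loc: global_image_difference(img, reference)
            for loc, img in snapshots.items()}
```

In a rectangular box, a map built this way shows its two deepest minima at the target corner and the diagonally opposite corner, which is exactly the pattern that predicts rotational errors in a view-matching agent.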
SUMMARY
Visual landmarks guide humans and animals, including insects, to a goal location. Insects, with their miniature brains, have evolved a simple strategy to find their nests or profitable food sources: they approach a goal by finding a close match between the current view and a memorised retinotopic representation of the landmark constellation around the goal. Recent implementations of such a matching scheme use raw panoramic images ('image matching') and show that it is well suited to work on robots and even in natural environments. However, this matching scheme works only if relevant landmarks can be detected by their contrast and texture. We therefore tested how honeybees perform in localising a goal if the landmarks can hardly be distinguished from the background by such cues. We recorded the honeybees' flight behaviour with high-speed cameras and compared the search behaviour with computer simulations. We show that honeybees are able to use landmarks that have the same contrast and texture as the background, and suggest that the bees use relative motion cues between the landmark and the background. These cues are generated on the eyes when the bee moves in a characteristic way in the vicinity of the landmarks. This extraordinary navigation performance can be explained by a matching scheme that includes snapshots based on optic flow amplitudes ('optic flow matching'). This new matching scheme provides a robust strategy for navigation, as it depends primarily on the depth structure of the environment.

Supplementary material available online at http://jeb.biologists.org/cgi/content/full/213/17/2913/DC1

Key words: honeybee, landmark navigation, snapshot matching, vision.

[…] be unnecessary (Zeil et al., 2003; Stürzl and Zeil, 2007). Zeil et al. show that the similarities between panoramic images of natural environments decrease smoothly with spatial distance between an observer and the goal location (Zeil et al., 2003). An animal that is sensitive to the similarity of views relative to the memorised view of the goal location could return to this location by maximising the similarities between images [modelled by simple image similarity gradient methods (Zeil et al., 2003)]. Thus, panoramic image similarities can be used for view-based homing in natural environments. Recently, the behaviour of ants and crickets in goal-finding tasks has been explained by 'image matching' (Wystrach and Beugnon, 2009; Mangan and Webb, 2009).

In our combined behavioural and modelling approach, we tested the content of the spatial memory in honeybees during complex navigational tasks. Honeybees were trained to locate an inconspicuous feeder surrounded by three cylinders, which we refer to as landmarks. By altering the spatial configuration and landmark texture and monitoring the approach flights to the feeder, we addressed the following questions: what role does the spatial configuration of the landmarks play? Does landmark texture play a role in navigational tasks? In particular, can landmarks b...
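The 'optic flow matching' scheme can be illustrated with a deliberately crude sketch: instead of raw intensity snapshots, the agent stores and compares snapshots of optic flow amplitudes experienced during movement. Everything below is an assumption for illustration; in particular, the gradient-based flow proxy (|dI/dt| / |dI/dx|) is a stand-in, not the flow model used in the paper.

```python
import numpy as np

def flow_amplitude(frame_prev, frame_curr, eps=1e-6):
    """Crude per-pixel motion amplitude from two consecutive panoramic
    frames: temporal intensity change divided by the azimuthal gradient."""
    dt = frame_curr.astype(float) - frame_prev.astype(float)
    dx = np.gradient(frame_curr.astype(float), axis=1)  # azimuthal gradient
    return np.abs(dt) / (np.abs(dx) + eps)

def flow_mismatch(flow_now, flow_memorised):
    """RMS difference between the current and the memorised flow snapshot;
    small values indicate the current viewpoint-plus-movement combination
    resembles the one experienced at the goal."""
    return np.sqrt(np.mean((flow_now - flow_memorised) ** 2))
```

Because flow amplitudes depend on the distance of surfaces rather than on their contrast or texture, such a scheme can in principle pick out landmarks that are camouflaged against the background.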
Panoramic image differences can be used for view-based homing under natural outdoor conditions, because they increase smoothly with distance from a reference location (Zeil et al., J Opt Soc Am A 20(3):450-469, 2003). The particular shape, slope and depth of such image difference functions (IDFs) recorded at any one place, however, depend on a number of factors that have so far been only qualitatively identified. Here we show how the shape of difference functions depends on the depth structure and the contrast of natural scenes, by quantifying the depth distribution of different outdoor scenes and by comparing it to the difference functions calculated with differently processed panoramic images recorded at the same locations. We find (1) that IDFs and catchment areas become systematically wider as the average distance of objects increases; (2) that simple image processing operations, such as subtracting the local mean, difference-of-Gaussian filtering and local contrast normalization, make difference functions robust against changes in illumination and the spurious effects of shadows; and (3), by comparing depth-dependent translational and depth-independent rotational difference functions, that IDFs of contrast-normalized snapshots are predominantly determined by the depth structure and possibly also by occluding contours in a scene. We propose a model for the shape of IDFs as a tool for quantitative comparisons between these functions in different scenes.
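The three preprocessing operations named above are standard image-processing steps and can be sketched as follows, assuming scipy is available; the filter scales (sigma values) are illustrative choices, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def subtract_local_mean(img, sigma=8.0):
    """Remove slow illumination gradients by subtracting a blurred copy."""
    img = img.astype(float)
    return img - gaussian_filter(img, sigma)

def difference_of_gaussians(img, sigma_small=1.0, sigma_large=4.0):
    """Band-pass filtering that suppresses both pixel noise and shading."""
    img = img.astype(float)
    return gaussian_filter(img, sigma_small) - gaussian_filter(img, sigma_large)

def local_contrast_normalise(img, sigma=8.0, eps=1e-6):
    """Divide the mean-subtracted image by its local standard deviation."""
    centred = subtract_local_mean(img, sigma)
    local_std = np.sqrt(gaussian_filter(centred ** 2, sigma))
    return centred / (local_std + eps)

def idf(img, ref):
    """RMS image difference between a (processed) snapshot and a reference."""
    return np.sqrt(np.mean((img - ref) ** 2))

def rotational_idf(panorama, ref):
    """Evaluate the IDF over all horizontal (azimuthal) shifts of a panorama;
    the translational IDF is obtained analogously over positions in space."""
    return [idf(np.roll(panorama, s, axis=1), ref)
            for s in range(panorama.shape[1])]
```

Comparing the depth-independent rotational IDF with the translational one is what lets the authors attribute the shape of translational IDFs to the scene's depth structure.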
Nesting insects perform learning flights to establish a visual representation of the nest environment that allows them to subsequently return to the nest. It has remained unclear what insects learn during these flights and when they learn it, what determines the flights' overall structure, and, in particular, how the learned information guides an insect's return. We analyzed learning flights in ground-nesting wasps (Sphecidae: Cerceris australis), using synchronized high-speed cameras to determine 3D head position and orientation. Wasps move along arcs centered on the nest entrance, with rapid changes in gaze ensuring that the nest is seen at lateral positions in the left or the right visual field. Between saccades, the wasps translate along arc segments around the nest while keeping gaze fixed. We reconstructed panoramic views along the paths of learning and homing wasps to test specific predictions about what wasps learn during their learning flights and how they use this information to guide their return. Our evidence suggests that wasps monitor changing views during learning flights and use the differences they experience relative to previously encountered views to decide when to begin a new arc. Upon encountering learned views, homing wasps move left or right, depending on the nest direction associated with that view, and in addition appear to be guided by features on the ground close to the nest. We test our predictions on how wasps use views for homing by simulating homing flights of a virtual wasp guided by views rendered in a 3D model of a natural wasp environment.
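The closing sentence describes simulated homing guided by rendered views. A minimal version of that idea is greedy descent of the image difference function over rendered panoramas; `render_view` below is a hypothetical stand-in for the paper's 3D-model renderer, and the left/right steering rule the wasps appear to use is not modelled here.

```python
import numpy as np

def view_difference(a, b):
    """RMS difference between two panoramic views."""
    return np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2))

def home_by_view_matching(render_view, start, goal_view, step=0.05, max_steps=500):
    """Greedy descent of the image difference function.
    render_view: callable (x, y) -> panoramic image (assumed interface).
    Returns the path taken as a list of (x, y) positions."""
    pos = np.asarray(start, dtype=float)
    path = [tuple(pos)]
    moves = [np.array(m) for m in ((step, 0), (-step, 0), (0, step), (0, -step))]
    for _ in range(max_steps):
        here = view_difference(render_view(*pos), goal_view)
        candidates = [pos + m for m in moves]
        diffs = [view_difference(render_view(*c), goal_view) for c in candidates]
        best = int(np.argmin(diffs))
        if diffs[best] >= here:
            break  # local minimum of the IDF: the agent has arrived (or is stuck)
        pos = candidates[best]
        path.append(tuple(pos))
    return path
```

A virtual agent of this kind reaches the goal only from within the IDF's catchment area, which is why the structure of learning flights, sampling views at increasing distances, matters for subsequent homing.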