In most animal species, vision is mediated by compound eyes, which offer lower resolution than vertebrate single-lens eyes, but significantly larger fields of view with negligible distortion and spherical aberration, as well as high temporal resolution in a tiny package. Compound eyes are ideally suited for fast panoramic motion perception. Engineering a miniature artificial compound eye is challenging because it requires accurate alignment of photoreceptive and optical components on a curved surface. Here, we describe a unique design method for biomimetic compound eyes featuring a panoramic, undistorted field of view in a very thin package. The design consists of three planar layers of separately produced arrays, namely, a microlens array, a neuromorphic photodetector array, and a flexible printed circuit board, which are stacked, cut, and curved to produce a mechanically flexible imager. Following this method, we have prototyped and characterized an artificial compound eye bearing a hemispherical field of view with embedded and programmable low-power signal processing, high temporal resolution, and local adaptation to illumination. The prototyped artificial compound eye possesses several characteristics similar to the eye of the fruit fly Drosophila and other arthropod species. This design method opens up additional vistas for a broad range of applications in which wide-field motion detection is at a premium, such as collision-free navigation of terrestrial and aerospace vehicles, and for the experimental testing of insect vision theories.
Autonomous outdoor navigation requires reliable multisensory fusion strategies. Desert ants travel widely every day, showing unrivaled navigation performance using only a few thousand neurons. In the desert, pheromones are instantly destroyed by the extreme heat. To navigate safely in this hostile environment, desert ants assess their heading from the polarization pattern of skylight and judge the distance traveled based on both a stride-counting method and the optic flow, i.e., the rate at which the ground moves across the eye. This process is called path integration (PI). Although many methods of endowing mobile robots with outdoor localization have been developed recently, most of them are still prone to considerable drift and uncertainty. We tested several ant-inspired solutions to outdoor homing navigation problems on a legged robot using two optical sensors equipped with just 14 pixels, two of which were dedicated to an insect-inspired compass sensitive to ultraviolet light. When combined with two rotating polarized filters, this compass was equivalent to two costly arrays composed of 374 photosensors, each tuned to a specific polarization angle. The other 12 pixels were dedicated to optic flow measurements. Our ant-inspired navigation methods proved precise: the mean homing error recorded over the whole trajectory was as small as 0.67% under lighting conditions similar to those encountered by ants. These findings show that ant-inspired PI strategies can complement classical techniques with a high level of robustness and efficiency.
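The path-integration principle described above can be sketched in a few lines. This is an illustrative toy (the function name `path_integrate` and the sample headings are hypothetical), not the robot's actual controller: a heading from the celestial compass and a distance from odometry/optic flow are accumulated into a vector, whose inverse is the homing vector.

```python
import math

def path_integrate(steps):
    """Accumulate a home vector from (heading_rad, distance) samples,
    the way an ant combines its celestial compass with odometry.
    Returns (distance_to_nest, heading_back_to_nest)."""
    x = y = 0.0
    for heading, dist in steps:
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    # The homing vector points back to the start: same length, opposite direction.
    return math.hypot(x, y), math.atan2(-y, -x)

# Example: 3 m east, then 4 m north -> the nest lies 5 m away.
home_dist, home_heading = path_integrate([(0.0, 3.0), (math.pi / 2, 4.0)])
```

In practice the accuracy of such a scheme is dominated by the heading estimate, which is why the robot devotes two dedicated UV pixels to the celestial compass.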
The aerial robot presented here is, to our knowledge, the first quadrotor capable of in-flight morphing by means of an actuated elastic mechanism. Like birds, which are able to negotiate narrow apertures despite their relatively large wingspan, our Quad-Morphing robot was able to pass through a narrow gap at a high forward speed of 2.5 m·s⁻¹ by swiftly folding up the structure supporting its propellers. A control strategy was developed to deal with the loss of controllability on the roll axis resulting from the folding process, while keeping the robot stable until it has crossed the gap. In addition, a complete recovery procedure was implemented to stabilize the robot after the unfolding process. A new metric was introduced to quantify the gain in gap-crossing ability in comparison with classical quadrotors with rigid bodies. The performance of these morphing robots is presented, and experiments in which a real flying robot passed through a small aperture by reducing its wingspan by 48% are described and discussed.
In 1986, Franceschini et al. built an optronic velocity sensor [11], the principle of which was based on the findings they had recently made on fly EMDs by performing electrophysiological recordings on single neurons while concomitantly applying optical microstimuli to single photoreceptor cells [12]. As early as 1989, a battery of 110 velocity sensors of this kind was used to enable a small autonomous mobile robot to steer its way through an unknown field full of obstacles at a relatively high speed (50 cm/s), based on optic flow measurements [1-2]. Later on, several electronic EMDs, based either on the Reichardt correlation scheme [13] or on Franceschini et al.'s 1986 velocity sensor [14], were implemented as smart VLSI circuits.
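The correlation-type EMD mentioned above can be sketched in software. This is a minimal Hassenstein-Reichardt correlator operating on two photoreceptor time series (a hypothetical `reichardt_emd` helper, not the circuitry of refs [13-14]): each half-detector multiplies one receptor's delayed signal by its neighbour's current signal, and the difference of the two mirror-symmetric halves is direction-selective.

```python
def reichardt_emd(left, right, delay=1):
    """Hassenstein-Reichardt correlator on two photoreceptor signals.
    Positive output: motion from `left` toward `right`; negative: reverse."""
    out = []
    for t in range(delay, len(left)):
        # Each half-detector correlates a delayed signal with the
        # neighbouring receptor's current signal; subtracting the
        # mirror-symmetric half yields direction selectivity.
        out.append(left[t - delay] * right[t] - right[t - delay] * left[t])
    return out

# A brightness pulse hitting `left` one step before `right`
# (i.e. moving left-to-right) produces a net positive response.
response = reichardt_emd([0, 1, 0, 0], [0, 0, 1, 0])
```

The response of such a correlator depends on contrast and spatial frequency as well as velocity, which is precisely why Franceschini et al.'s 1986 scheme, designed as a velocity sensor, differs from it.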
Here we present the first systematic comparison between the visual guidance behaviour of a biomimetic robot and that of honeybees flying in similar environments. We built a miniature hovercraft that can travel safely along corridors with various configurations. For the first time, we implemented on a real physical robot the 'lateral optic flow regulation autopilot', which we had previously studied in computer simulations. This autopilot, inspired by the results of experiments on various species of hymenoptera, consists of two intertwined feedback loops, the speed and lateral control loops, each of which has its own optic flow (OF) set-point. A heading-lock system makes the robot move straight ahead as fast as 69 cm·s⁻¹ with a clearance from one wall as small as 31 cm, giving an unusually high translational OF value (125°·s⁻¹). Our biomimetic robot was found to navigate safely along straight, tapered and bent corridors, and to react appropriately to perturbations such as the lack of texture on one wall, the presence of a tapering or non-stationary section of the corridor, and even a sloping terrain equivalent to a wind disturbance. The front end of the visual system consists of only two local motion sensors (LMSs), one on each side. This minimalistic visual system, which measures the lateral OF, suffices to control both the robot's forward speed and its clearance from the walls without ever measuring any speeds or distances. We added two additional LMSs oriented at ±45° to improve the robot's performance in steeply tapered corridors. The simple control system accounts for worker bees' ability to navigate safely in six challenging environments: straight corridors, single walls, tapered corridors, and straight corridors with part of one wall moving or missing, as well as in the presence of wind.
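The dual-loop principle can be sketched as follows. This is a schematic of the idea described in the abstract, not the published controller: the function name `lora_step`, the gains, and the sign conventions are all assumptions. The speed loop holds the sum of the two lateral OFs at its set-point, while the positioning loop regulates the larger of the two OFs, steering the robot away from the wall that generates it.

```python
def lora_step(of_left, of_right, of_set_side, of_set_fwd,
              k_side=0.5, k_fwd=0.5):
    """One step of a dual optic-flow regulator (illustrative sketch).
    Returns (delta_forward_speed, delta_lateral_position),
    with positive lateral meaning a shift toward the right wall."""
    # Speed loop: keep the SUM of both lateral OFs at its set-point;
    # too much total flow -> slow down, too little -> speed up.
    d_speed = k_fwd * (of_set_fwd - (of_left + of_right))
    # Positioning loop: regulate the LARGER of the two OFs; an excess
    # means that wall is too close, so drift away from it.
    if of_left >= of_right:
        d_lateral = k_side * (of_left - of_set_side)    # move right
    else:
        d_lateral = -k_side * (of_right - of_set_side)  # move left
    return d_speed, d_lateral
```

Note that neither loop ever needs a speed or a distance: both feedback signals are pure optic-flow errors, which is the core of the hypothesis being tested on the robot.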
SUMMARY: Flying insects keep their visual system horizontally aligned, suggesting that gaze stabilization is a crucial first step in flight control. Unlike flies, hymenopteran insects such as bees and wasps do not have halteres that provide fast, feed-forward angular-rate information to stabilize head orientation in the presence of body rotations. We tested whether hymenopteran insects use inertial (mechanosensory) information from other sources, such as the wings, to control head orientation, by applying periodic roll perturbations to tethered male Polistes humilis wasps flying under different visual conditions indoors and in natural outdoor conditions. We oscillated the thorax of the insects with frequency-modulated sinusoids (chirps) with frequencies increasing from 0.2 to 2 Hz, at a maximal amplitude of 50° peak-to-peak and a maximal angular velocity of ±245° s⁻¹. We found that head roll stabilization is best outdoors, but completely absent in uniform visual conditions and in darkness. Step responses confirm that compensatory head roll movements are purely visually driven. Modelling the step responses indicates that head roll stabilization is achieved by merging information on head angular velocity, presumably provided by motion-sensitive neurons, with information on head orientation, presumably provided by light-level integration across the compound eyes and/or ocelli (dorsal light response). Body roll in free flight reaches amplitudes of ±40° and angular velocities greater than 1000° s⁻¹, while head orientation remains horizontal to within ±10° for most of the time. In free flight, we did not find a delay between spontaneous body roll and compensatory head movements, which suggests that a feed-forward control contributes to head stabilization. Supplementary material is available online.
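The merging of an angular-velocity cue with an absolute orientation cue is the classic structure of a complementary filter. The sketch below is a generic illustration of that merging scheme, not the fitted model from the paper; the function name, gains, and sampling step are assumptions.

```python
def fuse_head_roll(rate, light_roll, prev_est, dt, alpha=0.98):
    """Complementary filter merging two roll cues:
    - `rate`: head angular velocity (cf. motion-sensitive neurons),
      integrated for fast, high-frequency tracking;
    - `light_roll`: absolute orientation from light-level integration
      (cf. dorsal light response), correcting low-frequency drift.
    Returns the updated head-roll estimate in degrees."""
    return alpha * (prev_est + rate * dt) + (1 - alpha) * light_roll

# With no rate signal, the estimate slowly converges onto the
# orientation cue, as drift correction requires.
est = 0.0
for _ in range(500):
    est = fuse_head_roll(0.0, 10.0, est, dt=0.01)
```

The weight `alpha` sets the crossover frequency between the two cues; the paper's finding that stabilization vanishes in darkness corresponds to both inputs of such a filter being visual.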
In this paper, we present: (i) a novel analog silicon retina featuring auto-adaptive pixels that obey the Michaelis-Menten law, i.e. V = Vm·I^n / (I^n + σ^n); (ii) a method of characterizing silicon retinas which makes it possible to accurately assess the pixels' responses to transient luminous changes in a ±3-decade range, as well as changes in the initial steady-state intensity in a 7-decade range. The novel pixel, called M²APix, which stands for Michaelis-Menten Auto-Adaptive Pixel, can auto-adapt over a 7-decade range and responds appropriately to step changes of up to ±3 decades in size without saturating the Very Large Scale Integration (VLSI) transistors. Thanks to the intrinsic properties of the Michaelis-Menten equation, the pixel output always remains within a constant, limited voltage range. The range of the Analog-to-Digital Converter (ADC) was therefore adjusted so as to obtain a Least Significant Bit (LSB) voltage of 2.35 mV and an effective resolution of about 9 bits. The results presented here show that the M²APix produced a quasi-linear contrast response once it had adapted to the average luminosity. Unlike its biological counterparts, neither the M²APix's sensitivity to changes in light nor its contrast response depends on the mean luminosity (i.e. the ambient lighting conditions). Lastly, a full comparison between the M²APix and the Delbrück auto-adaptive pixel is provided.
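The Michaelis-Menten law above can be made concrete with a short numerical model. This is a behavioural sketch of the equation only (the function name and parameter values are illustrative; the real pixel's adaptation dynamics are analog and time-dependent): letting the half-saturation constant σ track the adaptation intensity bounds the output in [0, Vm] and makes the response depend on contrast rather than on absolute luminosity.

```python
def m2apix_response(intensity, adapt_intensity, n=1.0, v_max=1.0):
    """Michaelis-Menten pixel response V = Vm * I^n / (I^n + sigma^n),
    with sigma set by the adaptation (mean) intensity, so the output
    stays bounded in [0, Vm] over many decades of luminosity."""
    sigma = adapt_intensity  # adaptation sets the half-saturation point
    i_n = intensity ** n
    return v_max * i_n / (i_n + sigma ** n)

# At the adaptation level the output sits at mid-range (Vm/2), and a
# 2x contrast step gives the same response at any mean luminosity.
v_mid = m2apix_response(1.0, 1.0)
v_step_dim = m2apix_response(2.0, 1.0)
v_step_bright = m2apix_response(200.0, 100.0)
```

That luminosity invariance of the contrast response is exactly the property the characterization method in the paper was designed to verify over 7 decades.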