Despite substantial advances in many fields of neurorobotics in general, and biomimetic robots in particular, a key challenge remains the integration of concepts: collating and combining research from disparate and conceptually disjoint areas of neuroscience and engineering. We argue that the development of suitable robotic integration platforms is particularly relevant to making such conceptual integration work in practice. Here, we provide an example of a hexapod robotic integration platform for autonomous locomotion. In a sequence of six focus sections on aspects of intelligent, embodied motor control in insects and multipedal robots (ranging from compliant actuation, distributed proprioception, and the control of multiple legs to the formation of internal representations and the use of an internal body model), we introduce the walking robot HECTOR as a research platform for integrative biomimetics of hexapedal locomotion. Owing to its 18 highly sensorized, compliant actuators, lightweight exoskeleton, distributed and expandable hardware architecture, and accompanying dynamic simulation framework, HECTOR offers many opportunities to integrate research across biomimetics work on actuation, sensory-motor feedback, inter-leg coordination, and cognitive abilities such as motion planning and learning the robot's own body size.
This review article addresses common research questions in hexapod robotics. How can we build intelligent, autonomous hexapod robots that exploit their biomechanics, morphology, and computational systems to achieve autonomy, adaptability, and energy efficiency comparable to small living creatures such as insects? Are insects good models for building such intelligent hexapod robots simply because they are the only animals with six legs? To address these questions, and to help roboticists identify relevant future directions in hexapod robotics over the next decade, the article is organized into three main sections. After an introduction in section (1), the remaining sections cover three key areas: (2) biomechanics, focused on the design of smart legs; (3) locomotion control; and (4) high-level cognitive control. These interconnected and interdependent areas are all crucial to improving hexapod robot performance in terms of energy efficiency, terrain adaptability, autonomy, and operational range. We also discuss how the next generation of bioroboticists can transfer knowledge from biology to robotics and vice versa.
Behavior-based robotics treats perception as a holistic process, strongly connected to the behavioral needs of the robot. We present a bio-inspired sensing-perception-action framework, applied to a roving robot in a random foraging task. Perception is here considered a complex, emergent phenomenon in which a huge amount of sensor information is condensed into an abstract, concise representation of the environment, useful for selecting a suitable action or sequence of actions. In this work, a model of perceptual representation is formalized by means of reaction-diffusion cellular nonlinear networks (RD-CNNs) exhibiting Turing patterns. These patterns serve as attractor states for particular sets of environmental conditions, to which a suitable action is associated via reinforcement learning. Learning is also introduced at the afferent stage to shape the environmental information according to the particular emerging pattern. The basins of attraction of the Turing patterns are thus dynamically tuned by unsupervised learning, forming an internal, abstract, and plastic representation of the environment as recorded by the sensors.
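As a rough illustration of how spatial patterns emerge from reaction-diffusion dynamics of the kind the abstract invokes, the sketch below simulates the Gray-Scott model on a periodic grid. This is a generic stand-in, not the paper's RD-CNN: the model choice, grid size, and all parameter values (`Du`, `Dv`, `F`, `k`) are illustrative assumptions.

```python
import numpy as np

def laplacian(Z):
    # 5-point Laplacian with periodic boundary conditions
    return (np.roll(Z, 1, axis=0) + np.roll(Z, -1, axis=0)
            + np.roll(Z, 1, axis=1) + np.roll(Z, -1, axis=1) - 4 * Z)

def gray_scott(n=64, steps=2000, Du=0.16, Dv=0.08, F=0.035, k=0.065, seed=0):
    """Explicit-Euler Gray-Scott simulation; returns the two chemical fields."""
    rng = np.random.default_rng(seed)
    U = np.ones((n, n))
    V = np.zeros((n, n))
    # perturb a central square so a pattern can nucleate
    r = n // 8
    U[n//2 - r:n//2 + r, n//2 - r:n//2 + r] = 0.50
    V[n//2 - r:n//2 + r, n//2 - r:n//2 + r] = 0.25
    U += 0.02 * rng.random((n, n))
    for _ in range(steps):
        UVV = U * V * V
        U += Du * laplacian(U) - UVV + F * (1 - U)
        V += Dv * laplacian(V) + UVV - (F + k) * V
    return U, V
```

In the paper's framework, a converged pattern of this kind would label a class of sensory contexts; here the simulation only shows that the dynamics settle into a spatially non-uniform state rather than a homogeneous one.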
In this paper, we introduce a network of spiking neurons devoted to navigation control. Three examples, dealing with stimuli of increasing complexity, are investigated. In the first, obstacle avoidance in a simulated robot is achieved through a network of spiking neurons. In the second, a second layer provides the robot with a target-approaching system, enabling it to move towards visual targets. Finally, a network of spiking neurons for navigation based on visual cues is introduced. In all cases, the robot is assumed to have some a priori known responses to low-level sensors (i.e., contact sensors in the case of obstacles, proximity target sensors in the case of visual targets, or the visual target itself for navigation with visual cues). Based on this knowledge, the robot has to learn the response to high-level stimuli (i.e., range-finder sensors or visual input). The biologically plausible paradigm of spike-timing-dependent plasticity (STDP) is included in the network so that the system can learn the high-level responses that guide navigation through a simple unstructured environment. The learning procedure is based on classical conditioning.
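The pair-based STDP rule underlying this kind of conditioning can be sketched as follows. The learning rates, time constants, and the toy pre/post spike timings are illustrative assumptions, not values from the paper: a high-level ("conditioned") spike that reliably precedes the low-level ("unconditioned") response potentiates the synapse, mirroring classical conditioning.

```python
import numpy as np

def stdp_dw(dt, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: weight change for dt = t_post - t_pre (ms).

    Pre-before-post (dt >= 0) potentiates; post-before-pre depresses.
    """
    if dt >= 0:
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)

def condition(trials=100, seed=1):
    """Toy conditioning loop: a range-finder spike precedes the contact response."""
    rng = np.random.default_rng(seed)
    w = 0.0
    for _ in range(trials):
        t_pre = rng.uniform(0.0, 5.0)            # high-level (range-finder) spike
        t_post = t_pre + rng.uniform(2.0, 8.0)   # low-level (contact) response follows
        w = min(max(w + stdp_dw(t_post - t_pre), 0.0), 1.0)  # clip to [0, 1]
    return w
```

Because the stimulus consistently arrives before the response, every trial yields a positive weight update, so after training the high-level input alone can drive the avoidance behavior.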