Beyond its sensing and processing capabilities, a mobile robotic platform can be limited in its use by its ability to move in the environment. Legs, tracks and wheels are all efficient means of ground locomotion, each best suited to different situations. Legs allow the robot to climb over obstacles and to change its height, modifying its viewpoint of the world. Tracks are efficient on uneven terrain or on soft surfaces (snow, mud, etc.), while wheels are optimal on flat surfaces. Our objective is to work on a new concept capable of combining different locomotion mechanisms to increase the locomotion capabilities of the robotic platform. The design we came up with, called AZIMUT, is symmetrical and is made of four independent leg-track-wheel articulations. It can move with its articulations up, down or straight, allowing the robot to deal with three-dimensional environments. AZIMUT is also capable of moving sideways without changing its orientation, making it omnidirectional. By putting sensors on these articulations, the robot can also actively perceive its environment by changing the orientation of its articulations. Designing a robot with such capabilities requires addressing difficult design compromises, with measurable impacts seen only after integrating all of the components together. Modularity at the structural, hardware and embedded software levels, all considered concurrently in an iterative design process, proves to be key in the design of sophisticated mobile robotic platforms.
AZIMUT is a mobile robotic platform that combines wheels, legs and tracks to move in three-dimensional environments. The robot is symmetrical and is made of four independent leg-track-wheel articulations. It can move with its articulations up, down or straight, or move sideways without changing the robot's orientation. To validate the concept, the first prototype developed measures 70.5 cm x 70.5 cm with the articulations up. It has a body clearance of 8.4 cm to 40.6 cm depending on the position of the articulations. The design of the robot is highly modular, with distributed embedded systems to control the different components of the robot (see video).
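The sideways motion described above follows from each articulation being independently steerable. As a minimal sketch (not the robot's actual control code), the standard kinematics for independently steered wheel modules can be written as follows; module positions and the helper name `swerve_commands` are illustrative assumptions:

```python
import math

def swerve_commands(vx, vy, wz, modules):
    """For each wheel module at position (x, y) relative to the body
    centre, the ground-contact velocity is the desired body velocity
    plus the tangential component due to body rotation; the module
    steers to that vector's direction and rolls at its magnitude."""
    cmds = []
    for (x, y) in modules:
        vmx = vx - wz * y  # rotation adds -wz*y to the x component
        vmy = vy + wz * x  # and +wz*x to the y component
        speed = math.hypot(vmx, vmy)
        angle = math.atan2(vmy, vmx)  # steering angle in radians
        cmds.append((angle, speed))
    return cmds

# Pure sideways motion: every module steers to 90 degrees at the same
# speed, so the body translates without changing its orientation.
mods = [(0.35, 0.35), (-0.35, 0.35), (-0.35, -0.35), (0.35, -0.35)]
print(swerve_commands(0.0, 0.5, 0.0, mods))
```

With four such modules, any combination of translation and rotation of the body can be produced, which is what makes the platform omnidirectional.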
Designing robots that interact naturally with people requires the integration of technologies and algorithms for communication modalities such as gestures, movement, facial expressions and user interfaces. To understand interdependence among these modalities, evaluating the integrated design in feasibility studies provides insights about key considerations regarding the robot and potential interaction scenarios, allowing the design to be iteratively refined before larger-scale experiments are planned and conducted. This paper presents three feasibility studies with IRL-1, a new humanoid robot integrating compliant actuators for motion and manipulation along with artificial audition, vision, and facial expressions. These studies explore distinctive capabilities of IRL-1, including the ability to be physically guided by perceiving forces through elastic actuators used for active steering of the omnidirectional platform; the integration of vision, motion and audition for an augmented telepresence interface; and the influence of delays in responding to sounds. In addition to demonstrating how these capabilities can be exploited in human-robot interaction, this paper illustrates intrinsic interrelations between design and evaluation of IRL-1, such as the influence of the contact point in physically guiding the platform, the synchronization between sensory and robot representations in the graphical display, and facial gestures for responsiveness when computationally expensive processes are used. It also outlines ideas regarding more advanced experiments that could be conducted with the platform.
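The physical-guidance capability mentioned above relies on sensing interaction forces through the deflection of series elastic actuators and turning them into motion. A minimal sketch of that idea, under the common series-elastic and admittance-control formulations (the function names, gains and deadband value are illustrative assumptions, not IRL-1's actual controller):

```python
def sea_force(stiffness, motor_angle, joint_angle):
    """Series elastic actuator: the interaction torque is inferred
    from the spring deflection between motor and joint positions."""
    return stiffness * (motor_angle - joint_angle)

def admittance_velocity(force, damping, deadband=0.5):
    """Map a sensed guiding force to a velocity command: forces
    inside the deadband are ignored, otherwise the platform moves
    with the push, scaled by a virtual damping term (v = F / b)."""
    if abs(force) < deadband:
        return 0.0
    return force / damping

# A person pushing on the platform deflects the spring slightly;
# the controller responds by moving in the direction of the push.
f = sea_force(stiffness=100.0, motor_angle=0.21, joint_angle=0.20)
print(admittance_velocity(f, damping=20.0))
```

The deadband keeps sensor noise from producing drift, while the virtual damping sets how compliant the platform feels to the person guiding it.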
The field of robotics has made steady progress in the pursuit of bringing autonomous machines into real-life settings. Over the last 3 years, we have seen omnidirectional humanoid platforms that now bring compliance, robustness and adaptiveness to handle the unconstrained situations of the real world. However, today's contributions mostly address only a portion of the physical, cognitive or evaluative dimensions, which are all interdependent. This paper presents an overview of our attempt to integrate all three dimensions as a whole into a robot named Johnny-0. We present Johnny-0's distinct contributions in simultaneously exploiting compliance at the locomotion level, in grounding reasoning and actions through behaviors, and in considering all relevant factors when experimenting in the wild of the real world.