Abstract—We present a framework for quadrupedal locomotion over highly challenging terrain where the choice of appropriate footholds is crucial for the success of the behaviour. We use a path planning approach which shares many similarities with the results of the DARPA Learning Locomotion challenge and extend it to allow more flexibility and increased robustness. During execution we incorporate an online force-based foothold adaptation mechanism that updates the planned motion according to the perceived state of the environment. This way we exploit the active compliance of our system to smoothly interact with the environment, even when the environment is inaccurately perceived or dynamically changing, and update the planned path on the fly. In tandem we use a virtual model controller that provides the feed-forward torques that allow increased accuracy together with highly compliant behaviour on an otherwise naturally very stiff robotic system. We leverage the full set of benefits that a high-performance torque-controlled quadruped robot can provide and demonstrate the flexibility and robustness of our approach on a set of experimental trials of increasing difficulty.
We present a mobile robot whose goal is to autonomously explore an unknown indoor environment and to build a semantic map containing high-level information similar to that extracted by humans. This information includes the rooms, their connectivity, the objects they contain, and the material of the walls and ground. This robot was developed in order to participate in a French exploration and mapping contest called CAROTTE, whose goal is to produce easily interpretable maps of an unknown environment. In particular we present our object detection approach based on a color+depth camera that fuses 3D, color, and texture information through a neural network for robust object recognition. We also present the material recognition approach based on machine learning applied to vision. We demonstrate the performance of these modules on image databases and provide examples of the full system working in real environments.
Abstract—In this article we present a new approach for object recognition in a robotic underwater context. Color is an attractive feature because of its simplicity and its robustness to changes in scale, object position, and partial occlusion. Unfortunately, in the underwater medium, colors are modified by attenuation and are not constant with distance. To perform color-based recognition of an object, we develop an algorithm robust to this attenuation, which takes into account the modification of the light along its path between the light source and the camera. A given underwater object can therefore be identified in an image by detecting all the colors compatible with its prior known color. Our method is fast, robust, and requires very few computing resources. We successfully used it in sea experiments with a system we built. It is suitable for robotic applications where computing resources are limited and shared between various embedded devices. This concept enables the use of color in many applications such as target interception, object tracking, or obstacle detection.
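The attenuation-aware compatibility test described above can be illustrated with a minimal sketch. This assumes a simple Beer–Lambert exponential decay per color channel; the coefficient values, tolerance, and function names are illustrative assumptions, not taken from the paper.

```python
import math

# Hypothetical per-channel attenuation coefficients (1/m), roughly reflecting
# that red light attenuates much faster in water than green or blue.
ATTENUATION = {"r": 0.45, "g": 0.07, "b": 0.04}

def attenuate(color, distance):
    """Predict the observed color of a surface after its light travels
    `distance` metres through water (Beer-Lambert decay per channel)."""
    return {ch: v * math.exp(-ATTENUATION[ch] * distance)
            for ch, v in color.items()}

def compatible(observed, prior, distance, tol=0.05):
    """Check whether an observed color is compatible with a known prior
    surface color at a given distance, within a per-channel tolerance."""
    expected = attenuate(prior, distance)
    return all(abs(observed[ch] - expected[ch]) <= tol for ch in prior)
```

In this sketch, detection amounts to scanning pixels over a plausible range of distances and keeping those for which `compatible` holds, which matches the abstract's idea of finding "all the colors compatible with its prior known color".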
This paper presents a framework developed to increase the autonomy and versatility of a large (~75 kg) hydraulically actuated quadrupedal robot. It combines onboard perception with two locomotion strategies, a dynamic trot and a static crawl gait, so the robot can perceive its environment and arbitrate between the two behaviours according to the situation at hand. All computations are performed on board, distributed across two computers: one handles the high-level processes while the other is responsible for low-level hard real-time control. The perception, and subsequently the appropriate gait modifications, are performed autonomously. We present outdoor experimental trials in which the robot trots over unknown terrain, perceives a large obstacle, switches to the cautious crawl gait, and steps onto the obstacle. This allows the robot to locomote quickly on relatively flat terrain while retaining the ability to overcome large irregular obstacles when required.
Abstract—Human perception of the external world appears to be a natural, immediate, and effortless task. It is achieved through a number of "low-level" sensory-motor processes that provide a high-level representation suited to complex reasoning and decision making. Compared to these representations, mobile robots usually provide only low-level obstacle maps that lack such high-level information. We present a mobile robot whose goal is to autonomously explore an unknown indoor environment and to build a semantic map containing high-level information similar to that extracted by humans, which can be rapidly and easily interpreted by users to assess the situation. This robot was developed under the Panoramic and Active Camera for Object Mapping (PACOM) project, whose goal is to participate in a French exploration and mapping contest called CAROTTE. We detail in particular how we integrated visual object recognition, room detection, semantic mapping, and exploration. We demonstrate the performance of our system in an indoor environment.