This work presents a series of demonstrations of our self-reconfigurable modular robots (SRMR) "Roombots" in the context of adaptive and assistive furniture. In the literature, simulations are often ahead of what can currently be demonstrated in hardware with such systems, due to significant challenges in transferring them to the real world. Here, we describe how Roombots tackled these difficulties in real hardware, focusing qualitatively on selected hardware experiments rather than on quantitative measurements (in hardware and simulation) to showcase the many possibilities of an SRMR. We envision Roombots being used in our living space and define five key tasks that such a system must be able to perform. We then demonstrate these tasks, including self-reconfiguration with 12 modules (36 degrees of freedom), autonomously moving furniture, object manipulation and gripping, human-module interaction, and the development of an easy-to-use user interface. We conclude with the remaining challenges and point out possible directions of research for the future of adaptive and assistive furniture with Roombots.
3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. These examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers.
This paper aims to show the dynamic performance and reliability of the low-cost, open-access quadruped robot Solo-12, developed within the framework of the Open Dynamic Robot Initiative. It presents the implementation of a state-of-the-art control pipeline, close to the one previously deployed on the Mini Cheetah, in which a model predictive controller based on the centroidal dynamics computes desired contact forces in order to track a reference velocity. Different contributions are proposed to speed up the computation, notably at the level of the state estimation and the whole-body controller. Experimental results demonstrate that the robot closely follows the reference velocity while being highly reactive and able to recover from perturbations.
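The core centroidal idea above can be sketched in a few lines: the desired CoM acceleration fixes the total contact force via Newton's law, and an optimizer splits it among the feet in contact. This is a minimal illustrative sketch, not the paper's controller: the function name is hypothetical, only the linear (force) part is shown, and the MPC's quadratic program with friction-cone constraints is replaced by a plain minimum-norm least-squares split.

```python
import numpy as np

def distribute_contact_forces(mass, a_des, n_feet):
    """Split the net contact force among n_feet feet (linear part only).

    Hypothetical helper: Newton's law gives sum(f_i) = m * (a_des + g);
    the minimum-norm least-squares solution stands in for the QP with
    friction constraints used in a real centroidal MPC.
    """
    g = np.array([0.0, 0.0, 9.81])            # gravity compensation
    f_total = mass * (a_des + g)              # required net contact force
    A = np.hstack([np.eye(3)] * n_feet)       # [I3 I3 ... I3] @ f = f_total
    f, *_ = np.linalg.lstsq(A, f_total, rcond=None)
    return f.reshape(n_feet, 3)               # one 3D force per foot
```

For a robot standing still (`a_des = 0`), each of the four feet receives a quarter of the weight, as expected from the symmetric minimum-norm solution.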
Quadruped robots have proven robust at crossing complex terrain despite little knowledge of the environment. Yet advanced locomotion controllers are expected to take advantage of exteroceptive information. This paper presents a complete method to plan and control the locomotion of quadruped robots when 3D information about the surrounding obstacles is available, based on several decision stages. We first propose a contact planner formulated as a mixed-integer program, optimized online at each new robot step. It selects a surface for the next footsteps from a set of convex surfaces describing the environment while ensuring kinematic constraints. We then propose to optimize the exact contact locations and the feet trajectories at control frequency to avoid obstacles, thanks to an efficient formulation of quadratic programs optimizing Bézier curves. Relying on the locomotion controller of our quadruped robot Solo, we finally implement the complete method, provided as an open-source package. Its efficiency is assessed through a statistical evaluation of the importance of each component in simulation, while the overall performance is demonstrated in various scenarios with the real robot.
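Optimizing feet trajectories as Bézier curves works because the curve is linear in its control points, so constraints on start, end, and clearance become linear constraints in a QP. The sketch below only evaluates such a curve (via de Casteljau's algorithm); in the method above the control points would be the decision variables of the QP, which is not shown here.

```python
import numpy as np

def bezier(ctrl, t):
    """Evaluate a Bezier curve at t in [0, 1] via de Casteljau.

    Illustrative only: in a trajectory-optimization QP the control
    points `ctrl` are decision variables; here they are fixed, e.g. a
    swing-foot arc from one contact to the next with apex clearance.
    """
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive points
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# A fixed quadratic swing arc: start contact, apex, landing contact
swing = [[0.0, 0.0, 0.0], [0.15, 0.0, 0.1], [0.3, 0.0, 0.0]]
```

Because the endpoints of a Bézier curve coincide with its first and last control points, pinning the takeoff and landing contacts is a trivial equality constraint on those points.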
State estimation, in particular estimation of the base position, orientation and velocity, plays a major role in the efficiency of legged robot stabilization. The estimation of the base state is particularly important because of its strong correlation with the underactuated dynamics, i.e. the evolution of the center of mass and angular momentum. Yet this estimation is typically done in two phases: first estimating the base state, then reconstructing the center of mass from the robot model. The underactuated dynamics is thus not properly observed, and any bias in the model would not be corrected from the sensors. While it has already been observed that force measurements make such a bias observable, they are often only used for a binary estimation of the contact state. In this paper, we propose to simultaneously estimate the base and the underactuation state by fusing all measurements. To this end, we propose several contributions toward a complete state estimator based on factor graphs. Contact forces altering the underactuated dynamics are pre-integrated using a novel adaptation of the IMU pre-integration method, which constitutes the principal contribution. IMU pre-integration is also used to measure the positional motion of the base. Encoder measurements then contribute to the estimation in two ways: by providing leg odometry displacements, contributing to the observability of IMU biases; and by relating the positional and centroidal states, thus connecting the whole graph and producing a tightly-coupled whole-body estimator. The validity of the approach is demonstrated on real data captured by the Solo-12 quadruped robot.
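The pre-integration idea central to this estimator can be illustrated with a stripped-down sketch: high-rate measurements between two estimation nodes are compounded into a single relative displacement that one factor can relate to the node states. This is a deliberately simplified, hypothetical version, linear only, with no rotation, bias, or covariance propagation, all of which the actual IMU and force pre-integration methods handle.

```python
import numpy as np

def preintegrate(accels, dt):
    """Compound high-rate acceleration samples into relative deltas.

    Illustrative linear pre-integration (no rotation, no bias):
    returns the position and velocity deltas accumulated between two
    factor-graph nodes, so one factor summarizes many raw samples.
    """
    dv = np.zeros(3)  # accumulated velocity change
    dp = np.zeros(3)  # accumulated position change
    for a in accels:
        dp += dv * dt + 0.5 * a * dt**2   # integrate position first
        dv += a * dt                       # then velocity
    return dp, dv
```

Under constant acceleration the deltas reduce to the familiar closed forms (`dv = a*T`, `dp = 0.5*a*T**2` for total time `T`), which makes the sketch easy to sanity-check.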