An approach for adaptive shared control of an assistive manipulator is presented. A set of distributed collision and proximity sensors is used to help limit collisions during direct control by the disabled user. Artificial neural networks adapt the use of the proximity sensors online, limiting movements in the direction of an obstacle before a collision occurs. The system learns by associating the different proximity sensors with the collision sensors where collisions are detected. This enables the user and the robot to adapt simultaneously and in real time, with the objective of converging on a use of the proximity sensors that increases performance for a given user, robot implementation and task-set. The system was tested in a controlled setting with a simulated 5 DOF assistive manipulator and showed promising reductions in the mean time on simplified manipulation tasks. It extends earlier work by showing that the approach can be applied to full multi-link manipulators.
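The association mechanism described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual network: it assumes a simple Hebbian-style update in which a detected collision strengthens the weights of the proximity sensors that were active just before it, and the weighted proximity activation then scales down subsequent commanded velocities. All class and parameter names are hypothetical.

```python
import numpy as np

class AdaptiveProximityLimiter:
    """Sketch of online sensor-collision association (illustrative only).

    Learns which proximity sensors tend to precede collisions and uses
    their weighted activation to slow the robot near obstacles.
    """

    def __init__(self, n_sensors, lr=0.1):
        self.w = np.zeros(n_sensors)  # per-sensor relevance weights
        self.lr = lr                  # learning rate (assumed value)

    def update(self, proximity, collided):
        # Hebbian-style rule: on collision, strengthen weights of the
        # proximity sensors that were active at that moment.
        if collided:
            self.w = np.clip(self.w + self.lr * proximity, 0.0, 1.0)

    def velocity_scale(self, proximity):
        # Scale commanded velocity down as weighted proximity rises.
        activation = float(np.dot(self.w, proximity))
        return 1.0 / (1.0 + activation)

limiter = AdaptiveProximityLimiter(n_sensors=4)
prox = np.array([0.9, 0.1, 0.0, 0.0])  # sensor 0 faces an obstacle
limiter.update(prox, collided=True)    # a collision is detected
scale = limiter.velocity_scale(prox)   # later commands are slowed
```

With repeated collisions in similar configurations, the weights of the implicated sensors grow, so the assistance increases gradually for that user and task-set.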
Currently, sugar snap peas are harvested manually. In high-cost countries like Norway, such a labour-intensive practice implies particularly large costs for the farmer. Hence, automated alternatives are highly sought after. This project explored a concept for robotic autonomous identification and tracking of sugar snap pea pods. The approach was based on a combination of visible (VIS)-near infrared (NIR) reflection measurements and image analysis, along with visual servoing. A proof-of-concept harvesting platform was implemented by mounting a robotic arm with hand-mounted sensors on a mobile unit. The platform was tested under plastic greenhouse conditions on potted plants of the sugar snap pea variety Cascadia using LED lights and a partial shade. The results showed that it was feasible to differentiate the pods from the surrounding foliage using the light reflection in the spectral range around 970 nm combined with elementary image segmentation and shape modelling methods. The proof-of-concept harvesting platform was tested on 48 representative agricultural environments comprising dense canopy, varying pod sizes, partial occlusions and different working distances. A set of 104 images was analysed during the teleoperation experiment. The true positive detection rate was 93% and 87% for images acquired at long distances and at close distances, respectively. The robot arm achieved a success rate of 54% for autonomous visual servoing to a pre-grasp pose around targeted pods in 22 untouched scenarios. This study shows the potential of developing a prototype robot for semi-automated sugar snap pea harvesting.
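The core idea of separating pods from foliage via reflectance around 970 nm can be illustrated with a simple threshold segmentation. This is a hedged sketch under stated assumptions: the threshold value, the toy image, and the function name are all illustrative and not taken from the study, which combined such segmentation with shape modelling.

```python
import numpy as np

def segment_pods(nir_970, nir_thresh=0.55):
    """Illustrative segmentation: pods reflect more strongly than
    foliage near 970 nm, so thresholding the normalized band image
    yields candidate pod pixels (threshold value is an assumption)."""
    nir = nir_970.astype(float)
    nir = (nir - nir.min()) / (np.ptp(nir) + 1e-9)  # normalize to 0..1
    return nir > nir_thresh

# Toy 4x4 "band image": bright patch (pod) on dark background (foliage)
img = np.array([[10, 10, 10, 10],
                [10, 90, 95, 10],
                [10, 92, 88, 10],
                [10, 10, 10, 10]], dtype=float)
mask = segment_pods(img)  # True over the bright 2x2 patch only
```

A real pipeline would follow the thresholding with connected-component filtering and shape modelling to reject spurious bright regions.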
This paper presents a proof-of-concept platform for demonstrating robotic harvesting of summer varieties of cauliflower, and early tests performed under laboratory conditions. The platform is designed to be modular and has two dexterous robotic arms with variable-stiffness technology. The bi-manual configuration enables the separation of grasping and cutting behaviours into separate robot manipulators. By exploiting the passive compliance of the variable-stiffness arms, the system can operate with both the grasping and cutting tools close to the ground. Multiple 3D vision cameras are used to track the cauliflowers in real time, and to attempt to assess their maturity. Early experiments with the platform in the laboratory highlight the potential and challenges of the platform.
The teleoperation of robot manipulators over the internet suffers from variable delays in the communications. Here we address a tele-assistance scenario, where a remote operator assists a disabled or elderly user with daily life tasks. Our behavioural approach uses local environment information from robot sensing to enable faster execution for a given movement tolerance. This is achieved through a controller that automatically slows the operator down before collisions occur, using a set of distributed proximity sensors. The controller gradually increases the assistance in situations similar to those where collisions have occurred in the past, thus adapting to the given operator, robot and task-set. Two controlled virtual experiments for tele-assistance with a 5 DOF manipulator were performed, with 300 ms and 600 ms mean variable round-trip delays. The results showed significant improvements in the median times of 12.6% and 16.5%, respectively. Improvements in the subjective workload were also seen with the controller. A first implementation on a physical robot manipulator is described.
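The slow-down behaviour described above can be sketched as a simple velocity-scaling law. This is a minimal illustration, not the paper's controller: it assumes a linear ramp that passes the operator's command through at full speed beyond a slow-down distance and scales it to zero at a safety distance, based on the closest proximity reading. The distance parameters are assumptions.

```python
import numpy as np

def assisted_velocity(v_cmd, distances, d_safe=0.05, d_slow=0.30):
    """Illustrative slow-down law (parameters are assumed, in metres):
    scale the operator's commanded velocity linearly from full speed at
    d_slow down to zero at d_safe, using the nearest proximity reading."""
    d = float(np.min(distances))
    if d >= d_slow:
        scale = 1.0          # no obstacle nearby: pass command through
    elif d <= d_safe:
        scale = 0.0          # inside the safety margin: stop
    else:
        scale = (d - d_safe) / (d_slow - d_safe)  # linear ramp
    return scale * np.asarray(v_cmd, dtype=float)

# Obstacle at 0.175 m: command is scaled to half speed
v = assisted_velocity([0.2, 0.0, 0.1], distances=[0.175, 0.5])
```

Because the scaling acts locally at the robot, it remains effective even when the operator's commands arrive with hundreds of milliseconds of variable delay.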
IEEE Robotics & Automation Magazine, p. 137

Much work in robotics aimed at real-world applications falls in the large segment between teleoperated and fully autonomous systems. Such systems are characterized by the close coupling between the human operator and the robot, in principle, allowing the agents to share their particular sensing, adaptation, and decision-making capabilities. Replicable experiments can advance the state of the art of such systems but pose practical and epistemological challenges. For example, the trajectory of the system is governed by the adaptation both in the human and the robot agent. What do we need besides (or instead of) data sets for such a system? The degree of similarity between comparable experiments and the exact meaning of replication need to be clarified. Here, we explore replication of a distributed and adaptive shared control for an assistive robot manipulator. We attempt a methodological approach for reporting two virtual human experiments on the system: modeling the complete human-robot binomial, deriving closed-loop performance metrics from the models, and openly publishing the results and experiment implementations.

Replication and Human-Robot Systems

We may think of theoretical/concept papers, proof-of-concept papers, and experimental papers as steps in a research idea life cycle. We believe that more papers of the experimental kind would greatly help the research activities in robotics and the industrial exploitation of the results. This is