Many tasks in robot-assisted surgeries (RAS) can be represented by finite-state machines (FSMs), where each state represents either an action (such as picking up a needle) or an observation (such as bleeding). A crucial step towards the automation of such surgical tasks is the temporal perception of the current surgical scene, which requires real-time estimation of the states in the FSM. The objective of this work is to estimate the current state of the surgical task based on the actions performed or events that occur as the task progresses. We propose Fusion-KVE, a unified surgical state estimation model that incorporates multiple data sources, including Kinematics, Vision, and system Events. Additionally, we examine the strengths and weaknesses of different state estimation models in segmenting states with different representative features or levels of granularity. We evaluate our model on the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), as well as a more complex dataset involving robotic intra-operative ultrasound (RIOUS) imaging, created using the da Vinci® Xi surgical system. Our model achieves a frame-wise state estimation accuracy of up to 89.4%, improving on state-of-the-art surgical state estimation models on both the JIGSAWS suturing dataset and our RIOUS dataset.
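The FSM view of a surgical task lends itself to a decoding step in which per-frame scores from each data source are fused and the resulting sequence is constrained by the allowed state transitions. The following is a minimal sketch of that general idea, not the Fusion-KVE architecture itself: the state names, transition matrix, fusion weights, and random scores are all hypothetical placeholders.

```python
# Hypothetical sketch: late fusion of per-modality state scores, followed by
# Viterbi decoding under FSM transition constraints. Not the authors' model.
import numpy as np

STATES = ["pick_needle", "position_needle", "push_needle", "pull_suture"]

# FSM transition log-probabilities; disallowed transitions get a vanishing
# probability (log ~ -27), which effectively forbids them during decoding.
A = np.log(np.array([
    [0.8, 0.2, 0.0, 0.0],
    [0.0, 0.8, 0.2, 0.0],
    [0.0, 0.0, 0.8, 0.2],
    [0.2, 0.0, 0.0, 0.8],
]) + 1e-12)

def fuse(kin_scores, vis_scores, evt_scores, w=(0.4, 0.4, 0.2)):
    """Late fusion: weighted average of per-modality state scores (T x S)."""
    return w[0] * kin_scores + w[1] * vis_scores + w[2] * evt_scores

def viterbi(emission_logp, trans_logp):
    """Most likely state sequence given per-frame log-scores and the FSM."""
    T, S = emission_logp.shape
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0] = emission_logp[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + trans_logp  # S x S candidate moves
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + emission_logp[t]
    path = [int(dp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [STATES[s] for s in reversed(path)]

rng = np.random.default_rng(0)
T, S = 50, len(STATES)
scores = fuse(rng.random((T, S)), rng.random((T, S)), rng.random((T, S)))
logp = np.log(scores / scores.sum(axis=1, keepdims=True))
print(viterbi(logp, A)[:10])
```

In practice the per-modality scores would come from learned temporal models rather than random numbers; the point of the sketch is only how fusion and FSM-constrained decoding compose.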
The goal of this study was to analyze the human ability to discriminate external forces while actively moving the arm. With the approach presented here, we give an overview, for the whole arm, of the just-noticeable differences (JNDs) for controlled movements executed separately at the wrist, elbow, and shoulder joints. The work was originally motivated by the design phase of the actuation system of a wearable exoskeleton, used in a teleoperation scenario in which force feedback is provided to the subject. The amount of this force feedback has to be calibrated according to human force discrimination abilities. In the experiments presented here, 10 subjects performed a series of movements against an opposing force from a commercial haptic interface. Force changes had to be detected in a two-alternative forced-choice task. For each of the three joints tested, perceptual thresholds were measured as an absolute threshold (no reference force) and three JNDs corresponding to three chosen reference forces, using the outcome of the QUEST procedure after 70 trials. From these four measurements we computed the Weber fraction. Our results demonstrate that the Weber fraction varies with the joint: 0.11, 0.13, and 0.08 for the wrist, elbow, and shoulder, respectively. We discuss how force perception may be affected by the number of muscles involved and by the reproducibility of the movement itself. The minimum perceivable force, on average, was 0.04 N for all three joints.
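The Weber fraction falls out of four such threshold measurements as the slope of JND versus reference force, with the intercept corresponding to the absolute threshold. A minimal sketch of that fit follows; the reference forces and JND values are hypothetical placeholders, not the study's data.

```python
# Hypothetical sketch of the Weber-fraction computation described above:
# fit JND = k * F_ref + JND_0, where the slope k is the Weber fraction.
import numpy as np

# Reference forces in newtons; F = 0 corresponds to the absolute threshold.
f_ref = np.array([0.0, 1.0, 2.5, 4.0])       # hypothetical values
jnd = np.array([0.04, 0.15, 0.31, 0.47])     # hypothetical values

# Least-squares line through (F_ref, JND); polyfit returns slope first.
k, jnd0 = np.polyfit(f_ref, jnd, deg=1)
print(f"Weber fraction k = {k:.3f}, absolute threshold = {jnd0:.3f} N")
```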
Next-generation industrial plants will feature mobile robots (e.g., autonomous forklifts) moving side by side with humans. In these scenarios, robots must not only maximize efficiency but also mitigate risk. In this paper we study the problem of risk-aware path planning, i.e., computing shortest paths in stochastic environments while ensuring that average risk stays bounded. Our method is based on the framework of constrained Markov decision processes (CMDPs). To counterbalance the intrinsic computational complexity of CMDPs, we propose a hierarchical method that is suboptimal but obtains significant speedups. Simulation results in factory-like environments illustrate how the hierarchical method compares with the non-hierarchical one.
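A standard way to pose such a problem is the occupation-measure linear program for a discounted CMDP, which minimizes expected cost subject to a bound on expected risk. The sketch below illustrates that formulation on a hypothetical three-state corridor; it is an assumption about the underlying framework, not the paper's exact model or its hierarchical method.

```python
# Hypothetical sketch: occupation-measure LP for a discounted CMDP.
# Minimize expected cost subject to expected risk <= budget.
import numpy as np
from scipy.optimize import linprog

S, A, gamma = 3, 2, 0.95            # states {0, 1, goal=2}, actions {safe, fast}
P = np.zeros((S, A, S))             # P[s, a, s'] transition probabilities
for a in range(A):
    P[0, a, 1] = 1.0                # corridor: 0 -> 1 -> 2, goal absorbs
    P[1, a, 2] = 1.0
    P[2, a, 2] = 1.0
cost = np.array([[2.0, 1.0], [2.0, 1.0], [0.0, 0.0]])  # fast is cheaper...
risk = np.array([[0.0, 0.5], [0.0, 0.5], [0.0, 0.0]])  # ...but riskier
mu0 = np.array([1.0, 0.0, 0.0])     # start in state 0
risk_budget = 0.4

# Variables: occupation measure x[s, a] >= 0, flattened to length S*A.
# Flow: sum_a x[s,a] - gamma * sum_{s',a'} P[s',a',s] x[s',a'] = mu0[s].
A_eq = np.zeros((S, S * A))
for s in range(S):
    for sp in range(S):
        for a in range(A):
            A_eq[s, sp * A + a] = (sp == s) - gamma * P[sp, a, s]

res = linprog(c=cost.ravel(),
              A_ub=risk.ravel()[None, :], b_ub=[risk_budget],
              A_eq=A_eq, b_eq=mu0, bounds=(0, None))
x = res.x.reshape(S, A)
# Optimal CMDP policies are generally randomized: normalize per state.
policy = x / np.maximum(x.sum(axis=1, keepdims=True), 1e-12)
print("expected cost:", round(res.fun, 3))
print("randomized policy:\n", policy.round(3))
```

On this toy instance the LP mixes the safe and fast actions just enough to saturate the risk budget, which is the characteristic behavior of constrained MDP solutions.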