Background
Physical interactions between two people are ubiquitous in daily life and are an integral part of many forms of rehabilitation. However, few studies have investigated the forces arising from physical interactions between humans during a cooperative motor task, particularly during overground movements. As such, the direction and magnitude of interaction forces between two human partners, how those forces are used to communicate movement goals, and whether they change with motor experience remain unknown. A better understanding of how cooperative physical interactions are achieved in healthy individuals of different skill levels is a first step toward understanding principles of physical interaction that could be applied to robotic devices for motor assistance and rehabilitation.

Methods
Interaction forces between expert and novice partner dancers were recorded while they performed a forward-backward partnered stepping task with assigned “leader” and “follower” roles. Partner positions were recorded using motion capture. The magnitude and direction of the interaction forces were analyzed and compared across dyad types (i.e. expert-expert, expert-novice, and novice-novice) and across movement phases (i.e. forward, backward, change of direction).

Results
All dyads were able to perform the partnered stepping task with some level of proficiency. Relatively small interaction forces (10–30 N) were observed across all dyads, but were significantly larger among expert-expert dyads. Interaction forces also differed significantly across movement phases. However, interaction force magnitude did not change as whole-body synchronization between partners improved across trials.

Conclusions
Relatively small interaction forces may communicate movement goals (i.e. “what to do and when to do it”) between human partners during cooperative physical interactions. Moreover, these small interaction forces vary with prior motor experience and may act primarily as guiding cues that convey information about movement goals rather than providing physical assistance. This suggests that robots may be able to provide meaningful physical interaction for rehabilitation using relatively small force levels.
Dressing is an important activity of daily living (ADL) with which many people require assistance due to impairments. Robots have the potential to provide dressing assistance, but physical interactions between clothing and the human body can be complex and difficult to visually observe. We provide evidence that data-driven haptic perception can be used to infer relationships between clothing and the human body during robot-assisted dressing. We conducted a carefully controlled experiment with 12 human participants during which a robot pulled a hospital gown along the length of each person's forearm 30 times. This representative task resulted in one of the following three outcomes: the hand missed the opening to the sleeve; the hand or forearm became caught on the sleeve; or the full forearm successfully entered the sleeve. We found that hidden Markov models (HMMs) using only forces measured at the robot's end effector classified these outcomes with high accuracy. The HMMs' performance generalized well to participants (98.61% accuracy) and velocities (98.61% accuracy) outside of the training data. They also performed well when we limited the force applied by the robot (95.8% accuracy with a 2 N threshold), and could predict the outcome early in the process. Although the hospital gown was lightweight, HMMs that used forces in the direction of gravity substantially outperformed those that did not. The best performing HMMs used forces in the direction of motion and the direction of gravity.
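The abstract does not specify an implementation, but the core classification step can be sketched as follows: train one HMM per outcome on force time series and label a new trial by maximum log-likelihood. This is a minimal sketch assuming Python with hmmlearn's GaussianHMM; the function names, state count, and the two-axis force features (direction of motion and direction of gravity, the best-performing combination reported above) are illustrative assumptions rather than details from the paper.

```python
# Sketch: one HMM per dressing outcome, classification by the model
# that assigns the new force sequence the highest log-likelihood.
# Library choice (hmmlearn) and all names are assumptions.
import numpy as np
from hmmlearn.hmm import GaussianHMM

OUTCOMES = ["missed_sleeve", "caught_on_sleeve", "success"]

def train_outcome_models(sequences_by_outcome, n_states=10):
    """sequences_by_outcome: dict mapping outcome -> list of (T_i, 2)
    arrays of forces (direction of motion, direction of gravity)."""
    models = {}
    for outcome, seqs in sequences_by_outcome.items():
        X = np.concatenate(seqs)          # stack sequences row-wise
        lengths = [len(s) for s in seqs]  # per-sequence lengths
        m = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=100)
        m.fit(X, lengths)
        models[outcome] = m
    return models

def classify(models, force_seq):
    """Return the outcome whose HMM best explains the force sequence."""
    return max(models, key=lambda o: models[o].score(force_seq))
```

Early prediction, as described above, would amount to scoring a prefix of the sequence (e.g. `models[o].score(force_seq[:t])`) before the trial completes.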
Online detection of anomalous execution can be valuable for robot manipulation, enabling robots to operate more safely, determine when a behavior is inappropriate, and otherwise exhibit more common sense. By using multiple complementary sensory modalities, robots could potentially detect a wider variety of anomalies, such as anomalous contact or a loud utterance by a human. However, task variability and the potential for false positives make online anomaly detection challenging, especially for long-duration manipulation behaviors. In this paper, we provide evidence for the value of multimodal execution monitoring and the use of a detection threshold that varies based on the progress of execution. Using a data-driven approach, we train an execution monitor that runs in parallel to a manipulation behavior. Like previous methods for anomaly detection, our method trains a hidden Markov model (HMM) using multimodal observations from non-anomalous executions. In contrast to prior work, our system also uses a detection threshold that changes based on the execution progress. We evaluated our approach with haptic, visual, auditory, and kinematic sensing during a variety of manipulation tasks performed by a PR2 robot. The tasks included pushing doors closed, operating switches, and assisting able-bodied participants with eating yogurt. In our evaluations, our anomaly detection method performed substantially better with multimodal monitoring than single modality monitoring. It also resulted in more desirable ROC curves when compared with other detection threshold methods from the literature, obtaining higher true positive rates for comparable false positive rates.
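As a hedged illustration of the key idea, the sketch below flags an anomaly when the running log-likelihood of the multimodal observations under an HMM trained on non-anomalous executions falls below a threshold indexed by execution progress. The progress binning, the mean-minus-c-standard-deviations threshold rule, and all names are assumptions; the paper's exact threshold parameterization is not given in the abstract.

```python
# Sketch: execution monitoring with a progress-dependent threshold.
# `monitor` is assumed to be a trained hmmlearn GaussianHMM over
# multimodal (haptic/visual/auditory/kinematic) feature vectors.
import numpy as np

def fit_thresholds(monitor, train_seqs, n_bins=20, c=3.0):
    """Estimate a per-progress-bin threshold from non-anomalous runs.
    train_seqs: list of (T_i, D) multimodal observation arrays."""
    bins = [[] for _ in range(n_bins)]
    for seq in train_seqs:
        T = len(seq)
        for t in range(1, T + 1):
            b = min(int(n_bins * t / T), n_bins - 1)  # progress bin
            bins[b].append(monitor.score(seq[:t]))    # running log-lik.
    # Threshold: mean minus c standard deviations per bin (assumed rule).
    return np.array([np.mean(b) - c * np.std(b) for b in bins])

def is_anomalous(monitor, obs_so_far, expected_T, thresholds):
    """Flag an anomaly when the running log-likelihood drops below the
    threshold for the current (estimated) execution progress."""
    progress = min(len(obs_so_far) / expected_T, 1.0)
    b = min(int(len(thresholds) * progress), len(thresholds) - 1)
    return monitor.score(obs_so_far) < thresholds[b]
```

A fixed threshold would have to be loose enough to tolerate the least predictable phase of a long behavior; letting the threshold vary with progress, as above, is what allows tighter detection during well-characterized phases.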
In this paper, we demonstrate data-driven inference of mechanical properties of objects using a tactile sensor array (skin) covering a robot's forearm. We focus on the mobility (sliding vs. fixed), compliance (soft vs. hard), and identity of objects in the environment, as this information could be useful for efficient manipulation and search. By using the large surface area of the forearm, a robot could potentially search and map a cluttered volume more efficiently, and be informed by incidental contact during other manipulation tasks. Our approach tracks a contact region on the forearm over time in order to generate time series of select features, such as the maximum force, contact area, and contact motion. We then process and reduce the dimensionality of these time series to generate a feature vector that characterizes the contact. Finally, we use the k-nearest neighbor algorithm (k-NN) to classify a new feature vector based on a set of previously collected feature vectors. Our results show high cross-validation accuracy for both mechanical property classification and object recognition. In addition, we analyze the effects of taxel resolution, duration of observation, feature selection, and feature scaling on classification accuracy.
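The classification stage described above maps naturally onto a standard pipeline: feature scaling, dimensionality reduction, then k-NN. The sketch below assumes Python with scikit-learn as a stand-in for the paper's own implementation; the file names, number of components, and choice of k are hypothetical.

```python
# Sketch: scale per-contact feature vectors, reduce dimensionality,
# and classify with k-NN, evaluated by cross-validation as in the paper.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# X: (n_contacts, n_features) vectors derived from the forearm taxel
# time series (max force, contact area, contact motion); y: labels such
# as "soft-fixed" / "hard-sliding" or object identities. File names are
# hypothetical placeholders.
X = np.load("contact_features.npy")
y = np.load("contact_labels.npy")

clf = Pipeline([
    ("scale", StandardScaler()),       # feature scaling (analyzed above)
    ("pca", PCA(n_components=10)),     # dimensionality reduction
    ("knn", KNeighborsClassifier(n_neighbors=3)),
])

# Mean cross-validation accuracy, mirroring the paper's evaluation style.
print(cross_val_score(clf, X, y, cv=5).mean())
```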