Haptic interaction between two humans, for example, a physiotherapist assisting a patient regaining the ability to grasp a cup, likely facilitates motor skill acquisition. Haptic human–human interaction has been shown to enhance individual performance improvement in a tracking task with a visuomotor rotation perturbation. These results are remarkable given that haptically assisting or guiding an individual rarely benefits their individual improvement when the assistance is removed. We, therefore, replicated a study that reported that haptic interaction between humans was beneficial for individual improvement for tracking a target in a visuomotor rotation perturbation. In addition, we tested the effect of more interaction time and a stronger haptic coupling between the partners on individual improvement in the same task. We found no benefits of haptic interaction on individual improvement compared to individuals who practised the task alone, independent of interaction time or interaction strength.
Parents can effortlessly assist their child to walk, but the mechanism behind such physical coordination is still unknown. Studies have suggested that physical coordination is achieved by interacting humans who update their movement or motion plan in response to the partner’s behaviour. Here, we tested rigidly coupled pairs in a joint reaching task to observe such changes in the partners’ motion plans. However, the joint reaching movements were surprisingly consistent across different trials. A computational model that we developed demonstrated that each partner followed a distinct motion plan, which did not change over time. These results suggest that rigidly coupled pairs accomplish joint reaching movements by relying on a pre-programmed motion plan that is independent of the partner’s behaviour.
Humans have a natural ability to haptically interact with other humans, for instance when physically assisting a child learning to ride a bicycle. A recent study has shown that haptic human–human interaction can improve individual motor performance and motor learning rate while learning to track a continuously moving target with a visuomotor rotation. In this work we investigated whether these benefits of haptic interaction on motor learning generalize to a task in which the interacting partners track a target while they learn novel dynamics, represented by a force field. Pairs performed the tracking task and were intermittently connected to each other through a virtual spring. Motor learning was assessed by comparing each partner's individual performance during trials in which they were not connected to the performance of participants who learned the task alone. We found that haptic interaction through a compliant spring does not lead to improved individual motor performance or an increased motor learning rate. However, performance during interaction was significantly better than when the partners were not interacting, even when connected to a worse partner.
A driving simulation study assessed the impact of vocally entering an alphanumeric destination into Google Glass relative to voice- and touch-entry methods using a handheld Samsung Galaxy S4 smartphone. Driving performance (standard deviation of lateral lane position and longitudinal velocity) and reaction to a light detection response task (DRT) were recorded for a gender-balanced sample of 24 young adult drivers. Task completion time and subjective workload ratings were also measured. Destination entry with Google Glass produced a statistically higher DRT miss rate than the Samsung Galaxy S4 voice interface, but took less time to complete, and participants gave the two methods comparable workload ratings. In agreement with previous work, both voice interfaces performed significantly better than touch entry; this was seen in workload ratings, task duration, lateral lane control, and DRT metrics. Finally, irrespective of device or modality, destination entry significantly decreased responsiveness to events in the forward scene (as measured by DRT reaction time) compared to baseline driving.
Robotic assistive devices show potential to aid hand function using surface electromyography (sEMG) as a control signal. Current implementations of these robotic systems typically do not include interaction with the environment, which naturally occurs during functional tasks. Further, many applications require experts to place the sEMG sensors on specific muscles, a precision of alignment that non-experts may not be able to achieve. This study informs algorithm development for controlling assistive devices for grasping and releasing objects using kinematics and non-specifically placed sEMG sensors. Significant effects of object type were found in grip aperture and joint kinematics. Muscle activity was significantly affected by small changes in sensor placement, yet the features analyzed revealed anticipatory mechanisms prior to grasp and release. Appropriately accounting for placement variability within a control architecture would allow the kinematic and sEMG features to be combined to identify object type and anticipate grasp and release.
With the increased use of Unmanned Aerial Vehicles (UAVs), it is envisioned that UAV operators will become high-level mission supervisors, responsible for information management and task planning. In the context of search missions, operators supervising a large number of UAVs can become overwhelmed by the sheer amount of information the UAVs collect, making it difficult to optimize information collection or direct their attention to the relevant data. Novel decision-support methods that account for realistic operator performance will therefore be required to aid the operators. This paper considers a decision-support formulation for sequential search tasks, and discusses a non-preemptive scheduling formulation for a single operator performing a search mission in a time-constrained environment. The formulation is then generalized to include operator performance models obtained from previous human-in-the-loop experiments, which constitutes one of the principal contributions of the paper. The sensitivity of the proposed model to uncertainty in the operator model and the search times is analyzed, and the expected performance of the scheduling system is compared with that of a greedy scheduling strategy representative of operator planning. The paper concludes with the design of a human-in-the-loop experiment for a scheduling and replanning task in a simulated UAV mission.
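The paper's scheduling formulation is not reproduced in the abstract, but the greedy baseline it compares against can be sketched in a few lines. The following toy example is an assumption-laden illustration, not the authors' model: it assumes each search task has a fixed review duration and an expected information value, and that a greedy operator repeatedly picks the highest value-per-unit-time task that still fits in the remaining mission horizon.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration: float   # hypothetical operator time needed to review this task
    value: float      # hypothetical expected information value if completed

def greedy_schedule(tasks, horizon):
    """Non-preemptive greedy baseline: repeatedly pick the feasible task
    with the highest value-per-unit-time until no task fits the horizon."""
    remaining = list(tasks)
    schedule, t = [], 0.0
    while remaining:
        feasible = [x for x in remaining if t + x.duration <= horizon]
        if not feasible:
            break
        best = max(feasible, key=lambda x: x.value / x.duration)
        schedule.append(best.name)
        t += best.duration
        remaining.remove(best)
    return schedule, t

tasks = [Task("A", 3, 9), Task("B", 2, 8), Task("C", 4, 6)]
print(greedy_schedule(tasks, horizon=6))  # → (['B', 'A'], 5.0)
```

Such a myopic strategy can be arbitrarily far from optimal when long, high-value tasks are crowded out by quick low-value ones, which is why the paper contrasts it with a principled scheduling formulation.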
How can humans remain in control of artificial intelligence (AI)-based systems designed to perform tasks autonomously? Such systems are increasingly ubiquitous, creating benefits - but also undesirable situations where moral responsibility for their actions cannot be properly attributed to any particular person or group. The concept of meaningful human control has been proposed to address such responsibility gaps and mitigate them by establishing conditions that enable a proper attribution of responsibility to humans; however, clear requirements for researchers, designers, and engineers do not yet exist, making the development of AI-based systems that remain under meaningful human control challenging. In this paper, we address the gap between philosophical theory and engineering practice by identifying, through an iterative process of abductive thinking, four actionable properties for AI-based systems under meaningful human control, which we discuss using two application scenarios: automated vehicles and AI-based hiring. First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations within which the system ought to operate. Second, humans and AI agents within the system should have appropriate and mutually compatible representations. Third, responsibility attributed to a human should be commensurate with that human’s ability and authority to control the system. Fourth, there should be explicit links between the actions of the AI agents and the actions of humans who are aware of their moral responsibility. We argue that these four properties will help practically minded professionals take concrete steps toward designing and engineering AI systems that facilitate meaningful human control.