External Human-Machine Interfaces (eHMIs) are expected to bridge the communication gap between an automated vehicle (AV) and pedestrians, replacing the missing driver-pedestrian interaction. However, the relative impact on pedestrians of movement-based implicit communication versus explicit communication via eHMIs has not been empirically evaluated. In this study, we pit messages from an eHMI against different driving behaviors of an AV that yields to a pedestrian to understand whether pedestrians pay more attention to the motion dynamics of the car or to the eHMI when making road-crossing decisions. Our contributions are twofold: we investigate (1) whether the presence of an eHMI has any objective effect on pedestrians’ understanding of the vehicle’s intent, and (2) how the movement dynamics of the vehicle affect the perception of the vehicle’s intent and interact with the impact of an eHMI. Results show that (1) eHMIs help convince pedestrians of the vehicle’s yielding intention, particularly when the vehicle’s speed is slow enough not to be an obvious threat but still fast enough to raise doubt about its stopping intention, and (2) pedestrians do not blindly trust the eHMI: when the eHMI message and the vehicle’s movement pattern contradict each other, pedestrians fall back on movement-based cues. Our results imply that when explicit communication (eHMI) and implicit communication (motion dynamics and kinematics) are aligned and work in tandem, the AV’s yielding intention is communicated most effectively. This insight can inform the design of optimal interaction between AVs and pedestrians from a user-centered design perspective when driver-centric communication is not available.
Interactive workspaces combine horizontal and vertical touch surfaces into a single digital workspace. Prior explorations of these systems have shown that direct interaction on the vertical surface is cumbersome and less accurate than on the horizontal one. To overcome these problems, indirect touch systems turn the horizontal touch surface into an input device that allows manipulation of objects on the vertical display. If the horizontal touch surface also acts as a display, however, it becomes necessary to distinguish which screen is currently in use by providing a mode switch. We investigate the use of gaze tracking to perform these mode switches. In three user studies we compare absolute and relative gaze-augmented selection techniques with the traditional direct-touch approach. Our results show that our relative gaze-augmented selection technique outperforms the other techniques for simple tapping tasks alternating between horizontal and vertical surfaces, and for dragging on the vertical surface. However, when tasks involve dragging across surfaces, the findings are more nuanced. We provide a detailed description of the proposed interaction techniques, a statistical analysis of these techniques, and a discussion of how they can be applied to systems that combine multiple horizontal and vertical touch surfaces.