Abstract: Human gait, as a soft biometric, helps to recognize people by the way they walk. To further improve recognition performance, we propose a novel video sensor-based gait representation, DeepGait, built from deep convolutional features, and apply Joint Bayesian modeling to handle view variance. DeepGait is generated using a pre-trained "very deep" network, VGG-D, without any fine-tuning. In the non-view setting, DeepGait outperforms hand-crafted representations (e.g., Gait Energy Image, Frequency-Domain Feature, and Gait Flow Image). Furthermore, in the cross-view setting, 256-dimensional DeepGait obtained via PCA significantly outperforms state-of-the-art methods on the OU-ISIR Large Population (OULP) dataset. With 4,007 subjects, the OULP dataset makes our results statistically reliable.
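The dimensionality-reduction step described above (projecting high-dimensional deep features down to 256 dimensions with PCA) can be sketched as follows. This is a minimal illustration, not the paper's implementation: random vectors stand in for the VGG-D convolutional features, and the feature dimensionality and sample count are hypothetical.

```python
import numpy as np

# Stand-in for DeepGait descriptors: in the paper these are convolutional
# features from a pre-trained VGG-D network; here random vectors of an
# assumed dimensionality (4096) illustrate only the PCA projection step.
rng = np.random.default_rng(0)
features = rng.normal(size=(300, 4096))  # 300 gait samples, 4096-d features

def pca_reduce(X, n_components=256):
    """Project feature vectors onto their top principal components."""
    X_centered = X - X.mean(axis=0)
    # SVD of the centered data matrix yields the principal axes as rows of Vt.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:n_components].T

reduced = pca_reduce(features, n_components=256)
print(reduced.shape)  # (300, 256)
```

In practice the features would come from a forward pass of real gait silhouettes through the pre-trained network, and the resulting 256-dimensional vectors would then be compared under the Joint Bayesian model.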
We present a new technique for human-robot interaction called robot expressionism through cartooning. We suggest that robots utilise cartoon-art techniques such as simplified and exaggerated facial expressions, stylised text, and icons for intuitive social interaction with humans. We discuss practical mixed-reality solutions that allow robots to augment themselves or their surroundings with cartoon-art content. Our effort is part of what we call robot expressionism, a conceptual approach to the design and analysis of robotic interfaces that focuses on providing intuitive insight into a robot's state as well as artistic quality of interaction. Our paper discusses a variety of ways that allow robots to express cartoon art, and details a test bed design, implementation, and preliminary evaluation. We describe our test bed, Jeeves, which uses a Roomba, an iRobot vacuum-cleaning robot, together with a mixed-reality system as a platform for rapid prototyping of cartoon-art interfaces. Finally, we present a set of interaction content scenarios which use the Jeeves prototype: trash roomba, the recycle police, and clean tracks, as well as an initial user evaluation of our approach.
This paper presents an experimental test bed for exploring and evaluating human-robot interaction (HRI). Our system is designed around the concept of playing board games involving collaboration between humans and robots in a shared physical environment. Unlike the classic human-versus-machine situation often established in computer-based board games, our test bed takes advantage of the rich interaction opportunities that arise when humans and robots play collaboratively as a team. To facilitate interaction within a shared physical environment, our game is played on a large checkerboard where human and robotic players can be situated and play as game pieces. With meaningful interaction occurring within this controlled setup, various aspects of human-robot interaction can be easily explored and evaluated, such as interaction methods and robot behaviour. In this paper we present our test bed, which uses a telepresence interface for playing the game, and the results of a user study demonstrating the sensitivity of our system in assessing the effect of different robot behaviours on users.