In this paper, we propose a novel computer-vision technique to measure respiration rate in real time by tracking periodic thoracoabdominal motion with an inexpensive consumer-grade camera. We compute the component of optical flow parallel to the image gradient at each pixel, which is a computationally inexpensive operation. We then gather this information over many frames to estimate a principal flow field, and in each frame compute the component of flow along this principal field to capture the thoracoabdominal motion. Our method is simple, easy to implement, requires no specialized hardware, and is efficient enough to run on mobile devices. We demonstrate its efficacy on real-world datasets and compare the results with those obtained using impedance pneumography.
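The pipeline described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the paper's implementation: the function names, the SVD-based estimate of the principal flow field, and the epsilon guard against zero gradients are all assumptions.

```python
import numpy as np

def normal_flow(prev, curr):
    """Component of optical flow parallel to the image gradient (normal flow).

    Under brightness constancy, I_x*u + I_y*v + I_t = 0, so the flow
    component along the gradient is -I_t * grad(I) / |grad(I)|^2.
    """
    Iy, Ix = np.gradient(prev.astype(float))
    It = curr.astype(float) - prev.astype(float)
    mag2 = Ix**2 + Iy**2 + 1e-8  # guard against zero gradient (assumption)
    return -It * Ix / mag2, -It * Iy / mag2

def principal_flow(flows):
    """Gather per-pixel flow over many frames; take the first principal
    component (leading right singular vector) as the principal flow field."""
    X = np.stack([np.concatenate([u.ravel(), v.ravel()]) for u, v in flows])
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[0]

def respiration_signal(flows, pf):
    """Project each frame's flow onto the principal field, yielding a 1-D
    signal whose periodicity reflects the thoracoabdominal motion."""
    return np.array([np.concatenate([u.ravel(), v.ravel()]) @ pf
                     for u, v in flows])
```

The respiration rate would then be read off from the dominant frequency of the returned signal (e.g., via an FFT peak), a step omitted here for brevity.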
Human demonstrations are important in a range of robotics applications and are created with a variety of input methods, yet the design space for these input methods has not been extensively studied. In this paper, focusing on demonstrations of hand-scale object-manipulation tasks to robot arms with two-finger grippers, we identify distinct usage paradigms in robotics that use human-to-robot demonstrations, extract abstract features that form a design space for input methods, and characterize existing input methods along with a novel one that we introduce, the instrumented tongs. We detail the design specifications for our method and present a user study comparing it against three common input methods: free-hand manipulation, kinesthetic guidance, and teleoperation. The results show that the instrumented tongs provide high-quality demonstrations and a positive experience for the demonstrator while offering good correspondence to the target robot.
Figure 1: In robot teleoperation, we propose that a conflict between information from an operator's proprioceptive and visual senses (how much their hand moves versus how much the robot moves) is an effective cue for communicating the weight of objects in a remote environment. A. The robot movement and operator hand movement are approximately the same when the object is light. B. The operator has to move their hand by a greater distance when the object is heavy, resulting in a visuo-proprioceptive weight cue. C. We demonstrate the feasibility of using such a cue in four tasks to enhance user performance and experience.
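One simple way to realize the cue in the caption is to scale the hand-to-robot motion gain down as object mass grows, so heavier objects require larger hand movements for the same robot displacement. The mapping below is purely illustrative (the function names, the specific gain formula, and its parameters are assumptions, not taken from the paper).

```python
def control_gain(mass_kg, base_gain=1.0, sensitivity=0.5, min_gain=0.2):
    """Illustrative weight-to-gain mapping: gain falls monotonically with
    mass, clamped so the robot never becomes unresponsive."""
    return max(min_gain, base_gain / (1.0 + sensitivity * mass_kg))

def robot_displacement(hand_delta_m, mass_kg):
    """Robot motion produced by a given hand motion under the weight cue.

    For a light object (mass near 0) the robot mirrors the hand roughly 1:1;
    for a heavy object the same hand motion yields less robot motion, which
    the operator perceives as a visuo-proprioceptive weight cue.
    """
    return control_gain(mass_kg) * hand_delta_m
```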