Video projectors have typically been used to display images on surfaces whose geometric relationship to the projector remains constant, such as walls or pre-calibrated surfaces. In this paper, we present a technique for projecting content onto moveable surfaces that adapts to the motion and location of the surface to simulate an active display. This is accomplished using a projector-based location tracking technique. We use light sensors embedded in the moveable surface and project low-perceptibility Gray-coded patterns to first discover the sensor locations, and then incrementally track them at interactive rates. We describe how to reduce the perceptibility of tracking patterns, achieve interactive tracking rates, use motion modeling to improve tracking performance, and respond to sensor occlusions. A group of tracked sensors can define quadrangles for simulating moveable displays, while single sensors can be used as control inputs. By unifying the tracking and display technology into a single mechanism, we can substantially reduce the cost and complexity of implementing applications that combine motion tracking and projected imagery.
ACM Symposium on User Interface Software & Technology (UIST)
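The localization step named in the abstract is Gray-code structured light: the projector shows one bit-plane per frame, each embedded sensor records whether it was lit, and the resulting bit sequence decodes to the sensor's projector-pixel coordinate. The following sketch is our own illustration, not the paper's code; the 1024-pixel projector width, the pure-Python simulation of the sensor reading, and all names are assumptions. It shows the x-axis patterns; a second pattern set over rows recovers y the same way.

# Illustrative sketch of Gray-code sensor localization (assumed
# 1024-pixel-wide projector; sensor reports one lit/unlit bit per frame).

def gray_patterns(width):
    """Yield one bit-plane per frame: pattern[x] is True when column x
    is lit. Frame i carries bit (bits-1-i) of Gray(x) = x ^ (x >> 1),
    most significant bit first."""
    bits = max(1, (width - 1).bit_length())
    for i in range(bits):
        yield [bool(((x ^ (x >> 1)) >> (bits - 1 - i)) & 1)
               for x in range(width)]

def gray_to_binary(observed_bits):
    """Decode the sensor's MSB-first Gray-code reading to a column
    index, using b[k] = g[0] xor g[1] xor ... xor g[k]."""
    acc, value = 0, 0
    for g in observed_bits:
        acc ^= int(g)
        value = (value << 1) | acc
    return value

if __name__ == "__main__":
    WIDTH = 1024
    sensor_x = 317                    # ground truth, unknown to the system
    # Simulate what the sensor would report for each projected frame.
    readings = [frame[sensor_x] for frame in gray_patterns(WIDTH)]
    print(gray_to_binary(readings))   # prints 317

Gray codes are preferred over plain binary here because adjacent columns differ in only one bit, so a sensor sitting on a pattern boundary can be off by at most one pixel rather than decoding to an arbitrary location.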
As a field, computer science faces a problem. From 2000 to 2004, the percentage of first-year undergraduates planning to major in CS declined by more than 60 percent (see the "Declining Interest in Computer Science" sidebar).1 To attract more students, the introductory CS curriculum must be motivating and relevant. CS courses set in a motivating context (for example, using multimedia, gaming, or robotics) can excite students and get them hooked. Other researchers have worked on introductory programming classes with robots as well as introduction-to-robotics classes (http://myro.roboteducation.org/robobiblio). We didn't want to create a robotics course but rather an introductory CS course based on robots. Introduced properly, robots make visible and tangible those aspects of CS that are often hidden behind computer screens and in computer memory. To further this goal, we formed the Institute for Personal Robots in Education (IPRE), a joint effort between Georgia Tech and Bryn Mawr College sponsored by Microsoft Research (www.roboteducation.org). This article discusses the first-year results of a three-year project.
With the ubiquity of camera phones, it is now possible to capture digital still and moving images anywhere, raising a legitimate concern for many organizations and individuals. Although legal and social boundaries can curb the capture of sensitive information, it is sometimes neither practical nor desirable to confiscate the capture device from an individual. We present the design and proof-of-concept implementation of a capture-resistant environment that prevents the recording of still and moving images without requiring any cooperation on the part of the capturing device or its operator. Our solution involves a tracking system that uses computer vision to locate any number of retro-reflective CCD or CMOS camera sensors in a protected area. A pulsing light is then directed at each lens, distorting any imagery the camera records. Although the directed light interferes with the camera's operation, it can be designed to minimally impact the sight of other humans in the environment.
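A camera sensor behind a lens retro-reflects light back toward its source, so a tracking camera paired with a co-located illuminator sees it as a small bright blob that appears only in illuminated frames. The sketch below is our own illustration of that detection step, not the paper's implementation; the on/off frame-differencing scheme, the threshold value, and the function names are assumptions.

# Illustrative sketch: locate retro-reflective camera lenses by
# differencing a frame captured with a co-located illuminator on
# against one captured with it off.
import cv2

def find_retroreflections(lit_frame, dark_frame, threshold=60):
    """Return (x, y) centroids of blobs that brighten only when the
    illuminator is on -- candidate lenses to aim the pulsing light at."""
    diff = cv2.absdiff(lit_frame, dark_frame)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)   # merge pixel specks
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:                          # skip degenerate blobs
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers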