Abstract. In this paper, we introduce the notion of a programmable imaging system. Such an imaging system provides a human user or a vision system significant control over the radiometric and geometric characteristics of the system. This flexibility is achieved using a programmable array of micro-mirrors. The orientations of the mirrors of the array can be controlled with high precision over space and time. This enables the system to select and modulate rays from the scene's light field based on the needs of the application at hand. We have implemented a programmable imaging system that uses a digital micro-mirror device (DMD), which is used in digital light processing. Although the mirrors of this device can only be positioned in one of two states, we show that our system can be used to implement a wide variety of imaging functions, including high dynamic range imaging, feature detection, and object recognition. We also describe how a micro-mirror array that allows full control over the orientations of its mirrors can be used to instantly change the field of view and resolution characteristics of the imaging system. We conclude with a discussion on the implications of programmable imaging for computer vision.

A Flexible Approach to Imaging

In the past few decades, a wide variety of novel imaging systems have been proposed that have fundamentally changed the notion of a camera. These include high dynamic range, multispectral, omnidirectional, and multiviewpoint imaging systems. The hardware and software of each of these devices are designed to accomplish a particular imaging function, and this function cannot be altered without significant redesign.

In this paper, we introduce the notion of a programmable imaging system. Such a system gives a human user or a computer vision system significant control over the radiometric and geometric properties of the system. This flexibility is achieved by using a programmable array of micro-mirrors. The orientations of the mirrors of the array can be controlled at very high speed. This enables the system to select and modulate scene rays based on the needs of the application at hand.
The end result is a single imaging system that can emulate the functionalities of several existing specialized systems as well as new ones.

The basic principle behind the proposed approach is illustrated in Figure 1. The system observes the scene via a two-dimensional array of micro-mirrors whose orientations can be controlled. The surface normal n_i of the i-th mirror determines the direction of the scene ray it reflects into the imaging system. If the normals of the mirrors can be chosen arbitrarily, each mirror can be programmed to select from a continuous cone of scene rays. In addition, each mirror can also be oriented with normal n_b such that it reflects a black surface (with zero radiance). Let the integration time of the image detector be T. If the mirror is made to point in the directions n_i and n_b for durations t and T − t, respectively, the scene ray is attenuated by t/T. As a result, each imaged scene ray can also be radiometr...

* This work was done at the Columbia Center for Vision and Graphics. It was supported by an ONR contract (N00014-03-1-0023).
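The temporal attenuation described above can be sketched in a few lines. The following is a minimal simulation, not the paper's implementation: the function names, the discretization of the integration time T into equal slots, and the numeric parameters are all illustrative assumptions. It shows how a two-state mirror, by spending time t at the scene orientation n_i and T − t at the black orientation n_b, scales the collected energy by t/T.

```python
def modulation_schedule(attenuation, n_slots):
    """Boolean schedule over the integration time: True means the mirror
    points at the scene (normal n_i), False means it points at the black
    surface (normal n_b). The fraction of True slots approximates t / T."""
    n_scene = round(n_slots * attenuation)  # slots spent viewing the scene
    return [True] * n_scene + [False] * (n_slots - n_scene)

def imaged_intensity(scene_radiance, attenuation, T=1.0, n_slots=100):
    """Energy collected by the detector over integration time T when the
    mirror follows the schedule above; the black surface contributes zero."""
    schedule = modulation_schedule(attenuation, n_slots)
    dt = T / n_slots                      # duration of one schedule slot
    return scene_radiance * sum(schedule) * dt

# A ray attenuated by t/T = 0.25 contributes a quarter of its energy:
print(imaged_intensity(scene_radiance=8.0, attenuation=0.25))  # prints 2.0
```

With a binary device such as a DMD, finer attenuation levels are obtained by increasing the number of flip slots within one detector integration period, which is the same pulse-width-modulation idea used here.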