Sensors, processors, and radios can be integrated invisibly into objects to make them smart and sensitive to user interaction, but feedback is often limited to beeps, blinks, or buzzes. We propose to redress this input-output imbalance by augmenting smart objects with projected displays that, unlike physical displays, allow seamless integration with the natural appearance of an object. In this article, we investigate how, in a ubiquitous computing world, smart objects can acquire and control a projection. We assume that projectors and cameras are ubiquitous in the environment, and we develop a novel concept and system that enables smart objects to spontaneously associate with projector-camera systems for cooperative augmentation. Projector-camera systems are conceived as generic, supporting standard computer vision methods for different appearance cues, while smart objects provide a model of their appearance for method selection at runtime, as well as sensor observations to constrain the visual detection process. Cooperative detection yields an accurate location and pose of the object, which is then tracked for visual augmentation in response to display requests by the smart object. In this article, we define the conceptual framework underlying our approach; report on computer vision experiments that give original insight into natural appearance-based detection of everyday objects; show how object sensing can be used to increase the speed and robustness of visual detection; describe and evaluate a fully implemented system; and describe two smart object applications that illustrate the system's cooperative augmentation process and the embodied interactions it enables with smart objects.
ACM Reference Format: Molyneaux, D., Gellersen, H., and Finney, J. 2013. Cooperative augmentation of mobile smart objects with projected displays.