2005
DOI: 10.1007/11431879_15

Spatial Control of Interactive Surfaces in an Augmented Environment

Abstract: New display technologies will enable designers to use every surface as a support for interaction with information technology. In this article, we describe techniques and tools for enabling efficient man-machine interaction in computer-augmented multi-surface environments. We focus on explicit interaction, in which the user decides when and where to interact with the system. We present three interaction techniques using simple actuators: fingers, a laser pointer, and a rectangular piece of cardboard. …

Cited by 13 publications (12 citation statements)
References 19 publications (17 reference statements)
“…In [3] we presented an appearance-based implementation of touch sensitive projected buttons which we called "Sensitive widgets". The presence of an object over a button on the interaction surface is detected by observing the change of perceived luminance over the button center area with respect to a reference area.…”
Section: Simple Pattern Occlusion Detectors
confidence: 99%
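The luminance test described in the excerpt above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the region coordinates, the threshold value, and the function name are all assumptions made for the example.

```python
import numpy as np

def button_occluded(frame, button_box, reference_box, threshold=0.7):
    """Appearance-based occlusion test in the spirit of 'Sensitive widgets'.

    frame         -- 2-D array of grey-level pixel values from the camera
    button_box    -- (row0, row1, col0, col1) of the button centre area
    reference_box -- (row0, row1, col0, col1) of the reference area
    threshold     -- hypothetical ratio below which the button counts as
                     occluded (a tuning value, not taken from the paper)
    """
    r0, r1, c0, c1 = button_box
    button_mean = frame[r0:r1, c0:c1].mean()
    r0, r1, c0, c1 = reference_box
    reference_mean = frame[r0:r1, c0:c1].mean()
    # A finger over the projected button darkens its centre area
    # relative to the unobstructed reference area.
    return bool(button_mean < threshold * reference_mean)
```

Comparing against a reference area rather than an absolute level makes the test robust to global lighting changes, since both regions darken together when the room lights dim.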
“…Because the calibration service needs information from both the camera and the application, two approaches are possible: (a) If it controls the graphical output, it can work without interaction with the application. This is the case in the PDS example [3], where the interactive surface itself is tracked by the service. (b) In the general case, it must negotiate with the client application the display of a calibration grid [7].…”
Section: Support Services
confidence: 99%
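Approach (b) above, displaying a calibration grid and recovering the camera-to-display mapping from it, typically reduces to estimating a planar homography from grid point correspondences. The following is a sketch of the standard Direct Linear Transform; the function names and example points are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: estimate the 3x3 homography H mapping src points to dst points.

    src, dst -- (N, 2) arrays of corresponding grid points, N >= 4.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # The solution is the right singular vector for the smallest
    # singular value of A (the null space of A for exact data).
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map (N, 2) points through H using homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

Once H is known, the service can translate any pixel the camera observes into display coordinates, which is what lets it route touch events back to the client application without further negotiation.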
“…This can be overcome somewhat by using multiple projectors and cameras, at the expense of the overall complexity of the system [35]. A handful of research prototypes have explored using a motorized platform to reorient a single projector and camera to view arbitrary locations throughout a room [22,8,6,11]. Such steerable display systems trade the shortcomings of the multiple projector and camera approach for the problem of selecting the most appropriate orientation of the projector and camera at each moment.…”
Section: Introduction
confidence: 99%
“…Users are detected using vision or through the mobile devices they carry, for instance Bluetooth-enabled cell phones, PDAs with WiFi cards, RF-ID tags built into their professional badges, etc. Such an environment should also provide numerous human-machine interfaces, for example screens and touchscreens, microphones for speech recognition, speakers for speech synthesis, keyboards and mice, or more sophisticated input-output devices (for instance [2]). All these devices, ubiquitous by definition, are spread throughout the environment.…”
Section: Introduction
confidence: 99%