2009
DOI: 10.1007/s00779-009-0249-0
Interaction with large ubiquitous displays using camera-equipped mobile phones

Abstract: In the ubiquitous computing environment, people will interact with everyday objects (or computers embedded in them) in ways different from the usual and familiar desktop user interface. One such typical situation is interacting with applications through large displays such as televisions, mirror displays, and public kiosks. With these applications, the use of the usual keyboard and mouse input is not usually viable (for practical reasons). In this setting, the mobile phone has emerged as an excellent device fo…


Cited by 29 publications (14 citation statements)
References 23 publications
“…The works of Jeon et al. [16] and Roman et al. [21] on interaction methods scalable for multi-user multi-display systems are good starting points. As also mentioned, the accompanying sensors of the mobile devices could be used for hybrid tracking with improved accuracy.…”
Section: Discussion
confidence: 99%
“…In an earlier work, Slay et al. [25] apply artificial markers to interact with a virtual scene in a very intuitive way, by showing special markers to the camera as commands. Jeon et al. [16] implemented different interaction scenarios for a group of people at a large display using camera-equipped mobile phones and marker-tagged objects on the screen. In both papers, the authors use the ARToolKit library [2] for fiducial marker detection, which is a predecessor of the augmented reality framework applied in our system.…”
Section: Related Work
confidence: 99%
“…Mobile camera-based approaches that recognize screen content are of special relevance because they allow for direct absolute interaction with the remote system, which was proven to be superior over relative approaches in terms of task completion times (Baldauf, Fröhlich, Buchta, & Stürmer, 2013). Early work exploited visual markers for facilitating the camera-based detection of selected screen objects (Ballagas, Rohs, & Sheridan, 2005; Jeon, Hwang, Kim, & Billinghurst, 2010; Pears, Jackson, & Olivier, 2009). Boring et al. (2010) introduced the idea of markerless magic lens interaction and presented a mobile prototype for touch interaction with multidisplay environments.…”
Section: Mobile Screen Interaction
confidence: 99%
“…For example, it may be beneficial to explore the use of on-screen chording (Davidson and Han, 2006), multi-touch input (Steinicke et al., 2008), or gestures (Vogel and Balakrishnan, 2005) to enable more powerful 3D interactions. Similarly, accelerometers may be used to enable motion gestures (Jeon et al., 2010) through the phone, or to enable interactions through a phone's rotations orthogonal to the display plane (roll). We believe that in building on the base interactions established in our study, there remains a continuing need to explore serendipitous interactions with public displays, and that the smartphone provides a rich, ubiquitous, and adaptive platform from which these interactions can be enabled.…”
Section: Raycasting With a Touchscreen
confidence: 99%