2010
DOI: 10.1007/s10055-010-0183-5
An augmented reality interface to contextual information

Cited by 66 publications (31 citation statements)
References 42 publications
“…Vision-based techniques, based on image analysis and pattern recognition, are usually slower and too computationally heavy to run directly on the device. When applied to IoT, these systems should handle a broad database of objects to automatically recognize real instances [7]. Sensor-based strategies rely on GPS and inertial mobile technology when available. This approach automates service triggering without user collaboration, but it does not work properly indoors (the GPS signal is not available, and inertial technologies are affected by, e.g., metallic furniture).…”
Section: Introduction
mentioning confidence: 99%
“…This work is further extended in [11], where historical records of user-device interaction are analysed in order to enhance the recommendation; a genetic algorithm is used to adapt the recommendation process to the user's circumstances. Equipped with special gadgets, the researchers of [7] are able to determine where the user is looking, exploiting this 'gaze tracking system' to enhance the recommendation process. Of course, the user's (present and past) location can also be used as an input factor for recommendation [2][5][7].…”
Section: Introduction
mentioning confidence: 99%
“…They also use gesture-based interaction to enable touch sensing and browsing through the document stack. Finally, the paper by Ajanki et al. (2011) explores how to develop context-sensitive interaction techniques. Using position and gaze sensing, their system can automatically infer the user's interest in people and topics, and display AR content reflecting that interest, thereby supporting implicit rather than explicit interaction.…”
mentioning confidence: 99%