2013 12th International Conference on Document Analysis and Recognition
DOI: 10.1109/icdar.2013.15
Wearable Reading Assist System: Augmented Reality Document Combining Document Retrieval and Eye Tracking

Cited by 23 publications (10 citation statements)
References 10 publications
“…Previous work shows that activation of commands using dwell-time-based approaches is useful and can even detect fixations in noisy data [26]. We implement this function by detecting attention to icons on the display.…”
Section: Eye-con
confidence: 97%
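The dwell-time activation the citing authors describe can be sketched as follows. This is a minimal illustration; the icon geometry, sample interval, and 800 ms dwell threshold are assumptions for the example, not values from either paper:

```python
import math

def dwell_activated(gaze_samples, icon_center, icon_radius=40.0,
                    dwell_ms=800.0, sample_interval_ms=16.7):
    """Return True once consecutive gaze samples stay within the icon's
    radius long enough to count as a dwell-based activation."""
    run_ms = 0.0
    for x, y in gaze_samples:
        if math.hypot(x - icon_center[0], y - icon_center[1]) <= icon_radius:
            run_ms += sample_interval_ms
            if run_ms >= dwell_ms:
                return True
        else:
            run_ms = 0.0  # gaze left the icon: reset the dwell timer
    return False
```

Resetting the timer whenever the gaze leaves the icon is what makes the approach tolerant of noisy tracking: brief spurious samples elsewhere simply restart the count instead of triggering a false activation.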
“…Recent work by Coco et al [8] and Henderson et al [10] also showed that further classification of cognitive states, such as scene memorization and visual search, is possible using eye-gaze patterns. Such analysis of a user's cognitive state could be used to enhance the user experience when he or she is interacting with a computer screen [20,5] or with real document printouts [26], by inferring which type of cognitive state the user is engaged in.…”
Section: Prior Work
confidence: 99%
“…Although focus has previously been proposed for interaction with text, such as in the system proposed by Toyama et al, research up to now has lacked interaction methods for multi-focal-plane HMDs [22]. Despite the appearance of several multi-focal or vari-focal HMDs, studies with those displays are limited to depth perception and have yet to take advantage of focal depth via eye tracking.…”
Section: Gaze-Based Interaction in Augmented Reality
confidence: 99%
“…This method, called LLAH (Locally Likely Arrangement Hashing), is robust to perspective distortion of an image and is scale-invariant. The method has been used by [11]. An overview of the document retrieval method is shown in Figure 3.…”
Section: Document Retrieval
confidence: 99%
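LLAH achieves its invariance by hashing ratios of quantities computed from the local arrangement of feature points (e.g. word centroids), since area ratios are affine invariants. A heavily simplified sketch of that idea follows; the neighbor count, combination size, and quantization scheme are illustrative assumptions, not the published algorithm's parameters:

```python
from itertools import combinations
import math

def _area(p, q, r):
    # Unsigned area of triangle pqr; uniform scaling multiplies it by s^2,
    # so ratios of two such areas are scale-invariant.
    return abs((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1])) / 2.0

def llah_descriptors(points, n_neighbors=6, m=4, levels=8):
    """Simplified LLAH-style descriptors: for each feature point, quantize
    ratios of triangle areas over combinations of its nearest neighbors
    and collect them as hash keys mapped to the point's index."""
    descs = {}
    for i, p in enumerate(points):
        nbrs = sorted((q for j, q in enumerate(points) if j != i),
                      key=lambda q: math.dist(p, q))[:n_neighbors]
        for a, b, c, d in combinations(nbrs, m):
            r2 = _area(p, c, d)
            if r2 == 0:
                continue  # degenerate (collinear) triple, skip
            ratio = _area(p, a, b) / r2
            # Coarse quantization so slight distortion maps to the same bin.
            key = min(int(math.log1p(ratio) * levels), 10 * levels)
            descs.setdefault(key, []).append(i)
    return descs
```

Because only area ratios enter the hash key, uniformly scaling the page image leaves every key unchanged, which is the scale-invariance the citing authors rely on; the real LLAH extends this with perspective-tolerant invariants and a voting-based retrieval stage.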