2018
DOI: 10.1145/3232163

Fast and Precise Touch-Based Text Entry for Head-Mounted Augmented Reality with Variable Occlusion

Abstract: We present the VISAR keyboard: an augmented reality (AR) head-mounted display (HMD) system that supports text entry via a virtualised input surface. Users select keys on the virtual keyboard by imitating the process of single-hand typing on a physical touchscreen display. Our system uses a statistical decoder to infer users' intended text and to provide error-tolerant predictions. There is also a high-precision fall-back mechanism to support users in indicating which keys should be unmodified by the auto-correction.
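The statistical decoder mentioned in the abstract can be understood as a noisy-channel model: each candidate word is scored by combining how well the observed taps match that word's key positions with a language-model prior. The sketch below is a minimal illustration of that idea, not the authors' actual VISAR decoder; the key coordinates, the isotropic Gaussian touch-noise model, the SIGMA value, and the toy language model are all assumptions.

```python
import math

# Hypothetical key centres on a flat virtual keyboard (arbitrary key-width units).
KEY_CENTERS = {
    "h": (5.5, 1.0), "e": (2.5, 0.0), "y": (5.5, 0.0),
    "j": (6.5, 1.0), "r": (3.5, 0.0),
}

# Toy unigram language model; probabilities are illustrative only.
LANGUAGE_MODEL = {"hey": 0.6, "her": 0.3, "jey": 0.1}

SIGMA = 0.5  # assumed standard deviation of touch noise


def log_touch_likelihood(tap, key):
    """Log-likelihood of an observed tap under an isotropic Gaussian
    centred on the intended key (shared normalising constant dropped)."""
    kx, ky = KEY_CENTERS[key]
    tx, ty = tap
    return -((tx - kx) ** 2 + (ty - ky) ** 2) / (2 * SIGMA ** 2)


def decode(taps):
    """Rank same-length candidate words by touch likelihood plus LM prior."""
    scored = []
    for word, prior in LANGUAGE_MODEL.items():
        if len(word) != len(taps):
            continue
        score = math.log(prior) + sum(
            log_touch_likelihood(tap, ch) for tap, ch in zip(taps, word)
        )
        scored.append((score, word))
    return sorted(scored, reverse=True)


# Three noisy taps aimed at "h", "e", "y":
print(decode([(5.6, 0.9), (2.4, 0.2), (5.3, 0.1)]))
```

With the noisy taps above, "hey" outranks "her" and "jey" because its keys lie closest to the taps and it has the highest prior; an error-tolerant decoder of this kind can recover the intended word even when individual taps land off-centre.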


Cited by 50 publications (27 citation statements) | References 38 publications
“…Participants achieved 29 wpm after one hour of practice, although stimulus phrases were selected to ensure only words in the known vocabulary were included. VISAR [4] also leverages probabilistic decoding, in an approach derived from Vertanen et al. [31], and demonstrates single-finger mid-air text input specifically tailored for AR HMDs. After various refinements, including the provision of error-tolerant word predictions, the touch-based approach yielded a mean entry rate of 17.8 wpm.…”
Section: Related Work
confidence: 99%
“…Gesture-based interaction tracks the position and orientation of the user's fingers or hands using the camera or inertial measurement unit (IMU) of the AR smart glasses and supports interactions with virtual objects in the AR environments [21][22][23][24][25][26][27][28][29][30]. Ha et al. [21] proposed WeARHand, which allows the user to manipulate virtual 3D objects with a bare hand in a wearable AR environment.…”
Section: Gesture Interactions
confidence: 99%
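The gesture-interaction statement above describes tracking the position and orientation of the user's fingers to drive input. A minimal, hypothetical sketch of one common step, deciding when a tracked fingertip contacts a virtual keyboard plane and which key it selects, is given below; the plane definition, key layout, and contact threshold are illustrative assumptions, not values from any cited system.

```python
import numpy as np

# Hypothetical keyboard plane: a point on the plane and its unit normal.
PLANE_POINT = np.array([0.0, 0.0, 0.5])   # metres, in head-relative coordinates
PLANE_NORMAL = np.array([0.0, 0.0, 1.0])

# Hypothetical 2D key centres on the plane (x, y in metres).
KEYS = {"a": (-0.05, 0.0), "s": (0.0, 0.0), "d": (0.05, 0.0)}

CONTACT_THRESHOLD = 0.01  # metres; how close counts as "touching" the plane


def select_key(fingertip):
    """Return the nearest key if the tracked fingertip is within the
    contact threshold of the keyboard plane, else None."""
    fingertip = np.asarray(fingertip, dtype=float)
    # Signed distance from the fingertip to the plane.
    distance = np.dot(fingertip - PLANE_POINT, PLANE_NORMAL)
    if abs(distance) > CONTACT_THRESHOLD:
        return None
    # Project the fingertip onto the plane and pick the nearest key centre.
    projected = fingertip - distance * PLANE_NORMAL
    px, py = projected[0], projected[1]
    return min(KEYS, key=lambda k: (KEYS[k][0] - px) ** 2 + (KEYS[k][1] - py) ** 2)


print(select_key([0.01, 0.002, 0.505]))  # -> "s" (within contact threshold)
print(select_key([0.01, 0.002, 0.60]))   # -> None (hovering above the plane)
```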
“…Users can often anticipate and alter their input behavior to avoid auto-correct errors, e.g. by force (Weir et al., 2014), by long pressing a key (Vertanen et al., 2019), or by switching to a precise input mode (Dudley et al., 2018). Similarly, our abbreviated input method needs a way to specify words that should not be expanded or auto-corrected.…”
Section: Discussion
confidence: 99%
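The discussion statement above calls for a way to mark words that should bypass expansion or auto-correction, analogous to VISAR's high-precision fall-back. A minimal hypothetical sketch: tag tokens entered via a precise mode (e.g., a long press) as literal, and skip the correction step for them. The Token class and the correct() stub are illustrative assumptions, not any cited system's API.

```python
from dataclasses import dataclass


@dataclass
class Token:
    text: str
    literal: bool = False  # True if entered via a precise/verbatim mode


def correct(word):
    """Stand-in for a real auto-correction or abbreviation-expansion step."""
    return {"teh": "the", "hte": "the"}.get(word, word)


def process(tokens):
    """Auto-correct normal tokens; pass literal ones through untouched."""
    return [t.text if t.literal else correct(t.text) for t in tokens]


tokens = [Token("teh"), Token("XJ-7", literal=True), Token("hte")]
print(process(tokens))  # -> ['the', 'XJ-7', 'the']
```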