Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3544548.3580685
Vergence Matching: Inferring Attention to Objects in 3D Environments for Gaze-Assisted Selection

Abstract: Figure 1: We present Vergence Matching, an interaction technique which uses the principle of motion correlation for selection of small targets in 3D environments. To select a target, smooth depth changes are induced perpendicular to the user: (a) when the target moves closer, the eyes move inwards, increasing the vergence angle (convergence); (b) vice versa, the vergence angle decreases (divergence) when the target moves away from the user. The relative vergence movement of the eyes is then correlated with the …
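The core of the technique, as the abstract describes it, is a motion-correlation test: the user's vergence movement is recorded over a short window and compared against the induced depth motion of each candidate target, and the target whose motion best explains the eye movement is selected. Below is a minimal sketch of that matching step; the function names, window handling, and threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pearson(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two equally sampled signals."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def match_vergence(vergence: np.ndarray,
                   target_depths: dict[str, np.ndarray],
                   threshold: float = 0.8) -> str | None:
    """Return the candidate target whose depth modulation best correlates
    with the user's vergence signal, or None if no correlation exceeds
    the (illustrative) threshold.

    Vergence angle grows as a target approaches (convergence) and shrinks
    as it recedes (divergence), so depth and vergence are negatively
    correlated; we therefore correlate against the negated depth signal.
    """
    best_id, best_r = None, threshold
    for target_id, depth in target_depths.items():
        r = pearson(vergence, -depth)  # closer target => larger angle
        if r > best_r:
            best_id, best_r = target_id, r
    return best_id
```

Because only the relative vergence movement is matched against relative depth motion, a detector along these lines needs no absolute gaze-depth estimate, which is what makes the approach viable for small targets.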

Cited by 13 publications (4 citation statements)
References 66 publications
“…If the vergence movement corresponds to a specific target's depth alteration, that target is identified as the user's point of focus. Though this approach has proven effective in selecting small targets [106], it is often perceived as uncomfortable, difficult to execute consistently, and diverts the user's attention [56].…”
Section: Gaze-based Interactions
confidence: 99%
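For reference, the geometric relation this matching exploits: for a target straight ahead at distance d, the vergence angle θ subtended by the eyes, separated by the interpupillary distance IPD, follows the standard identity below (a textbook geometric fact, not a formula quoted from the paper), so a smooth change in target depth produces a smooth, target-specific change in vergence angle.

```latex
\theta(d) = 2\arctan\!\left(\frac{\mathrm{IPD}}{2d}\right),
\qquad
\frac{d\theta}{dd} = -\frac{\mathrm{IPD}}{d^{2} + \mathrm{IPD}^{2}/4}
```

The derivative is strictly negative, matching the convergence/divergence behaviour described above: a target moving closer increases θ, a target moving away decreases it.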
“…Previous works have shown the feasibility of visual depth estimation in VR headsets using binocular disparity and stereoscopic vergence [10,40]. The concept of visual depth as a new interaction input has been explored in previous works, either by defining a semi-transparent window at a different focal depth [26,41,42], tracking voluntary vergence movements [3,13], or matching vergence changes with the depth changes of a moving object [34] in VR. However, all of these methods lack an end-to-end UI design to guide users in actively manipulating their visual depth.…”
Section: Gaze-based VR Interaction
confidence: 99%
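The "voluntary vergence movements" alternative mentioned in the statement above amounts to a threshold-plus-dwell detector on the vergence angle. A minimal sketch follows; the baseline, threshold, and dwell values are illustrative placeholders, not parameters from any of the cited systems.

```python
def detect_voluntary_convergence(vergence_deg: list[float],
                                 baseline_deg: float,
                                 delta_deg: float = 3.0,
                                 dwell_samples: int = 30) -> bool:
    """True if the user held a convergence at least delta_deg above their
    baseline vergence for dwell_samples consecutive samples (e.g. 30
    samples at 90 Hz is roughly 0.33 s of sustained convergence)."""
    run = 0
    for v in vergence_deg:
        run = run + 1 if v > baseline_deg + delta_deg else 0
        if run >= dwell_samples:
            return True
    return False
```

The dwell requirement is what distinguishes a deliberate trigger from transient vergence changes during normal viewing, which is also why such triggers are reported as effortful and inconsistent compared with matching induced target motion.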
“…More recent works on gaze-based VR/AR interaction demonstrate the great potential of visual depth as an interaction input to solve the Midas touch problem. These methods either guide the user to look at physical or virtual objects at different depths [3,26,34,41,42] or rely on voluntary eye convergence and divergence [13,15], asking users to focus on their nose or to imagine fixating on a point behind the display plane. However, these works lack an intuitive and systematic User Interface (UI) design to guide users in manipulating their visual depth, leading to limited application scenarios and potential user frustration.…”
Section: Introduction
confidence: 99%
“…1,13,42], and selection and disambiguation techniques that do not rely on calibration [e.g. 26,32,33,45].…”
Section: Related Work
confidence: 99%