Effects of 3D perspective on head gaze estimation with a multiview autostereoscopic display (2016)
DOI: 10.1016/j.ijhcs.2015.10.004

Cited by 14 publications (4 citation statements: 1 supporting, 3 mentioning, 0 contrasting)
References 38 publications

“…Secondly, for the toad located at the center of the screen, viewer B achieved the lowest mean error, and the error increased symmetrically as the viewer position diverged from the center (see the middle column of figure 5, highlighted in pink). This parallels previous findings [14,16,12]. By contrast, when characters are placed at off-center locations, the error increases as the character location diverges from the viewer location (see the leftmost or rightmost column of figure 5, highlighted in orange).…”
Section: Object-focused Gaze (supporting)
Confidence: 89%
“…An important finding of the original FTVR studies came from a comparison of different visual cues. For a variety of 3D interactions, they found that while head-tracking and stereo cues together were best, head-tracking alone resulted in better performance than stereo cues alone [12,11]. This initial finding motivated many follow-on FTVR displays [6,13], including multi-view FTVR displays [14,15,16], non-planar displays [17,18,19], and mobile handheld displays [20,21], which omitted stereo display hardware and so did not require any headset or glasses.…”
Section: Fish Tank Virtual Reality (mentioning)
Confidence: 99%
“…To achieve natural and consistent communication, understanding the perspective (Yang and Olson, 2002; Tang and Fakourfar, 2017) of another user in a co-located environment is essential. Visual cues may be aligned less accurately when viewed from a single perspective that warps spatial characteristics (Pan and Steed, 2016; Kim et al., 2020b). Moreover, spatial elements can provide depth cues in the shared task space, enabling collaborators to gain a better understanding of the same visualised information from different perspectives (Jing et al., 2019) and improving communication.…”
Section: Visualised Gaze Cues vs Other Communication Cues in Co-located Collaboration (mentioning)
Confidence: 99%