Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20)
DOI: 10.1145/3313831.3376578
GazeConduits: Calibration-Free Cross-Device Collaboration through Gaze and Touch

Abstract: We present GazeConduits, a calibration-free ad-hoc mobile interaction concept that enables users to collaboratively interact with tablets, other users, and content in a cross-device setting using gaze and touch input. GazeConduits leverages recently introduced smartphone capabilities to detect facial features and estimate users' gaze directions. To join a collaborative setting, users place one or more tablets onto a shared table and position their phone in the center, which then tracks users present as well as…
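The "recently introduced smartphone capabilities" the abstract refers to map naturally onto Apple's face-tracking APIs. Below is a minimal sketch, assuming ARKit's ARFaceTrackingConfiguration and the per-face lookAtPoint estimate as the gaze source (an illustrative assumption; this report does not show the paper's actual pipeline):

```swift
import ARKit

// Minimal sketch of smartphone gaze estimation via ARKit face tracking.
// Assumption: ARFaceAnchor.lookAtPoint serves as the gaze estimate;
// GazeConduits' own pipeline may differ.
final class GazeEstimator: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Face tracking requires a TrueDepth front camera.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let face as ARFaceAnchor in anchors {
            // lookAtPoint is the estimated gaze target in the face anchor's
            // local coordinate space (metres); lift it into world space.
            let gazeWorld = face.transform * simd_float4(face.lookAtPoint, 1)
            print("gaze target (world):", gazeWorld.x, gazeWorld.y, gazeWorld.z)
        }
    }
}
```

Mapping such a world-space gaze ray onto tablets placed around the phone is then a geometric intersection problem, which is where the paper's shared-table setup comes in.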

Cited by 16 publications (9 citation statements). References 51 publications (66 reference statements).
“…Being able to track all these input modalities without any external devices, using only an off-the-shelf smartphone, would bring many powerful spatial interactions discovered over decades of in-lab research to everyone (e.g., head or gaze pointing, virtual-hand or peephole interactions, body-centered inputs). First steps have already been made using smartphone-based world-tracking in the domain of distant displays [2], handheld AR [49-51], and head-mounted displays [52], as well as using face-tracking in the domain of cross-device [53] and on-phone interactions [54]. Simultaneous world- and face-tracking on off-the-shelf smartphones, however, remains unaddressed; only recently were the first examples of the technology featured for handheld AR use-cases [55].…”
Section: Discussion (mentioning)
Confidence: 99%
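The simultaneous world- and face-tracking this statement calls for is now exposed on consumer phones. As a hedged sketch, assuming Apple's ARKit on iOS 13+ (the quoted statement does not name the API behind the handheld AR examples in [55]):

```swift
import ARKit

// Hedged sketch of simultaneous world- and face-tracking on one phone,
// assuming ARKit (iOS 13+); illustrative, not the cited systems' code.
func runSimultaneousTracking(on session: ARSession) {
    guard ARWorldTrackingConfiguration.supportsUserFaceTracking else { return }
    let config = ARWorldTrackingConfiguration()
    // The rear camera does world tracking while the front (TrueDepth)
    // camera tracks the user's face at the same time.
    config.userFaceTrackingEnabled = true
    session.run(config)
}
```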
“…Moreover, to reduce the number of recalibrations in the experiment, we used a chin rest, which is not a practical solution in most cases. However, while calibration is a necessary part of conventional eye tracking technologies, approaches exist that enable calibration-free eye tracking [29-33], and we expect that this problem could soon be solved in commercial applications. In general, the combined gaze-voice technology necessarily inherits certain issues of eye tracking-based technology, such as difficulties tracking the eye pupil in some users, but these could be partly or fully solved with progress in eye tracking technology.…”
Section: Discussion (mentioning)
Confidence: 99%
“…Rekimoto's seminal work [76] introduced pick-and-drop for transfer. Many other techniques followed using, for example, stitching [36], 3D hand motion [75], tapping and dragging [32], tilting and portals [66], redirecting content [108], eye-tracking [99], or conduit gestures [12].…”
[Table from the citing paper, organizing systems by how often devices need to change position or configuration: Clearboard [38], ImmersaDesk [16], Tilted Tabletops [69], BendDesk [101], CurveDesk [105], Stitching [36], HuddleLamp [75], HeadPhones [29], Micro-mobility [66], Connected slates [12], GazeConduits [99], Panelrama [108], Kirigami [28], Flux [58], ProxemicFurniture [27], HoverPad [83], Tilt Drafting Table [79], BoomChameleon [93,94], MetaDESK [96], ConnecTables [88], AdapTable [50], SurfaceConstellations [64], Codex [35], BEXHI [72], PickCells [24], TiltDisplays [1]]
Section: Ad-hoc Interactions Across Mobile Devices (mentioning)
Confidence: 99%
“…This is why cross-device research [8] investigates strategies to sense inter-device proximity and orientation. Some use computer vision with RGB [29] or depth cameras [75], marker-based motion capture [65,107], marker recognition [59], polarization filters [74], or eye tracking [99]. Other techniques apply short-range infrared sensing [68], radio-based sensing such as Bluetooth [22,42], or near-field communication (NFC, RFID) [77].…”
Section: Sensing Location and Orientation (mentioning)
Confidence: 99%
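Of these sensing strategies, radio-based sensing is among the lightest to prototype. A minimal sketch, assuming Apple's Core Bluetooth framework (illustrative only; none of the cited systems is claimed to work this way), reads received signal strength (RSSI) from nearby advertisers as a crude proximity cue:

```swift
import CoreBluetooth

// Hedged sketch of radio-based proximity sensing via Bluetooth RSSI.
// Higher (less negative) RSSI roughly means a closer device.
final class ProximityScanner: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        // Scan for any advertising peripheral; a real system would filter
        // by a shared service UUID.
        central.scanForPeripherals(withServices: nil, options: nil)
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        print(peripheral.identifier, "RSSI:", RSSI)
    }
}
```

RSSI is noisy and environment-dependent, which is why the vision- and marker-based approaches above trade setup cost for finer-grained position and orientation.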