2016
DOI: 10.1007/s11042-016-4223-3

Head-mounted gesture controlled interface for human-computer interaction

Abstract: This paper proposes a novel human-computer interaction system exploiting gesture recognition. It is based on the combined use of a head-mounted display and a multi-modal sensor setup that also includes a depth camera. The depth information is used both to seamlessly embed augmented reality elements into the real world and as the input to a novel gesture-based interface. Reliable gesture recognition is obtained through a real-time algorithm exploiting novel feature descriptors arranged in a multi-dimensional stru…
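The abstract describes depth data driving a gesture interface. As a purely illustrative sketch (the paper's actual descriptors and data structure are not detailed here), the following assumes a depth frame arrives as a NumPy array in millimeters and segments the closest connected region as the hand candidate; the band width and helper name are hypothetical.

```python
import numpy as np
from scipy import ndimage

def segment_hand(depth_mm: np.ndarray, band_mm: float = 150.0) -> np.ndarray:
    """Illustrative depth-based hand segmentation (not the paper's algorithm).

    Assumes the hand is the closest object to the camera: keep pixels within
    a depth band behind the nearest valid measurement, then retain the
    largest connected component as the hand candidate.
    """
    valid = depth_mm > 0                       # 0 marks invalid depth pixels
    if not valid.any():
        return np.zeros_like(depth_mm, dtype=bool)
    nearest = depth_mm[valid].min()
    mask = valid & (depth_mm < nearest + band_mm)
    labels, n = ndimage.label(mask)            # connected components
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)    # largest blob = hand candidate
```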

Cited by 75 publications (34 citation statements). References 35 publications (37 reference statements).
“…Interaction with a display/projection (Bolt, 1980, Choi et al, 2007, Foehrenbach et al, 2009, Beyer and Meier, 2011, Asadzadeh et al, 2012, Cauchard et al, 2012, Xie and Xu, 2013, Rossol et al, 2014, Saxen et al, 2014, Adeen et al, 2015, Braun et al, 2017, Osti et al, 2017, Dondi et al, 2018, Ma et al, 2018). Interaction with augmented reality (Reifinger et al, 2007, Lu et al, 2012, Bai et al, 2013, Hürst and van Wezel, 2013, Gangman and Yen, 2014, Adhikarla et al, 2015, Hernoux and Christmann, 2015, Shim et al, 2016, Saxen et al, 2014, Kim and Lee, 2016, Memo and Zanuttigh, 2018). Interaction with augmented reality included a variety of technologies that enable superimposed 3D representation of content and interaction with it.…”
Section: Manipulation/navigation (mentioning)
confidence: 99%
“…Datasets. We follow GestureGAN [10] and employ the NTU Hand Digit [48] and Creative Senz3D [49] datasets to evaluate the proposed AsymmetricGAN. The numbers of train/test image pairs for the NTU Hand Digit and Creative Senz3D datasets are 75,036/9,600 and 135,504/12,800, respectively.…”
Section: B. Hand Gesture-to-Gesture Translation Task (mentioning)
confidence: 99%
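As context for the split sizes quoted above, here is a minimal sketch of how paired gesture images might be enumerated into train/test lists. The directory layout, file naming, and split fraction are assumptions for illustration, not the actual structure of these datasets.

```python
import random
from pathlib import Path

def build_pairs(root: str, test_fraction: float = 0.1, seed: int = 0):
    """Pair every two distinct gesture images of the same subject
    (hypothetical layout: root/<subject>/<gesture_id>.png) and split
    the resulting (source, target) pairs into train/test lists."""
    pairs = []
    for subject in sorted(Path(root).iterdir()):
        images = sorted(subject.glob("*.png"))
        pairs += [(a, b) for a in images for b in images if a != b]
    random.Random(seed).shuffle(pairs)         # deterministic shuffle
    n_test = int(len(pairs) * test_fraction)
    return pairs[n_test:], pairs[:n_test]      # train, test
```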
“…A detailed analysis of the various error sources has been presented in [19], while [20] focuses on the effects of the reflectivity of the scene on depth accuracy. Large datasets acquired with ToF sensors exist for other computer vision applications, such as semantic segmentation [21,22], gesture recognition [23], and face recognition [24], but they all lack ground-truth depth data, which is very time-consuming to acquire. For this reason, the confidence of ToF data is typically computed with deterministic schemes.…”
Section: Related Work (mentioning)
confidence: 99%
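As an example of the deterministic schemes mentioned above, a common heuristic assigns per-pixel confidence from the ToF amplitude, since depth noise roughly scales inversely with signal strength. This is a minimal sketch under that assumption; the exact scheme and the constants below are hypothetical, not taken from the cited works.

```python
import numpy as np

def tof_confidence(amplitude: np.ndarray,
                   a_min: float = 20.0,
                   a_sat: float = 4000.0) -> np.ndarray:
    """Deterministic per-pixel confidence from ToF amplitude (illustrative).

    Confidence grows linearly with the measured amplitude between a noise
    floor (a_min) and a saturation level (a_sat); pixels below the floor or
    at saturation get zero confidence. Both constants are hypothetical.
    """
    conf = np.clip((amplitude - a_min) / (a_sat - a_min), 0.0, 1.0)
    conf[amplitude >= a_sat] = 0.0             # saturated pixels are unreliable
    return conf
```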