2012
DOI: 10.1016/j.cag.2012.09.004

Beyond the mouse: Understanding user gestures for manipulating 3D objects from touchscreen inputs

Cited by 29 publications (24 citation statements)
References 15 publications
“…Our results contain both similarities and differences with Cohé's study [2]. Both studies determine how users intuitively manipulate 3D objects, where Cohé uses a 3D cube alone and we use more complex objects.…”
Section: Related Work Comparison (supporting, confidence: 66%)
“…Cohé conducted a user study to examine how users perform rotations, scaling, and translations on a 3D cube [2]. Our work differs from Cohé's in that we add different objects and tasks to perform, as well as two trials of the experiment.…”
Section: User-defined Gestures (mentioning, confidence: 99%)
“…It is common to involve users in defining input systems, mainly gesture grammars. Cohé and Hachet [6] conducted a user study to better understand how non-technical users interact with a 3D object through touchscreen inputs. The experiment was conducted while users manipulated a 3D cube, shown from three points of view, for rotations, scaling, and translations (RST).…”
Section: Related Work (mentioning, confidence: 99%)
“…He conducted a field experiment with 20 participants. Like Cohé and Hachet [6], he presented the participants with a set of 27 commands and asked them to imagine a corresponding gesture. From this analysis we extracted two specific items related to the selection of objects (lines 3 and 12).…”
Section: Fig 2 Z-technique and Multi-touch Viewport Technique (mentioning, confidence: 99%)