2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)
DOI: 10.1109/ismar-adjunct.2018.00131
TutAR: Semi-Automatic Generation of Augmented Reality Tutorials for Medical Education

Cited by 12 publications (4 citation statements); references 6 publications.
“…Mobile AR applications have been used to guide tourists [9], to assist disabled people [10], and to help blind people navigate indoor spaces [11]. Eckhoff et al [12] developed TutAR as a medical education tool that takes video and hand motions as input and generates a 3D animated hand. Indoor navigation applications have been developed for tasks such as food delivery through robots using deep learning and micro-electromechanical systems (MEMS) sensors [13,14].…”
Section: Related Work
“…As we explained in our related work section, there is little reference on how to design the AR content for an educational curriculum, as most classroom implementations were done using an empirical approach and typically focused on how to integrate AR into the classroom, rather than how to customize the AR content itself. Thus, we decided to approach the design with an emergent coding approach (Blair, 2015), in which we clustered the types of microskills we could recognize in AR: (1) Perceptual, which refers to the time specific knowledge designed to attract the attention of the user and deliver visual information (Hoffmann et al, 2008;Kishishita et al, 2014;Lee et al, 2019;Rusch et al, 2013;Schwerdtfeger & Klinker, 2008;Steinberger et al, 2011;Volmer et al, 2018;Waldner et al, 2014); (2) Cognitive, which refers to the time specific knowledge to generate and collect information from the users' working memory (Beheshti et al, 2017;Cai et al, 2014;Chan et al, 2013;Kapp et al, 2019;Knierim et al, 2018;Prilla, 2019;Strzys et al, 2017); (3) Motor, which refers to the time specific knowledge to properly perform an operation or process (Bhattacharya & Winer, 2019;Eckhoff et al, 2018;Gavish et al, 2015;Mohr et al, 2017;Wang et al, 2016;Webel et al, 2013;Westerfield et al, 2015). In Table 1, we go into further detail on the educational purposes for each type of microskill and guides on how to translate it into AR in terms of content design.…”
Section: Design Microskills in an AR Environment
“…The creation of tutorials for software has traditionally been completed by either user demonstration [3,7,13,28] or crowdsourcing [12,20]. Tutorials generated by demonstration usually require tracing a user's interaction flow while recording a screencast video of the user interface.…”
Section: Structural Tutorial Creation