2017
DOI: 10.1063/1.5005483
Adaptive multimodal interaction in mobile augmented reality: A conceptual framework

Cited by 5 publications (3 citation statements)
References 14 publications (22 reference statements)
“…To name a few, the input can be an image target [2], [3], an object target [4], or a location [5], [6]. The interaction in the AR environment can also be augmented with speech [7], [8], gesture [9], or a combination of both [10].…”
Section: Introduction
confidence: 99%
“…This section discusses a comparative analysis between the proposed adaptive framework and three multimodal frameworks [43], [44], and [45], as shown in Table 3. The comparison relies on the differences between the multimodal frameworks' properties: modality data type and number of modalities, data fusion level, interpreted context considered, experimental dataset, and weaknesses.…”
Section: A Comparative Analysis Between Proposed Adaptive Multimodal ...
confidence: 99%
“…The proposed adaptive framework can solve many drawbacks of the previous work in [43], [44], and [45]. Tracing: based on the data modality input (the traced input may be an image or an interval of features that can be converted into numerical vectors and checked by a classifier against the proposed output vectors of all features of all objects).…”
Section: A Comparative Analysis Between Proposed Adaptive Multimodal ...
confidence: 99%
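The "tracing" step described in the quotation above can be read as: convert an input of some modality into a numerical feature vector, then check it against the stored feature vectors of known objects with a classifier. The following is a minimal sketch of that idea, not the cited framework's actual implementation; the names `to_feature_vector`, `classify`, and the nearest-neighbor cosine matching are all illustrative assumptions.

```python
# Hypothetical sketch: dispatch a multimodal input to a feature vector and
# match it against known object vectors by cosine similarity.
import math

def to_feature_vector(modality: str, data) -> list:
    """Convert raw input of a given modality into a numeric feature vector.
    A real system would use an image descriptor or speech embedding here."""
    if modality in ("image", "features"):
        # assume `data` is already a flat sequence of numbers
        # (pixel intensities, or an interval of extracted features)
        return [float(x) for x in data]
    raise ValueError(f"unsupported modality: {modality}")

def cosine(a, b) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def classify(vector, object_vectors: dict) -> str:
    """Return the name of the known object whose stored feature
    vector is most similar to the query vector."""
    return max(object_vectors, key=lambda name: cosine(vector, object_vectors[name]))

# usage: two known objects, one query built from a feature interval
known = {"cube": [1.0, 0.0, 0.0], "sphere": [0.0, 1.0, 1.0]}
query = to_feature_vector("features", [0.1, 0.9, 1.1])
print(classify(query, known))  # → sphere
```

Nearest-neighbor matching is only one possible classifier; the cited framework could equally use a trained model over the same vectors, but the dispatch-then-match structure is the same.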