2016
DOI: 10.12720/jait.7.3.182-185

User-Driven Multimedia Adaptation Framework for Context-aware Learning Content Service

Cited by 4 publications (3 citation statements)
References 4 publications
“…By viewing this consideration as a basic multi-criteria decision making (MCDM) problem, an optimal set of candidate solutions can be derived. Our focus is on the Analytic Hierarchy Process (AHP) method [10], [11], since it is convenient to use and allows both qualitative and quantitative factors to be weighed together.…”
Section: AHP Methods (mentioning, confidence: 99%)
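The excerpt above boils AHP down to deriving priority weights from pairwise comparisons of criteria. As a purely illustrative sketch, not code from the cited papers, the following Python/NumPy snippet computes priority weights with the principal-eigenvector method and checks judgment consistency; the criteria and comparison values are hypothetical.

```python
# Minimal AHP sketch: priority weights from a reciprocal pairwise comparison
# matrix via the principal eigenvector, plus a consistency ratio check.
# Criteria and judgment values below are illustrative assumptions only.
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> tuple[np.ndarray, float]:
    """Return normalized priority weights and the consistency ratio (CR)."""
    n = pairwise.shape[0]
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)                     # principal eigenvalue index
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                 # normalized priorities
    lambda_max = eigvals[k].real
    ci = (lambda_max - n) / (n - 1)                 # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.24)  # random index
    cr = ci / ri if ri else 0.0
    return w, cr

# Hypothetical criteria for adapting learning content: bandwidth, screen size,
# user preference. Judgments use Saaty's 1-9 scale; the matrix is reciprocal.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
weights, cr = ahp_weights(A)
print("priorities:", weights.round(3), "CR:", round(cr, 3))
```

A CR below roughly 0.1 is usually taken to mean the pairwise judgments are acceptably consistent; the row geometric-mean approximation is a common alternative when an eigen-solver is not available.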
“…Moreover, as mobile users are increasingly becoming quality-aware, the framework integrates novel mechanisms for decreasing the video quality in a controlled way, with the aim of supporting a good learner quality of experience (QoE) even in resource-constrained situations. In [19], the authors propose application-layer and middleware-based solutions that increase network reliability and flexibility […]. In [20], the authors present the architecture of an adaptive multimedia learning service, whose engine enables users to identify the best combination of adaptive features of visual and audio content.…”
Section: Full-reference Image Quality Assessment (mentioning, confidence: 99%)
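The excerpt describes the adaptation engine of [20] only at the architecture level. As a heavily hedged sketch of what "identifying the best combination of adaptive features" could look like in code, the snippet below brute-forces video/audio variant pairs under a bandwidth budget; all option names, bitrates, and utility scores are invented for illustration and are not taken from the cited papers.

```python
# Hypothetical adaptation engine: pick the video/audio variant pair with the
# highest total utility that still fits the available bandwidth.
from itertools import product

# (label, required kbps, utility score) for each adaptive feature variant.
VIDEO = [("1080p", 4500, 1.0), ("720p", 2500, 0.8),
         ("480p", 1000, 0.6), ("audio-only slides", 200, 0.3)]
AUDIO = [("stereo-128k", 128, 1.0), ("mono-64k", 64, 0.7)]

def best_combination(budget_kbps: int):
    """Exhaustively score every video/audio pairing within the budget."""
    best, best_utility = None, -1.0
    for (v_label, v_bw, v_u), (a_label, a_bw, a_u) in product(VIDEO, AUDIO):
        if v_bw + a_bw <= budget_kbps and v_u + a_u > best_utility:
            best, best_utility = (v_label, a_label), v_u + a_u
    return best, best_utility

print(best_combination(1500))   # constrained link -> lower-quality combination
print(best_combination(6000))   # ample bandwidth  -> highest-quality combination
```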
“…Their use in many real-time streaming scenarios could be impractical, especially when the original image is not present. [18][19][20] provide an end-to-end quality assessment framework that could guarantee a high level of QoS, but assess only a limited number of features. [21][22][23] train a regression model to predict the image quality score from multiple features, in order to identify the remaining useful image features when adapting the content. For no-reference IQA, [29][30][31] provide a universal 512-D face feature representation to measure the quality of a given face, even though these techniques achieve […]. [26], [34][35][36] use FR-IQA methods to annotate and train their CNN models.…”
Section: No-reference Image Quality Assessment (mentioning, confidence: 99%)
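To make the full-reference vs. no-reference distinction in this excerpt concrete, the sketch below computes PSNR, a classic full-reference metric: it requires the pristine original image, which is exactly what the excerpt notes is often unavailable in real-time streaming. The synthetic images are illustrative only and unrelated to the cited works.

```python
# Full-reference IQA example: PSNR between an original image and a degraded copy.
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the original."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in "original"
noisy = np.clip(original + rng.normal(0, 10, size=original.shape), 0, 255).astype(np.uint8)
print(f"PSNR of noisy copy: {psnr(original, noisy):.1f} dB")
```

No-reference approaches, by contrast, must estimate quality from the distorted image alone, which is why the survey excerpt groups learned feature representations and CNN models under that heading.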