2019
DOI: 10.1007/s00521-019-04521-1
3D visual saliency and convolutional neural network for blind mesh quality assessment

Cited by 23 publications (16 citation statements) | References 40 publications
“…We finally fine-tuned each model to adapt its weights to our context. It is worth noting that the target of each stacked patch was the subjective score of the whole distorted PC, as commonly employed to estimate the quality of several types of multimedia content, for 2D images [20,21] as well as for stereo images [22] and 3D meshes [23].…”
Section: CNN Models and Patch Quality Indexes
confidence: 99%
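
This statement describes a common workaround for patch-based quality regression: since subjective scores exist only at the level of the whole distorted object, every patch extracted from that object inherits its global score as a training target. A minimal sketch of that labeling scheme, assuming PyTorch; the class name `PatchMOSDataset` and the argument names are illustrative, not from the cited work.

```python
import torch
from torch.utils.data import Dataset

class PatchMOSDataset(Dataset):
    """Each patch's regression target is the subjective score (MOS)
    of the whole distorted object it was extracted from."""
    def __init__(self, patches_per_object, mos_per_object):
        # patches_per_object: list of tensors, one (N_i, C, H, W) per object
        # mos_per_object: list of scalar subjective scores, one per object
        self.samples = [
            (patch, torch.tensor(mos, dtype=torch.float32))
            for patches, mos in zip(patches_per_object, mos_per_object)
            for patch in patches  # every patch gets the same global target
        ]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]
```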
“…To detect these tactile points, deep learning combined with learning-to-rank methods was used on collected crowdsourcing data. Also based on saliency, Abouelaziz et al. [35,36] proposed a no-reference deep learning approach in which the convolutional network is fed with 2D saliency patches in order to estimate the visual quality of a 3D mesh.…”
Section: 3D Mesh Visual Saliency Approaches in the State of the Art
confidence: 99%
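
At inference time, a no-reference scheme of this kind regresses a quality score per 2D saliency patch and pools the patch scores into a single mesh-level score. A hedged sketch of that pooling idea, assuming PyTorch/torchvision; the VGG-16 backbone and mean pooling are our illustrative choices, not necessarily those of [35,36].

```python
import torch
import torch.nn as nn
from torchvision import models

# A CNN repurposed as a patch-quality regressor: replace the final
# classification layer with a single-output linear head.
backbone = models.vgg16(weights=None)          # pretrained weights optional
backbone.classifier[-1] = nn.Linear(4096, 1)

def blind_mesh_score(saliency_patches: torch.Tensor) -> float:
    """saliency_patches: (N, 3, 224, 224) 2D saliency patches
    rendered from one distorted mesh (assumed preprocessed)."""
    backbone.eval()
    with torch.no_grad():
        per_patch = backbone(saliency_patches).squeeze(1)  # (N,) patch scores
    return per_patch.mean().item()  # pool patch scores into one mesh score
```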
“…image [3], [17], [18], stereo [62], 3D meshes [? ], [1], etc.), remote sensing [30], watermarking [34], map viewing [51], [5], indoor localization [29], perception [14], image enhancement [12], [19], healthcare [38], among many others.…”
Section: Introduction
confidence: 99%