2015
DOI: 10.1007/s00371-015-1129-4

Visual saliency guided textured model simplification

Abstract: The full-text may be used and/or reproduced, and given to third parties in any format or medium, without prior permission or charge, for personal research or study, educational, or not-for-profit purposes provided that:
• a full bibliographic reference is made to the original source
• a link is made to the metadata record in DRO
• the full-text is not changed in any way
The full-text must not be sold in any format or medium without the formal permission of the copyright holders. Please consult the full DRO policy…

Cited by 12 publications (6 citation statements)
References: 38 publications
“…For future work, we plan to use the Spatio‐temporal Edge Difference (STED) metric to measure our results. Also, we would like to consider using deep learning methods to detect saliency of a DMS, formulating a method to improve DMS compression rate.…”
Section: Discussion (mentioning)
confidence: 99%
“…Yang et al proposed that saliency can also be used for simplification of 3D textured models [13]. Non-linear filters such as bilateral filter, difference of Gaussian filter, Kuwahara filter and morphological filters, and partial differential equation based methods such as anisotropic diffusion ( [14], [15]), and mean curvature flow have also actively been used recently for image abstraction [16].…”
Section: Related Work (mentioning)
confidence: 99%
“…The authors introduce a multi‐scale shape descriptor to estimate saliency locally, and in a rotationally invariant way. Yang et al [YLW*16] combine mesh saliency with texture contrast resulting in saliency texture, which is used to simplify textured models (Fig. ).…”
Section: Model‐based Perceptual Approaches (mentioning)
confidence: 99%
“…Mesh saliency method by Yang et al [YLW*16]. The approach takes a textured mesh (a),(b) and measures local geometric entropy (c), color and intensity (d).…”
Section: Model‐based Perceptual Approaches (mentioning)
confidence: 99%
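The citation statements above only sketch the idea at a high level: the cited method derives a per-vertex saliency from local geometric entropy together with color and intensity contrast, and uses it to guide textured-model simplification. The following is a minimal illustrative sketch of that general pattern, not the authors' implementation; the function names, the linear blend, and the cost-scaling scheme are all assumptions introduced here for illustration.

```python
import numpy as np

def saliency_weights(geom_entropy, color_contrast, alpha=0.5):
    """Blend per-vertex geometric entropy and texture (color/intensity)
    contrast into a single saliency weight in [0, 1].
    Both inputs are hypothetical per-vertex arrays of equal length."""
    g = (geom_entropy - geom_entropy.min()) / (np.ptp(geom_entropy) + 1e-12)
    c = (color_contrast - color_contrast.min()) / (np.ptp(color_contrast) + 1e-12)
    return alpha * g + (1.0 - alpha) * c

def weighted_collapse_cost(base_cost, saliency, v0, v1, gamma=2.0):
    """Scale a base edge-collapse cost (e.g. a quadric error) by the mean
    saliency of the edge's endpoints, so salient regions are simplified last."""
    s = 0.5 * (saliency[v0] + saliency[v1])
    return base_cost * (1.0 + gamma * s)
```

In this sketch, higher saliency inflates an edge's collapse cost, so a greedy simplifier would preferentially remove geometry in low-saliency regions; how the actual paper computes and applies saliency should be checked against the full text.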