2008 15th IEEE International Conference on Image Processing
DOI: 10.1109/icip.2008.4711952
A supervised texture-based active contour model with linear programming

Cited by 4 publications (3 citation statements)
References 12 publications
“…Unsupervised AC-based methods do not use any prior knowledge. In contrast, supervised AC-based methods rest on prior knowledge about the type and number of textures to be segmented [2,6,7,12]. Basically, supervised texture segmentation consists of discriminating texture features in order to build a partition of the image into homogeneous regions [8,12].…”
Section: Introduction
confidence: 94%
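To make the quoted notion of "discriminating texture features to build a partition" concrete, the following is a minimal, self-contained Python sketch of supervised texture segmentation. It is only an illustration of the general idea (local-variance features plus nearest-class-mean labeling from user-supplied exemplar masks); it is not the active-contour / linear-programming method of the paper above, and the function names and parameters are assumptions introduced here.

import numpy as np
from scipy.ndimage import uniform_filter

# Illustrative sketch of supervised texture-based pixel classification
# (not the paper's active-contour / linear-programming method).
# Texture feature: local variance in a sliding window.
def local_variance(img, size=9):
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return mean_sq - mean * mean

def supervised_partition(img, labeled_masks, size=9):
    """Assign each pixel to the texture class whose labeled exemplar
    has the closest mean feature value (prior knowledge = labeled_masks)."""
    feat = local_variance(img.astype(float), size)
    class_means = [feat[m].mean() for m in labeled_masks]
    dists = np.stack([np.abs(feat - c) for c in class_means])
    return np.argmin(dists, axis=0)

# Usage: two synthetic textures with different variance, plus small labeled seed regions.
rng = np.random.default_rng(0)
img = np.hstack([rng.normal(0, 0.2, (64, 32)), rng.normal(0, 1.0, (64, 32))])
seeds = [np.zeros_like(img, bool), np.zeros_like(img, bool)]
seeds[0][28:36, 4:12] = True      # exemplar of texture 0
seeds[1][28:36, 52:60] = True     # exemplar of texture 1
labels = supervised_partition(img, seeds)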
“…The region-based term is usually expressed as a domain integral, over the inside and outside regions, of a region descriptor that encodes the homogeneity criterion (Eq. 12). In the general case the function …”
Section: Region Descriptor
confidence: 99%
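For context, the "domain integral of the region descriptor" referred to in this excerpt is the standard region-based energy of active contour models. A minimal sketch in LaTeX follows; the Chan–Vese-style choice of descriptors and the symbols k_{in}, k_{out}, c_{in}, c_{out}, \Omega_{in}, \Omega_{out} are illustrative assumptions, not notation taken from the cited papers.

% Region-based energy (sketch): the contour C splits the image domain into
% \Omega_{in} and \Omega_{out}; k_{in}, k_{out} are region descriptors
% encoding the homogeneity criterion.
E(C) = \int_{\Omega_{in}} k_{in}\bigl(I(x)\bigr)\,dx
     + \int_{\Omega_{out}} k_{out}\bigl(I(x)\bigr)\,dx
     + \lambda\,\mathrm{length}(C),
\qquad\text{e.g.}\quad
k_{in}\bigl(I(x)\bigr) = \bigl(I(x) - c_{in}\bigr)^{2},\quad
k_{out}\bigl(I(x)\bigr) = \bigl(I(x) - c_{out}\bigr)^{2},

where c_{in} and c_{out} denote the mean image (or texture-feature) values inside and outside the contour, and \lambda weights the length regularization.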
“…Segmentation algorithms can be broadly categorized into three types, viz. supervised, unsupervised, and interactive. Supervised segmentation methods need manually labeled training data for recognizing a specific region of interest in images, which may restrict the scope of these methods [1][2][3][4]. Unsupervised (automatic) methods provide segmentation results without prior information about the input images and do not require manual intervention [5].…”
Section: Introduction
confidence: 99%