This paper empirically compares nine image dissimilarity measures based on distributions of color and texture features, summarizing over 1,000 CPU hours of computational experiments. Ground truth is collected via a novel random sampling scheme for color and via an image partitioning method for texture. Quantitative performance evaluations are given for classification, image retrieval, and segmentation tasks, covering a wide variety of dissimilarity measures. It is demonstrated that selecting a measure on the basis of large-scale evaluation substantially improves the quality of classification, retrieval, and unsupervised segmentation of color and texture images.
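As an illustration of the kind of bin-by-bin dissimilarity measures such an evaluation covers, the following minimal NumPy sketch implements three standard candidates (the L1 distance, the chi-square statistic, and the Jeffrey divergence) on normalized feature histograms. The function names and the smoothing constant `eps` are our own choices for illustration; the paper's full set of nine measures is not reproduced here.

```python
import numpy as np

def l1_distance(p, q):
    """Bin-by-bin L1 distance between two normalized histograms."""
    return np.abs(p - q).sum()

def chi_square(p, q, eps=1e-12):
    """Chi-square statistic, a classic bin-by-bin dissimilarity
    between empirical distributions."""
    m = 0.5 * (p + q)
    return 0.5 * ((p - q) ** 2 / (m + eps)).sum()

def jeffrey_divergence(p, q, eps=1e-12):
    """Symmetrized KL divergence against the bin-wise mean, with a
    small eps to keep empty bins from producing log(0)."""
    m = 0.5 * (p + q)
    return (p * np.log((p + eps) / (m + eps))
            + q * np.log((q + eps) / (m + eps))).sum()

# Example: dissimilarities between two random 8-bin color histograms.
rng = np.random.default_rng(0)
p = rng.random(8); p /= p.sum()
q = rng.random(8); q /= q.sum()
print(l1_distance(p, q), chi_square(p, q), jeffrey_divergence(p, q))
```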
We present a novel optimization framework for unsupervised texture segmentation that relies on statistical tests as a measure of homogeneity. Texture segmentation is formulated as a data clustering problem based on sparse proximity data. Dissimilarities between pairs of textured regions are computed from a multiscale Gabor filter image representation. We discuss and compare a class of clustering objective functions that is systematically derived from invariance principles. As a general optimization framework, we propose deterministic annealing based on a mean-field approximation. We present the canonical way to derive clustering algorithms within this framework, as well as an efficient implementation of mean-field annealing and the closely related Gibbs sampler. Both annealing variants are applied to Brodatz-like microtexture mixtures and real-world images.
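For a concrete picture of mean-field annealing on pairwise proximity data, here is a minimal sketch of a deterministic-annealing loop for pairwise clustering: soft assignments are repeatedly updated via a softmax over each point's expected assignment cost at the current temperature, and the temperature is lowered on a geometric schedule. The function name, schedule parameters, and the simplified mean-field update are hypothetical illustrations, not the authors' exact equations.

```python
import numpy as np

def mean_field_pairwise_clustering(D, k, t_start=1.0, t_stop=1e-3,
                                   anneal=0.9, n_sweeps=50, seed=0):
    """Deterministic-annealing sketch for pairwise data clustering.

    D : (n, n) symmetric dissimilarity matrix.
    Returns soft assignments q of shape (n, k).
    """
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    q = rng.dirichlet(np.ones(k), size=n)  # random initial soft assignments
    t = t_start
    while t > t_stop:
        for _ in range(n_sweeps):
            mass = q.sum(axis=0) + 1e-12  # expected cluster sizes
            # Mean field: expected cost of assigning point i to cluster v,
            # i.e. its average dissimilarity to the soft members of v.
            h = D @ q / mass
            # Softmax at temperature t (shifted row-wise for stability).
            q = np.exp(-(h - h.min(axis=1, keepdims=True)) / t)
            q /= q.sum(axis=1, keepdims=True)
        t *= anneal  # geometric cooling schedule
    return q

# Example: two low-dissimilarity blocks on the diagonal are recovered.
n = 40
D = np.ones((n, n)); D[:20, :20] = 0.1; D[20:, 20:] = 0.1
np.fill_diagonal(D, 0.0)
labels = mean_field_pairwise_clustering(D, k=2).argmax(axis=1)
print(labels)
```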
Image quantization and digital halftoning, two fundamental image processing problems, are generally performed sequentially and, in most cases, independently of each other. Color reduction with a pixel-wise defined distortion measure and the halftoning process with its local averaging neighborhood typically optimize different quality criteria or, frequently, follow a heuristic approach without reference to any quantitative quality measure. In this paper, we propose a new model to simultaneously quantize and halftone color images. The method is based on a rigorous cost-function approach that optimizes a quality criterion derived from a simplified model of human perception. It incorporates spatial and contextual information into the quantization and thus overcomes the artificial separation of quantization and halftoning. Optimization is performed by an efficient multiscale procedure that substantially alleviates the computational burden. The quality criterion and the optimization algorithms are evaluated on a representative set of artificial and real-world images, showing a significant image quality improvement over standard color reduction approaches. Applying the developed cost function, we also suggest a new distortion measure for evaluating the overall quality of color reduction schemes.
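The core idea, comparing the original and the quantized/halftoned image only after a spatial low-pass that stands in for the eye's local averaging, can be sketched as follows. This is an assumed simplification: the Gaussian as a stand-in for the perceptual filter, the function name, and the value of `sigma` are illustrative choices, not the paper's actual model of human perception.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def perceptual_cost(original, halftoned, sigma=1.5):
    """Sketch of a joint quantization/halftoning cost: mean squared
    error between the two images after per-channel Gaussian blurring,
    so that dither patterns are judged by their local averages rather
    than pixel by pixel. Inputs are (H, W, 3) arrays; `sigma` is an
    assumed parameter, not taken from the paper.
    """
    blur_o = gaussian_filter(original.astype(float), sigma=(sigma, sigma, 0))
    blur_h = gaussian_filter(halftoned.astype(float), sigma=(sigma, sigma, 0))
    return ((blur_o - blur_h) ** 2).mean()

# Example: a random image versus its 2-level (per-channel) halftone.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
halftone = (img > 0.5).astype(float)
print(perceptual_cost(img, halftone))
```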