Abstract. Retrieving images from a large and highly varied image data set based on their visual content is a challenging and important task. Problems such as how to bridge the semantic gap between image features and the user's intent have attracted a lot of attention from the research community. Recently, the 'bag of visual words' approach has exhibited very good performance in content-based image retrieval (CBIR). However, since the 'bag of visual words' approach represents an image as an unordered collection of local descriptors that use only intensity information, the resulting model provides little insight into the spatial constitution and color information of the image. In this paper, we develop a novel image representation method that uses a Gaussian mixture model (GMM) to provide spatial weighting for visual words, and we apply this method to content-based image retrieval. Our approach is simpler and more efficient than the order-less 'bag of visual words' approach. In our method, we first extract visual tokens from the image data set and cluster them into a lexicon of visual words. Then, we represent the spatial constitution of an image as a mixture of n Gaussians in the feature space and decompose the image into n regions. The spatial weighting scheme weights each visual word according to the probability that it belongs to each of the n regions of the image. The cosine similarity between spatially weighted visual word vectors is used as the distance measure between regions, while the image-level distance is obtained by averaging the pair-wise distances between regions. We compare the performance of our method with the traditional 'bag of visual words' and 'blobworld' approaches under the same image retrieval scenario. Experimental results demonstrate that our method is able to tell images apart at the semantic level and improves the performance of CBIR.
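The weighting and distance computation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are invented, and the per-descriptor region posteriors (`region_probs`) are assumed to come from an already-fitted n-component GMM over the image's feature space.

```python
import numpy as np

def spatial_weighted_vectors(word_ids, region_probs, vocab_size):
    """Build one visual-word vector per region: each occurrence of a
    visual word contributes to every region, weighted by the probability
    that the underlying descriptor belongs to that region."""
    n_regions = region_probs.shape[1]
    vecs = np.zeros((n_regions, vocab_size))
    for w, p in zip(word_ids, region_probs):
        vecs[:, w] += p  # p is the posterior over the n regions
    return vecs

def cosine(a, b):
    """Cosine similarity, with a zero-vector guard."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    if na == 0.0 or nb == 0.0:
        return 0.0
    return float(a @ b / (na * nb))

def image_distance(vecs1, vecs2):
    """Image-level distance: average of pairwise (1 - cosine) distances
    between the region vectors of the two images."""
    dists = [1.0 - cosine(r1, r2) for r1 in vecs1 for r2 in vecs2]
    return sum(dists) / len(dists)
```

For example, three descriptors quantized to words [0, 1, 1] with soft region assignments yield one weighted histogram per region, and two images whose region vectors are identical get distance 0.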
Many feature selection methods have been proposed, and most of them fall within the supervised learning paradigm. Recently, unsupervised feature selection has attracted a lot of attention, especially in bioinformatics and text mining. So far, supervised and unsupervised feature selection methods have been studied and developed separately, and a subset selected by a supervised feature selection method may not be a good one for unsupervised learning, and vice versa. In bioinformatics research, however, it is very common to perform clustering and classification iteratively on the same data sets, especially in gene expression analysis, so it is very desirable to have a feature selection method that works well for both unsupervised and supervised learning. In this paper we propose a novel feature selection algorithm based on feature clustering. Our algorithm does not need class label information and is therefore suitable for both supervised and unsupervised learning. It groups the features into different clusters based on feature similarity, so that the features in the same cluster are similar to each other. A representative feature is then selected from each cluster, which reduces feature redundancy. Our feature selection algorithm uses feature similarity for redundancy reduction but requires no feature search, and it works very well for high-dimensional data sets. We test our algorithm on several biological data sets for both clustering and classification analysis, and the results indicate that our FSFC algorithm can significantly reduce the original data sets without sacrificing the quality of clustering and classification.
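A minimal sketch of the cluster-then-select idea follows. The abstract does not specify the similarity measure or clustering procedure, so this example assumes absolute Pearson correlation as the feature similarity and a simple greedy grouping; the function name and threshold are illustrative, not the paper's FSFC algorithm.

```python
import numpy as np

def select_features(X, threshold=0.9):
    """Greedy feature clustering: each unassigned feature seeds a new
    cluster and absorbs all remaining features whose absolute Pearson
    correlation with it exceeds `threshold`; the seed is kept as the
    cluster's representative feature. No class labels are needed."""
    corr = np.abs(np.corrcoef(X, rowvar=False))  # feature-by-feature similarity
    remaining = list(range(X.shape[1]))
    selected = []
    while remaining:
        seed = remaining.pop(0)
        selected.append(seed)
        # drop features redundant with the representative
        remaining = [f for f in remaining if corr[seed, f] < threshold]
    return selected
```

Because each feature is visited once and redundancy is resolved by similarity alone, there is no combinatorial subset search, which is what makes this style of selection attractive for high-dimensional data.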