We present an approach based on multiple representations and multiple queries to tackle the problem of invariance in the framework of content-based image retrieval. We consider the case of textures. Rather than considering invariance at the representation level, this approach considers it at the query level. We use two models to represent the visual content of textures, namely the autoregressive model and a perceptual model based on a set of perceptual features. The perceptual model is used with two viewpoints: the original image's viewpoint and the autocovariance function viewpoint. After a brief presentation and discussion of these multiple representation models/viewpoints, we present the invariant texture retrieval algorithm. This algorithm performs results fusion (merging) at two levels: 1. the first level merges the results returned by the different models/viewpoints (representations) for the same query into a single results list using a linear results fusion model; 2. the second level merges the fused list of each query into a unique fused list using a round-robin fusion scheme. Experimental retrieval results and benchmarking based on precision/recall measures over a large image database show promising results.
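The two-level fusion scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exact weighting of the linear fusion model and the tie-breaking rules of the round-robin merge are assumptions, and the function names are hypothetical.

```python
def linear_fusion(result_lists, weights):
    """Level 1: merge the ranked lists returned by different
    models/viewpoints for the SAME query, using a weighted linear
    combination of similarity scores (assumed score-based fusion).
    result_lists: list of dicts mapping image id -> similarity score."""
    fused = {}
    for w, results in zip(weights, result_lists):
        for img, score in results.items():
            fused[img] = fused.get(img, 0.0) + w * score
    # Rank by fused score, best first.
    return sorted(fused, key=fused.get, reverse=True)

def round_robin_fusion(fused_lists):
    """Level 2: merge the fused lists of DIFFERENT queries by taking
    one item from each list in turn, skipping duplicates."""
    merged, seen = [], set()
    for rank in range(max(len(lst) for lst in fused_lists)):
        for lst in fused_lists:
            if rank < len(lst) and lst[rank] not in seen:
                seen.add(lst[rank])
                merged.append(lst[rank])
    return merged
```

For example, two representations scoring the same query can first be combined with `linear_fusion`, and the per-query results of several transformed queries then interleaved with `round_robin_fusion` to produce the final retrieval list.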
Introduction
Content-based image retrieval (CBIR) is a research area that has been active for more than a decade, and many research results and prototypes have been produced since then