According to these studies, texture can be conceptualised as a statistical or geometric repetition of primitive descriptors (micro-patterns) in the image, and measures such as roughness, regularity, linearity, frequency, directionality, granularity and density can be employed to discriminate between textures; one prominent example is the theory of textons proposed by Julesz (1981). While the surfaces of imaged objects exhibit an unbounded variety of textures, identifying an optimal texture analysis approach has proved extremely challenging. The substantial effort devoted by the vision community to texture analysis is nonetheless justified, as a robust texture descriptor would benefit a large spectrum of applications (Manjunath and Ma, 1996; Kovalev et al., 2001; Mäenpää et al., 2003; Nammalwar et al., 2003; Ghita et al., 2005; Rodriguez and Marcel, 2006; Xie and Mirmehdi, 2007; Tosun and Gunduz-Demir, 2011). Owing to its intrinsic complexity, this fundamental image property has been researched for decades, and there is broad consensus among vision researchers that texture analysis methods fall into four major categories: statistical, model-based, signal processing and structural, with statistical and signal processing techniques being the most investigated.
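To make the statistical category concrete, the sketch below computes one classic statistical texture measure: contrast derived from a gray-level co-occurrence matrix (GLCM). This is an illustrative, minimal implementation chosen for brevity, not the specific method of any of the works cited above; the function names, the 4-level quantisation, and the two toy patches are assumptions introduced here for the example.

```python
import numpy as np

def glcm(image, levels=4, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset (drow, dcol).

    Counts how often a pixel with grey level i has a neighbour (at the
    given offset) with grey level j, then normalises to probabilities.
    """
    dr, dc = offset
    P = np.zeros((levels, levels), dtype=float)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[image[r, c], image[r2, c2]] += 1
    return P / P.sum()

def contrast(P):
    """Haralick-style contrast: co-occurrences weighted by squared level difference."""
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))

# Two hypothetical 4x4 patches quantised to 4 grey levels:
# a blocky, smooth patch vs a rapidly alternating (rough) one.
smooth = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 3, 3],
                   [2, 2, 3, 3]])
rough = np.array([[0, 3, 0, 3],
                  [3, 0, 3, 0],
                  [0, 3, 0, 3],
                  [3, 0, 3, 0]])

print(contrast(glcm(smooth)))  # low: horizontal neighbours mostly share grey levels
print(contrast(glcm(rough)))   # high: horizontal neighbours always differ by 3 levels
```

The gap between the two contrast values is what a statistical classifier would exploit to discriminate the two textures; richer descriptors simply stack several such co-occurrence statistics over multiple offsets and orientations.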