“…Each keypoint-region is cut out, resampled, and Gaussian-smoothed relative to the octave where that particular keypoint was detected. A rotation-invariant descriptor, Invariant Features of Local Textures (IFLT) [12], is derived for every keypoint-region, thereby eliminating the dependency on rotation normalisation. Thus, we can afford to discard duplicate keypoints.…”
Section: Proposed Framework
“…Invariant Features of Local Textures (IFLT) [12] is a texture descriptor that is rotation invariant and partially illumination invariant. First-order finite directional differences with respect to a centre pixel are calculated in all directions, then Euclidean-normalised and Haar-wavelet filtered to obtain a texture measure.…”
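The three stages just described (directional differences, Euclidean normalisation, Haar-wavelet filtering) can be sketched for a single 3 × 3 neighbourhood. The neighbour ordering, the single-level Haar filter, and the function name `iflt_texture_measure` are illustrative assumptions, not the exact formulation of [12]:

```python
import math

def iflt_texture_measure(patch):
    """Sketch of an IFLT-style texture measure for one 3x3 neighbourhood.

    Stage 1: first-order finite differences, centre vs. its 8 neighbours.
    Stage 2: Euclidean (L2) normalisation, which removes intensity scaling.
    Stage 3: one level of an orthonormal Haar wavelet filter over the
             circularly ordered differences.
    """
    centre = patch[1][1]
    # The 8 neighbours in circular (clockwise) order around the centre.
    ring = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    diffs = [p - centre for p in ring]                     # stage 1
    norm = math.sqrt(sum(d * d for d in diffs))
    if norm > 0:
        diffs = [d / norm for d in diffs]                  # stage 2
    s = math.sqrt(2.0)                                     # stage 3:
    approx = [(a + b) / s for a, b in zip(diffs[0::2], diffs[1::2])]
    detail = [(a - b) / s for a, b in zip(diffs[0::2], diffs[1::2])]
    return approx + detail
```

Because the Haar filter here is orthonormal, the measure keeps unit Euclidean norm for any non-flat patch, and a flat patch maps to the zero vector.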
“…In this work, we use 128-bin histograms. The reader is referred to [12] for details. A simple flow-diagram of IFLT is provided in Figure 2.…”
“…These descriptors are inherently intensity and rotation invariant for a small 3 × 3 neighbourhood of pixels [12]. The directional differences retain the structural integrity, which can be harnessed for image-matching purposes.…”
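The intensity and rotation invariance claimed for a 3 × 3 neighbourhood can be checked with a small sketch. The circular neighbour ordering and the normalisation below are illustrative assumptions rather than the exact formulation of [12]:

```python
import math

def ring_differences(patch):
    """Normalised centre-vs-neighbour differences of a 3x3 patch,
    taken in circular (clockwise) order around the centre."""
    centre = patch[1][1]
    ring = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    diffs = [p - centre for p in ring]
    norm = math.sqrt(sum(d * d for d in diffs)) or 1.0
    return [d / norm for d in diffs]

def rotate90(patch):
    """Rotate a 3x3 patch by 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*patch)][::-1]

# A 90-degree rotation only shifts the circular ordering by two positions,
# and an affine intensity change a*I + b (a > 0) cancels out in the
# normalised differences -- so any order-independent summary of these
# values (e.g. a histogram) is rotation and intensity invariant.
```

This is why, in the framework above, no rotation normalisation of the keypoint-region is needed before the descriptor is computed.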
“…Can duplicate keypoints be discarded? In this paper, we propose a framework for image-matching using only unique keypoints, consisting of Hessian-Laplacian detectors and Invariant Features of Local Textures (IFLT) [12] descriptors. We present the new framework in Section II, followed by experimental analysis in Section III.…”
In this paper, we show that the features generated by the recently presented Invariant Features of Local Textures (IFLT) technique can be used in a SIFT-like framework to deliver real-time pointwise image matching with performance comparable to existing state-of-the-art image-matching systems. The proposed framework also saves a considerable amount of computation time.
Previous work has shown that the uniformity recognition of nonwovens can be considered a special case of pattern recognition. In this paper, a generalized framework for uniformity recognition based on computer vision and pattern recognition is introduced briefly. To validate the proposed framework, a case study is carried out. In the experimental section, the uniformity recognition of nonwovens is solved by unifying wavelet texture analysis, a generalized Gaussian density (GGD) model, and a learning vector quantization (LVQ) neural network. 625 nonwoven images of 5 different uniformity grades, 125 of each grade, are decomposed at four levels with five different wavelet bases of the Symlets family. The wavelet coefficients in each subband are independently modeled by the GGD model, and the scale and shape parameters of the GGD model are extracted with a maximum likelihood (ML) estimator as features to train and test the LVQ neural network. For comparison, two energy-based features are also extracted directly from the wavelet coefficients and jointly used as textural features. Experimental results from the 625 nonwoven samples indicate that the GGD parameters are more expressive and powerful in characterizing textures than the energy-based ones, especially when the number of decomposition levels is 4.
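The per-subband feature extraction described above can be sketched as follows. This is a minimal sketch assuming SciPy's generalized normal distribution (`scipy.stats.gennorm`) as the GGD model; the paper's own ML estimator and wavelet code may differ, and the function name `subband_features` is illustrative only:

```python
import numpy as np
from scipy.stats import gennorm

def subband_features(coeffs):
    """GGD shape/scale (ML estimates) plus two energy-based features
    for the coefficients of one wavelet subband."""
    c = np.asarray(coeffs, dtype=float).ravel()
    # ML fit with the location pinned at zero, since wavelet detail
    # coefficients are approximately zero-mean.
    shape, _, scale = gennorm.fit(c, floc=0)
    # Energy-based features used for comparison in the text.
    energy_l1 = np.mean(np.abs(c))
    energy_l2 = np.mean(c ** 2)
    return shape, scale, energy_l1, energy_l2
```

Per image, this would be applied to every detail subband of the four-level Symlets decomposition, and the concatenated feature vectors used to train and test the LVQ network.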