2016
DOI: 10.1007/s11263-016-0898-1
Foveated Nonlocal Self-Similarity

Abstract: When we gaze at a scene, our visual acuity is maximal at the fixation point (imaged by the fovea, the central part of the retina) and decreases rapidly towards the periphery of the visual field. This phenomenon is known as foveation. We investigate the role of foveation in nonlocal image filtering, installing a different form of self-similarity: the foveated self-similarity. We consider the image denoising problem as a simple means of assessing the effectiveness of descriptive models for natural images and we sho…

Cited by 31 publications (18 citation statements)
References 111 publications (137 reference statements)
“…Therefore, when we watch a point in an image, that point has the highest sensitivity, and the sensitivity drops with increasing distance from it. Inspired by this characteristic of the HVS, Alessandro Foi et al. [31] proposed computing patch similarity from the Euclidean distance $d_{\mathrm{FOV}}$ between the foveated patches, defined as $d_{\mathrm{FOV}}(I, x_1, x_2) = \| I^{x_1}_{\mathrm{FOV}} - I^{x_2}_{\mathrm{FOV}} \|_2^2$, where $I^{x_1}_{\mathrm{FOV}}$ and $I^{x_2}_{\mathrm{FOV}}$ denote the foveated patches obtained by foveating the image $I$ at the two fixation points $x_1$ and $x_2$. By applying the foveation operator $\mathcal{F}$ to the image $I$, the foveated patch $I^{x}_{\mathrm{FOV}}$ is produced as: $I^{x}$ …”
Section: Methods
Mentioning confidence: 99%
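The foveated patch distance quoted above can be sketched in a few lines of Python. This is an illustrative stand-in, not the construction of Foi et al.: here the foveation operator is approximated by a Gaussian blur whose standard deviation grows with distance from the fixation point, and the parameters `base_sigma` and `slope` are assumptions chosen only for demonstration.

```python
import numpy as np

def foveate_patch(image, center, radius=4, base_sigma=0.5, slope=0.3):
    """Crude foveation of a (2*radius+1)^2 patch around `center`.

    Each pixel of the patch is replaced by a Gaussian-weighted average
    whose width grows with the pixel's distance from the fixation point,
    mimicking the decay of visual acuity towards the periphery.
    """
    cy, cx = center
    patch = image[cy - radius:cy + radius + 1,
                  cx - radius:cx + radius + 1].astype(float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    out = np.empty_like(patch)
    for i in range(patch.shape[0]):
        for j in range(patch.shape[1]):
            dy, dx = i - radius, j - radius
            # blur width increases with eccentricity (distance from center)
            sigma = base_sigma + slope * np.hypot(dy, dx)
            w = np.exp(-((ys - dy) ** 2 + (xs - dx) ** 2) / (2.0 * sigma ** 2))
            w /= w.sum()
            out[i, j] = np.sum(w * patch)
    return out

def d_fov(image, x1, x2, radius=4):
    """Foveated distance: squared L2 norm between the two foveated patches."""
    p1 = foveate_patch(image, x1, radius)
    p2 = foveate_patch(image, x2, radius)
    return float(np.sum((p1 - p2) ** 2))
```

The distance is symmetric in the two fixation points and vanishes when they coincide, which is what makes it usable as a drop-in replacement for the plain Euclidean patch distance in a nonlocal-means weight.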
“…The effect of BM3D [5] and FNLM [6] denoising on the ability of descriptors to discriminate distinct local structure/texture was investigated using two pairs of OCT images, each acquired from the same patient. The ability of descriptors $\mathbf{d}_i$ and $\mathbf{d}_j$ in images $I_i$ and $I_j$ was quantified using the mutual descriptiveness $\mathrm{MD}(\mathbf{d}_i, \mathbf{d}_j, x, \mathcal{N}_q) = \frac{U_i(\mathbf{d}_i, \mathbf{d}_j, x, \mathcal{N}_q) + U_j(\mathbf{d}_j, \mathbf{d}_i, x, \mathcal{N}_q)}{2 + 2H(\mathbf{d}_i(x), \mathbf{d}_j(x))} \in [0, 1]$, combining the Huber dissimilarity measure $H$ and the uniqueness $U_i$, determined by the similarity of a descriptor to its neighboring descriptors in a neighborhood $\mathcal{N}_q$ in $I_i$ and $I_j$ as $U_i(\mathbf{d}_i, \mathbf{d}_j, x, \mathcal{N}_q) = \frac{1}{2|\mathcal{N}_q|} \sum_{q \in \mathcal{N}_q} \dots$”
Section: Methods
Mentioning confidence: 99%
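The combination step of the mutual-descriptiveness score can be sketched as below. This is a heavily hedged illustration: the citation statement truncates the definition of the uniqueness terms, so `u_i` and `u_j` are taken as precomputed inputs here, and the exact form of the Huber dissimilarity `H` is an assumption (a standard elementwise Huber penalty on the descriptor difference), not the definition from the cited paper.

```python
import numpy as np

def huber_dissimilarity(d1, d2, delta=1.0):
    """Summed elementwise Huber penalty on the descriptor difference.

    Hypothetical form: the source only names H as a "Huber dissimilarity
    measure" without giving its expression.
    """
    r = np.abs(np.asarray(d1, float) - np.asarray(d2, float))
    quad = 0.5 * r ** 2                  # quadratic branch for small residuals
    lin = delta * (r - 0.5 * delta)      # linear branch for large residuals
    return float(np.sum(np.where(r <= delta, quad, lin)))

def mutual_descriptiveness(u_i, u_j, d_i_x, d_j_x, delta=1.0):
    """MD = (U_i + U_j) / (2 + 2 * H(d_i(x), d_j(x))), per the quoted formula.

    u_i, u_j: uniqueness terms in [0, 1], assumed precomputed (their full
    definition is truncated in the citation statement).
    """
    h = huber_dissimilarity(d_i_x, d_j_x, delta)
    return (u_i + u_j) / (2.0 + 2.0 * h)
```

With uniqueness values in [0, 1] and a nonnegative H, the score stays in [0, 1]: identical descriptors give H = 0 and MD reduces to the mean of the two uniqueness terms.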
“…The registration method and D-MINDs are described in Section 2. Since OCT exhibits a poor signal-to-noise ratio [4], the effects of denoising methods (i.e., block-matching and 3D (BM3D) filtering [5] and foveated nonlocal means (FNLM) filtering [6]) on the ability of descriptors to encode local structure and/or texture in images are presented in Sections 3 and 4. These sections additionally demonstrate the performance of the D-MIND Demons method in cross-scanner intra- and inter-subject registration using clinical OCT data, compared to its variants, i.e., integrating D-MINDs with other descriptors including average image gradients (AG), compact histograms of image orientation (CHO), co-occurrence (COC) texture features [7], and run-length (RL) texture features [8].…”
Section: Introduction
Mentioning confidence: 99%
“…Clearly, more complex anisotropic patch shapes could have been used [30]. We consider this anisotropy a good trade-off between accuracy and complexity.…”
Section: E. Introduction of the Anisotropy
Mentioning confidence: 99%