2016
DOI: 10.4218/etrij.16.0115.0631
Blur Invariant Feature Descriptor Using Multidirectional Integral Projection

Abstract: Feature detection and description are key ingredients of common image processing and computer vision applications. Most existing algorithms focus on robust feature matching under challenging conditions, such as in-plane rotations and scale changes. Consequently, they usually fail when the scene is blurred by camera shake or an object's motion. To solve this problem, we propose a new feature description algorithm that is robust to image blur and significantly improves the feature matching performance. The propos…

Cited by 6 publications (5 citation statements)
References 24 publications
“…For instance, Tombari et al [46] developed more robust descriptors for identifying textureless image pairs, while Lee et al [47] proposed a descriptor with blur invariance by combining integral projections in four directions. However, these targeted descriptors only focus on specific challenging conditions and require extensive experimentation to enhance their matching robustness against various challenging factors.…”
Section: Feature Description
confidence: 99%
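The "integral projections in four directions" idea attributed to Lee et al. above can be sketched in NumPy. This is a minimal illustration of the general technique, not the authors' implementation; the patch size, the normalization step, and the handling of the diagonal directions are assumptions. The intuition is that blur spreads intensity locally along a line, so summing intensities along that line largely preserves the projected profile.

```python
import numpy as np

def integral_projections(patch):
    """Describe a square patch by its integral projections
    along four directions: horizontal, vertical, and both diagonals.

    Summing along a direction averages out local blur, which is why
    projection profiles are comparatively blur-robust.
    """
    p = np.asarray(patch, dtype=float)
    n = p.shape[0]
    horiz = p.sum(axis=1)  # sum each row  -> profile along the vertical axis
    vert = p.sum(axis=0)   # sum each column -> profile along the horizontal axis
    # sums along each diagonal (offsets -(n-1) .. n-1)
    diag = np.array([np.trace(p, offset=k) for k in range(-n + 1, n)])
    # anti-diagonals: flip left-right, then take diagonals
    anti = np.array([np.trace(np.fliplr(p), offset=k) for k in range(-n + 1, n)])
    desc = np.concatenate([horiz, vert, diag, anti])
    # L2 normalization for contrast robustness (an assumption, not from the paper)
    return desc / (np.linalg.norm(desc) + 1e-12)
```

For an 8×8 patch this yields an 8 + 8 + 15 + 15 = 46-dimensional unit vector; descriptors from two patches would then be compared with, e.g., Euclidean distance.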
“…This is because EAS is more likely to detect keypoints in flat regions while the current descriptors rely largely on the neighboring gradients of keypoints. To alleviate this problem, a possible solution is to combine EAS with blur-robust feature descriptors (e.g., [50]), which will be investigated in the future.…”
Section: Application To Feature Matching With Real Data
confidence: 99%
“…Moreover, the special filtering operation improves the localization accuracy and distinctiveness in the case of blurred images. The work in [38] used L0 gradient minimization [39] to make the descriptor blur invariant. L0 gradient minimization-based filtering was applied, but better results were obtained using guided image filtering [40] in the proposed scheme.…”
Section: A. Texture Smoothing
confidence: 99%
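The guided image filtering [40] mentioned in this last statement is a standard edge-preserving smoother. The following is a minimal single-channel sketch of the classic guided filter, not the code from the cited scheme; the box-filter radius `r` and regularization `eps` are arbitrary illustrative choices.

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1) x (2r+1) window via 2-D cumulative sums, edge-padded."""
    k = 2 * r + 1
    pad = np.pad(x, r, mode="edge")
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/column so window sums are differences
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(I, p, r=4, eps=1e-3):
    """Smooth p using I as the guide: output is locally a linear
    transform of the guide, so edges present in I are preserved."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    corr_Ip, corr_II = box_mean(I * p, r), box_mean(I * I, r)
    var_I = corr_II - mean_I ** 2
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)     # small eps -> more detail kept near edges
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)
```

In low-variance (flat) regions `a` is driven toward zero and the filter behaves like a box blur, while near strong edges `a` approaches one and the guide's structure passes through; this selectivity is what makes it attractive for descriptor preprocessing on blurred images.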