2005
DOI: 10.1201/9781420026986.ch6

Entropic Graphs for Registration

Abstract: In many applications, fusion of images acquired via two or more sensors requires image alignment to an identical pose, a process called image registration. Image registration methods select a sequence of transformations to maximize an image similarity measure. Recently a new class of entropic-graph similarity measures was introduced for image registration, feature clustering and classification. This chapter provides an overview of entropic graphs in image registration and demonstrates their performance advantages…

Cited by 17 publications (15 citation statements)
References 64 publications
“…Minimum spanning tree (MST) [12] and k-nearest neighbor (kNN) [11,15] graphs are among the different methods for estimating α-MI from multi-feature samples. With N samples, the complexities of constructing the MST and kNN graphs are O(N² log N) and O(N log N), respectively [14]. Therefore, we choose kNN.…”
Section: Self-similarity α-MI (SeSaMI)
confidence: 99%
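The complexity trade-off quoted above can be made concrete with a small sketch. The following is my own illustration, not code from the chapter: it builds a kNN graph by brute force (O(N²); kd-trees give the O(N log N) construction the quote refers to) and plugs the total power-weighted edge length into a Beardwood–Halton–Hammersley-style Rényi entropy estimate. The graph constant `c` is generally unknown, but it cancels when the estimate is only compared across candidate alignments.

```python
import numpy as np

def knn_graph_length(X, k=4, gamma=1.0):
    """Total power-weighted edge length of the k-NN graph.
    Brute-force O(N^2) distance matrix for clarity; a kd-tree
    would bring construction down to O(N log N)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-edges
    knn_d = np.sqrt(np.sort(d2, axis=1)[:, :k])
    return (knn_d ** gamma).sum()

def renyi_entropy_estimate(X, alpha=0.5, k=4, c=1.0):
    """Entropic-graph estimate of the Renyi alpha-entropy:
    H_alpha ~ log(L_gamma / (c * N^alpha)) / (1 - alpha),
    with gamma = d * (1 - alpha).  The constant c depends on the
    graph and dimension and is left at 1 here (it cancels in
    registration, where only differences of estimates matter)."""
    N, d = X.shape
    gamma = d * (1.0 - alpha)
    L = knn_graph_length(X, k=k, gamma=gamma)
    return np.log(L / (N ** alpha) / c) / (1.0 - alpha)
```

As a sanity check, scaling the sample cloud up (spreading the points out) increases the estimated entropy, as it should.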
“…The α-mutual information (α-MI) similarity metric [11,12,13,14,15] is also graph-based and has recently been shown to outperform MI in nonrigid registration applications. Therefore, we choose to incorporate self-similarity into this powerful registration framework.…”
Section: Introductionmentioning
confidence: 99%
“…Note that in such a minimum spanning tree, the mixed-dataset samples that share edges only with other mixed-dataset samples would be the ones that reduce a graph-theoretic estimate of the Henze-Penrose affinity between the datasets [12,16,13]. Note also that this strategy produces only a single false alarm rate and a single detection rate, since the detection rule cannot be varied by changing a threshold as in the previous two cases; a receiver operating characteristic curve can nonetheless be constructed by joining the paired false alarm and detection rates to the origin on one side, and to the paired unit false alarm and detection rates on the other.…”
Section: Detection Performance On Synthetic Data
confidence: 99%
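The MST-based estimate referenced above is essentially the Friedman–Rafsky construction: pool the two datasets, build one MST, and count edges whose endpoints carry different dataset labels. Below is a minimal sketch under stated assumptions (equal sample sizes, the common 2R/N normalization, my own function names, and a simple O(N²) Prim implementation), not the cited papers' code.

```python
import numpy as np

def mst_edges(X):
    """Prim's algorithm on the complete Euclidean graph; O(N^2),
    matching the MST construction cost quoted for entropic graphs.
    Squared distances give the same tree as Euclidean distances."""
    N = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    in_tree = np.zeros(N, dtype=bool)
    in_tree[0] = True
    best = d2[0].copy()               # cheapest link from tree to each node
    parent = np.zeros(N, dtype=int)   # tree endpoint of that cheapest link
    edges = []
    for _ in range(N - 1):
        j = int(np.argmin(np.where(in_tree, np.inf, best)))
        edges.append((int(parent[j]), j))
        in_tree[j] = True
        closer = d2[j] < best
        best[closer] = d2[j][closer]
        parent[closer] = j
    return edges

def hp_affinity(X1, X2):
    """Cross-edge estimate of the Henze-Penrose affinity, 2R/N for
    equal sample sizes, where R counts MST edges joining the two
    datasets: near 1 when both datasets come from the same
    distribution, near 0 when they are well separated."""
    X = np.vstack([X1, X2])
    label = np.r_[np.zeros(len(X1)), np.ones(len(X2))]
    R = sum(label[a] != label[b] for a, b in mst_edges(X))
    return 2.0 * R / len(X)
```

For two far-apart clusters the MST crosses between datasets essentially once (the bridge edge), so the estimate collapses toward 0; for two samples of the same distribution roughly half of all edges are cross-edges and the estimate approaches 1.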
“…The significance of a as the specificity threshold is in ensuring that no more than a fraction a of the samples in either specific group is included in that group erroneously. A second measure of class overlap can be defined, inspired by the Henze-Penrose affinity [12,13], which computes the integral ∫ 2p1(x)p2(x) / (p1(x) + p2(x)) dx for any given probability distributions p1(x) and p2(x), and goes to 1 when p1(x) = p2(x) for all x. We define the measure M_HP-like(x) for a sample x as a variant of the integrand above…”
Section: Class Overlap Measures Based On Posterior Distribution Estim…
confidence: 99%
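The integrand reconstructed above, 2p1(x)p2(x)/(p1(x)+p2(x)), is the pointwise harmonic mean of the two densities. A quick numerical check (my own illustration, with univariate Gaussians standing in for p1 and p2, and hypothetical function names) confirms the quoted property that the integral reaches 1 only when the two distributions coincide:

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    # Univariate Gaussian density, used as a stand-in for p1 and p2.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def hp_like(p1x, p2x, eps=1e-300):
    # Pointwise overlap measure: harmonic mean 2*p1*p2/(p1+p2),
    # with eps guarding against 0/0 where both densities vanish.
    return 2.0 * p1x * p2x / (p1x + p2x + eps)

def overlap_integral(mu1, mu2, sigma=1.0, lo=-20.0, hi=20.0, n=20001):
    # Riemann-sum approximation of the Henze-Penrose-style integral.
    x = np.linspace(lo, hi, n)
    dx = x[1] - x[0]
    return float(hp_like(gauss_pdf(x, mu1, sigma),
                         gauss_pdf(x, mu2, sigma)).sum() * dx)
```

With identical distributions the integrand reduces to the density itself, so the integral is 1; shifting one mean to 5 leaves almost no overlap and drives the integral toward 0.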
“…Whereas mutual information and its variations arise as a first choice [8], generalized information measures, such as f-informations [11,12] and Jensen f-divergences [9], have been shown to be relevant alternatives in specific clinical contexts.…”
Section: Introductionmentioning
confidence: 99%