2015
DOI: 10.1016/j.ins.2014.10.029

Feature Guided Biased Gaussian Mixture Model for image matching

Cited by 20 publications (5 citation statements, published 2017–2024)
References 55 publications
“…The straightforward consensus algorithm, majority voting (MV), has been thoroughly studied [15], [32]. Beyond MV, many consensus algorithms infer the latent true labels through a probabilistic model [1], [39]. The classic approach, DS [5], proposed by Dawid and Skene, models a confusion matrix for each annotator together with the class prior and uses an EM procedure to infer the estimated labels of the examples.…”
Section: Related Work
confidence: 99%
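
The DS procedure quoted above reduces to a short EM loop. Below is a minimal sketch, assuming a dense matrix `labels[i, j]` holding annotator j's label for example i and every example labelled by every annotator; the function and variable names are illustrative, not taken from the cited papers.

```python
import numpy as np

def dawid_skene(labels, n_classes, n_iter=50):
    """labels: (n_items, n_annotators) integer array of observed labels."""
    n_items, n_annot = labels.shape
    # Initialise the posterior over true labels with majority voting (MV)
    post = np.zeros((n_items, n_classes))
    for k in range(n_classes):
        post[:, k] = (labels == k).sum(axis=1)
    post /= post.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: class prior and one confusion matrix per annotator
        prior = post.mean(axis=0)
        conf = np.zeros((n_annot, n_classes, n_classes))
        for j in range(n_annot):
            for k in range(n_classes):
                # expected count of true class t being observed as label k
                conf[j, :, k] = post.T @ (labels[:, j] == k)
        conf /= conf.sum(axis=2, keepdims=True) + 1e-12

        # E-step: posterior over the true label of each example
        log_post = np.tile(np.log(prior + 1e-12), (n_items, 1))
        for j in range(n_annot):
            log_post += np.log(conf[j][:, labels[:, j]].T + 1e-12)
        post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
    return post.argmax(axis=1), post
```

Initialising the posterior with MV is the usual choice; each M-step then re-estimates the annotators' confusion matrices from the soft labels, and the E-step re-weights the votes by the estimated annotator reliability.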
“…We can use any existing maximum-clique search algorithm to solve Equation (20). As far as we know, the recent MC algorithm proposed in [38], which combines tree search with efficient bounding and pruning based on graph coloring, can solve the maximum clique problem efficiently.…”
Section: Graph Construction and Maximum Clique Algorithm
confidence: 99%
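
The bounding idea in that quote fits in a few lines: a greedy colouring partitions the candidate set into independent sets, and since a clique can take at most one vertex per colour class, the class index bounds how far the current clique can grow. The sketch below is an illustrative toy in that spirit, not the implementation from reference [38].

```python
def max_clique(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    best = []

    def color_sort(cands):
        # Greedy colouring: a vertex placed in colour class c can extend
        # the current clique by at most c vertices (one per class).
        classes = []
        for v in cands:
            for cls in classes:
                if not (adj[v] & cls):     # v is non-adjacent to all of cls
                    cls.add(v)
                    break
            else:
                classes.append({v})
        order, bounds = [], []
        for c, cls in enumerate(classes, start=1):
            for v in cls:
                order.append(v)
                bounds.append(c)
        return order, bounds

    def expand(clique, cands):
        nonlocal best
        order, bounds = color_sort(cands)
        # Branch from the highest-bound vertex downwards
        for i in range(len(order) - 1, -1, -1):
            if len(clique) + bounds[i] <= len(best):
                return                     # colouring bound: cannot beat best
            v = order[i]
            new_cands = cands & adj[v]
            if new_cands:
                expand(clique + [v], new_cands)
            elif len(clique) + 1 > len(best):
                best = clique + [v]
            cands = cands - {v}

    expand([], set(adj))
    return best

# Toy usage: the triangle {1, 2, 3} is the maximum clique here
print(sorted(max_clique({1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}})))
```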
“…In general, feature-based methods consist of two main steps: feature extraction and feature matching. In recent years, a number of feature-based methods [13][14][15][16][17][18][19][20][21][22] have been proposed. These algorithms share several common steps: keypoint detection, keypoint description, and keypoint matching.…”
Section: Introduction
confidence: 99%
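
Those three steps map directly onto standard library calls. A minimal sketch with OpenCV's SIFT, assuming OpenCV ≥ 4.4 and two placeholder image paths:

```python
import cv2

img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
# 1) keypoint detection and 2) keypoint description in one call
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 3) keypoint matching: nearest neighbour on descriptor distance
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} putative correspondences")
```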
“…In the shallow layers of the network, it is hard to learn appropriate weights because the features in these layers are less recognizable. In fact, the Lowe ratio (Lowe 2004), i.e., the side information generated during feature matching, has been shown to be powerful prior information for determining the confidence of each point being an inlier (Goshen and Shimshoni 2008; Brahmachari and Sarkar 2009; Sun et al. 2015; Tao and Sun 2014). Based on this observation, we propose a Bayesian attentive context normalization (BACN) that mines this prior information to better suppress the noise that outliers introduce into the global context.…”
Section: Introduction
confidence: 99%
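
The Lowe ratio mentioned in that quote compares the distance to a descriptor's best match against the distance to its second-best match. The sketch below computes it with OpenCV and maps it to a per-match prior; the `1 - ratio` mapping is an illustrative choice here, not the BACN formulation from the quote.

```python
import cv2
import numpy as np

img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
_, des1 = sift.detectAndCompute(img1, None)
_, des2 = sift.detectAndCompute(img2, None)

# For each descriptor, retrieve its two nearest neighbours in the other image
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)

# Lowe ratio: best-match distance over second-best-match distance
ratios = np.array([m.distance / (n.distance + 1e-12) for m, n in knn])
# The classic test keeps matches with ratio < 0.8; as side information, a
# smaller ratio can instead be mapped to a higher prior inlier confidence
prior = np.clip(1.0 - ratios, 0.0, 1.0)
keep = ratios < 0.8
print(f"kept {keep.sum()} of {len(knn)} matches, mean prior {prior[keep].mean():.2f}")
```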