2014
DOI: 10.2197/ipsjtcva.6.143

Hyper-renormalization: Non-minimization Approach for Geometric Estimation

Abstract: The technique of "renormalization" for geometric estimation attracted much attention when it appeared in the early 1990s for having higher accuracy than any other method then known. The key fact is that it directly specifies the equations to solve, rather than minimizing some cost function. This paper expounds this "non-minimization approach" in detail and exploits the principle to modify renormalization so that it outperforms the standard reprojection error minimization. Doing a precise error analysis in t…
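
To make the "non-minimization" idea concrete: renormalization-type methods estimate the parameter vector θ of a constraint (ξ_α, θ) = 0 not by minimizing a cost but by directly solving an estimating equation, in practice a generalized eigenvalue problem. The sketch below shows only that generic template; the specific normalization matrix N that defines hyper-renormalization is derived in the paper, so the weighting shown here is an assumed illustration, not the paper's exact matrices.

% Generic renormalization-type estimating equation (a sketch; the exact
% hyper-renormalization choice of N is derived in the paper).
% The data vectors \xi_\alpha satisfy (\xi_\alpha, \theta) = 0 in the absence
% of noise and have normalized covariance matrices V_0[\xi_\alpha].
\[
  M(\theta)\,\theta = \lambda\, N\,\theta, \qquad
  M(\theta) = \frac{1}{n}\sum_{\alpha=1}^{n} W_\alpha\, \xi_\alpha \xi_\alpha^{\top}, \qquad
  W_\alpha = \frac{1}{(\theta,\; V_0[\xi_\alpha]\,\theta)} .
\]
% The solution \theta is the generalized eigenvector for the eigenvalue \lambda
% closest to zero; the weights W_\alpha are recomputed from the current estimate
% and the problem is re-solved until convergence. The matrix N is chosen so that
% the statistical bias of the resulting \theta is removed, which is what
% distinguishes renormalization from plain weighted least squares.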

Cited by 12 publications (6 citation statements)
References 24 publications
“…The curvature bias correction has not been explicitly represented: G₀ here is the uncorrected vector of coefficients. The results to this point are in practice very similar to those of [10], although there are two significant differences in approach. The first difference is that in this paper I have minimised the bias from each source separately: the normalisation matrix is chosen to remove normalisation bias; reweighting bias is minimised by evaluating the gradient at a point consistent with the current best-fit model; and the curvature bias is corrected as a separate, final step.…”
Section: Dependence On Measurement Noise (supporting)
confidence: 65%
“…The first difference is that in this paper I have minimised the bias from each source separately: the normalisation matrix is chosen to remove normalisation bias; reweighting bias is minimised by evaluating the gradient at a point consistent with the current best-fit model; and the curvature bias is corrected as a separate, final step. In contrast, the method of [10] chooses the normalisation matrix to minimise all three biases simultaneously, including that from Sampson reweighting. One advantage of separate treatment is that the normalisation matrix may be calculated by the same method both before and after reweighting.…”
Section: (mentioning)
confidence: 99%
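
The statement above contrasts correcting each bias source separately with the approach of [10], where the normalisation matrix is chosen to cancel the biases together. Both share the same iterative skeleton: reweight using the current best-fit estimate, then re-solve a generalized eigenvalue problem. The sketch below illustrates only that skeleton under assumed names (the function renormalize_fit, the simple choice N = (1/n) Σ W_α V_0[ξ_α], and SciPy's generalized eigensolver); it is not the exact procedure of either paper.

import numpy as np
from scipy.linalg import eig

def renormalize_fit(xis, V0s, n_iter=100, tol=1e-12):
    """Generic renormalization-style iteration (illustrative sketch only).

    xis  : (n, d) data vectors xi_alpha with constraint (xi_alpha, theta) = 0
    V0s  : (n, d, d) normalized covariance matrices V_0[xi_alpha]
    Returns a unit-norm parameter vector theta.
    """
    n, d = xis.shape
    W = np.ones(n)                     # start from uniform weights
    theta = np.zeros(d)
    for _ in range(n_iter):
        # Weighted moment matrix and a simple normalization matrix.
        M = sum(W[a] * np.outer(xis[a], xis[a]) for a in range(n)) / n
        N = sum(W[a] * V0s[a] for a in range(n)) / n
        # Solve M theta = lambda N theta; take the eigenvalue closest to zero.
        vals, vecs = eig(M, N)
        k = np.argmin(np.abs(vals))
        theta_new = np.real(vecs[:, k])
        theta_new /= np.linalg.norm(theta_new)
        # Stop when the estimate no longer changes (up to sign).
        if min(np.linalg.norm(theta_new - theta),
               np.linalg.norm(theta_new + theta)) < tol:
            theta = theta_new
            break
        theta = theta_new
        # Reweight at the current best-fit estimate, as in Sampson-type schemes.
        W = np.array([1.0 / (theta @ V0s[a] @ theta) for a in range(n)])
    return theta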
“…Because of its complexity, we also propose a simplified version of Hyper LS that produces virtually the same results for relatively large sample sizes. Following [16,17], we call this method the Semi-Hyper method.…”
Section: Hyper-accurate Methods (mentioning)
confidence: 99%