2016
DOI: 10.48550/arxiv.1610.05712
Preprint

Fast L1-NMF for Multiple Parametric Model Estimation

Mariano Tepper,
Guillermo Sapiro

Abstract: In this work we introduce a comprehensive algorithmic pipeline for multiple parametric model estimation. The proposed approach analyzes the information produced by a random sampling algorithm (e.g., RANSAC) from a machine learning/optimization perspective, using a parameterless biclustering algorithm based on L1 nonnegative matrix factorization (L1-NMF). The proposed framework exploits consistent patterns that naturally arise during the RANSAC execution, while explicitly avoiding spurious inconsistencies. …
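The biclustering idea in the abstract can be illustrated with a minimal sketch. This is not the authors' L1-NMF algorithm (which uses an L1 loss and is parameterless); it is a plain rank-1 NMF with Frobenius-loss multiplicative updates, run on a toy RANSAC preference matrix, just to show how a rank-1 factor picks out one model together with its inliers. The matrix `P` and the 0.5 threshold are illustrative choices, not from the paper.

```python
import numpy as np

def rank_one_nmf(P, n_iter=200, eps=1e-9):
    """Rank-1 NMF via multiplicative updates (Frobenius loss).

    Stand-in sketch for the paper's L1-NMF biclustering step: a rank-1
    factor (u, v) of the RANSAC preference matrix P marks a bicluster,
    i.e. one parametric model (columns) and its inlier points (rows).
    """
    m, n = P.shape
    u = np.full(m, P.mean() + eps)
    v = np.full(n, 1.0)
    for _ in range(n_iter):
        u *= (P @ v) / (u * (v @ v) + eps)  # update row factor
        v *= (P.T @ u) / (v * (u @ u) + eps)  # update column factor
    return u, v

# Toy preference matrix: rows = data points, columns = sampled model
# hypotheses; P[i, j] = 1 when point i is an inlier of hypothesis j.
P = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)

u, v = rank_one_nmf(P)
points = np.where(u > 0.5 * u.max())[0]  # points in the dominant bicluster
models = np.where(v > 0.5 * v.max())[0]  # hypotheses supporting it
```

In the full pipeline one would extract a bicluster, subtract or mask it out of `P`, and repeat, so that each parametric model instance emerges as one bicluster.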

Cited by 2 publications (2 citation statements)
References 28 publications
“…Outlier rejection. Following the route of the a-contrario approaches, outlying models are pruned using the statistical validation technique described in [23]; the main difference is that we use it as a post-processing outlier-rejection criterion on the attained models, rather than as a pre-processing refinement of sampled model hypotheses. The idea is to compute the distribution of the cardinality of a model's consensus set as its inlier threshold varies, and to express its unlikeliness in terms of the NFA.…”
Section: Table 1: Model Selection Parameters (mentioning, confidence: 99%)
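The NFA (Number of False Alarms) mentioned in this citation statement can be sketched under the standard a-contrario binomial model: count how likely it is that a model gathers at least `k` inliers purely by chance, then scale by the number of tested hypotheses. The function below is an assumed generic form, not the exact validation procedure of [23].

```python
from math import comb

def nfa(n_points, k_inliers, p_random, n_tests):
    """Number of False Alarms for a model with k inliers out of n points.

    a-contrario sketch (assumed binomial form): the tail probability that
    >= k of n points fall inside the inlier band by chance, each
    independently with probability p_random, scaled by the number of
    tested model hypotheses. NFA << 1 marks the model as meaningful;
    NFA >= 1 marks it as explainable by noise and prunes it.
    """
    tail = sum(comb(n_points, i) * p_random**i * (1 - p_random)**(n_points - i)
               for i in range(k_inliers, n_points + 1))
    return n_tests * tail
```

For example, 50 inliers out of 100 points at a 10% chance level is extremely unlikely by accident (NFA far below 1), while 10 inliers out of 100 is exactly what noise produces (NFA above 1), so that model would be rejected.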
“…Information criterion on the number of biclusters K. Aside from the statistical-test-based methods, some studies have proposed determining K based on the minimum description length [44,50,57] or on a modified DIC for the biclustering problem [11,12]. In particular, under the regular-grid constraint on the bicluster structure, an information criterion called the integrated completed likelihood (ICL) has been proposed for determining K [15,33,56]; it approximates the maximum marginal likelihood for a given K. These methods aim to select the optimal number of biclusters K from a given set of candidates in terms of some criterion (e.g., marginal likelihood).…”
Section: Introduction (mentioning, confidence: 99%)