2019
DOI: 10.1137/18m1174544

An Optimization Parameter for Seriation of Noisy Data

Abstract: A square symmetric matrix is a Robinson similarity matrix if entries in its rows and columns are non-decreasing when moving towards the diagonal. A Robinson similarity matrix can be viewed as the affinity matrix between objects arranged in linear order, where objects closer together have higher affinity. We define a new parameter, Γ₁, which measures how badly a given matrix fails to be Robinson similarity. Namely, a matrix is Robinson similarity precisely when its Γ₁ attains zero, and a matrix with small Γ…
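
To make the defining property concrete, the following is a minimal sketch (not taken from the paper) of a check for the Robinson similarity condition stated in the abstract; the NumPy setup, the function name is_robinson_similarity, and the tolerance parameter are illustrative assumptions.

import numpy as np

def is_robinson_similarity(A, tol=1e-12):
    """Check the condition from the abstract: within every row, entries are
    non-decreasing when moving towards the main diagonal.  For a symmetric
    matrix the column condition then follows automatically."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    for i in range(n):
        # left of the diagonal: entries grow (weakly) as j increases towards i
        for j in range(i):
            if A[i, j] > A[i, j + 1] + tol:
                return False
        # right of the diagonal: entries shrink (weakly) as j moves away from i
        for j in range(i, n - 1):
            if A[i, j + 1] > A[i, j] + tol:
                return False
    return True

# Example: objects in linear order, where closer pairs have higher affinity
A = np.array([[1.0, 0.8, 0.2],
              [0.8, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
print(is_robinson_similarity(A))  # True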

Cited by 7 publications (3 citation statements) · References 20 publications (28 reference statements)

“…In [GJ19], the authors introduced a function Γ₁ on the space of matrices, which attains 0 exactly when it is applied to a Robinson matrix. While Γ₁ is easy to compute, it fails to be continuous in cut-norm (or equivalently the graph limit topology).…”
Section: Similar Graph Parameters
confidence: 99%
“…Chepoi and Seston [10] presented a factor-16 approximation for the ℓ∞-fitting problem. For a similarity matrix A, Ghandehari and Janssen [18] introduced a parameter Γ₁(A) and showed that one can construct a Robinson similarity R (with the same order of rows and columns as A) such that ‖A − R‖₁ ≤ 26 Γ₁(A)^{1/3}. The result of Atkins et al. [3] in the case of Fiedler vectors with no repeated values was generalized by Fogel et al. [17] to the case when the entries of A are subject to uniform noise or some entries are not given.…”
Section: Introduction
confidence: 99%
“…One's first thought for this problem is simply to compare every entry in the matrix with its preceding neighbors and add up the "errors"; clearly, the matrix is Robinson precisely when this total is 0. In [22], the authors defined a parameter Γ₁ that does this, and while Γ₁ is easy to compute, it fails to be continuous in cut norm (or equivalently the graph limit topology). Thus Γ₁ is not a suitable Robinson measurement for growing networks, as it does not respect limits of graph sequences.…”
Section: Introduction
confidence: 99%
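
As a rough illustration of that "add up the errors" idea, here is a hypothetical sketch (not the exact definition of Γ₁ from [22]) that totals, row by row, how much each off-diagonal entry exceeds its neighbor one step closer to the diagonal; the total is zero exactly for Robinson similarity matrices. The function name robinson_violation_total is an illustrative assumption.

import numpy as np

def robinson_violation_total(A):
    """Naive "badness" score: sum, over all off-diagonal entries, of how much
    the entry exceeds its neighbor one step closer to the diagonal in the
    same row.  Zero exactly when A is a Robinson similarity matrix; this is
    only the naive idea quoted above, not the Γ₁ parameter of [22]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(i):           # left of the diagonal
            total += max(0.0, A[i, j] - A[i, j + 1])
        for j in range(i, n - 1):    # right of the diagonal
            total += max(0.0, A[i, j + 1] - A[i, j])
    return total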