2015
DOI: 10.1007/978-3-319-24486-0_16
Information Preserving Dimensionality Reduction

Cited by 1 publication (2 citation statements)
References 4 publications
“…We associate with Z a distance function d_Z : Z × Z → ℝ, and for convenience write d_Z(X_1, X_2) for d_Z(g(X_1), g(X_2)). To characterize the behavior of label distributions with respect to the embedding space, we adapt the idea of conditional probabilistic Lipschitzness in [35,47] to present a variant of probabilistic bi-Lipschitzness: Definition 1. (L, M)-Lipschitzness.…”
Section: Preliminaries
confidence: 99%
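The quoted Definition 1 is truncated in this excerpt. As a hedged sketch only, one common formulation of (non-conditional) probabilistic Lipschitzness that the cited works build on can be written as follows; the symbols φ, λ, D, and d are notational assumptions here, and the actual (L, M)-bi-Lipschitz variant in the citing paper may differ in detail:

```latex
% Sketch of a standard probabilistic-Lipschitzness condition
% (assumed notation; not the exact truncated Definition 1).
% A labeling function f is \phi-probabilistically Lipschitz with
% respect to a distribution D and metric d if, for every \lambda > 0,
\[
  \Pr_{x \sim D}\!\Big[
    \Pr_{x' \sim D}\big[\, \lvert f(x) - f(x') \rvert > \lambda \, d(x, x') \,\big] > 0
  \Big] \;\le\; \phi(\lambda),
\]
% where \phi : \mathbb{R}_{>0} \to [0,1] bounds the probability mass of
% points around which labels change faster than the rate \lambda.
```

A bi-Lipschitz variant would additionally bound how slowly labels can change, i.e. constrain the distance from below as well as above.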
“…Theoretically, we provide a sequence of results evaluating how much signal we can extract from pre-trained embeddings using EPOXY. First, we define a notion of probabilistic Lipschitzness [35] to describe the label space's smoothness, with which we can tightly characterize the improvement in rate of convergence and generalization error of EPOXY in the size of the (unlabeled) data over WS without source extensions. We show that improvement depends on two factors: the increase in coverage from extending weak labels, and the accuracy of the extension.…”
Section: Introduction
confidence: 99%