2020
DOI: 10.48550/arxiv.2011.14688
Preprint

Can neural networks learn persistent homology features?

Abstract: Topological data analysis uses tools from topology, the mathematical area that studies shapes, to create representations of data. In particular, in persistent homology, one studies one-parameter families of spaces associated with data, and persistence diagrams describe the lifetime of topological invariants, such as connected components or holes, across the one-parameter family. In many applications, one is interested in working with features associated with persistence diagrams rather than the diagrams themse…
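To make the pipeline in the abstract concrete, here is a minimal sketch of computing a persistence diagram from a point cloud. It assumes the gudhi library; the noisy-circle data, the Vietoris-Rips filtration, and all parameter values are illustrative choices, not taken from the paper.

```python
# Minimal persistent-homology sketch (assumes `pip install gudhi numpy`):
# build a Vietoris-Rips filtration on a noisy circle and read off the
# persistence diagram. Long-lived 1-dimensional classes detect the hole.
import numpy as np
import gudhi

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 100)
points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(100, 2))

rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
simplex_tree = rips.create_simplex_tree(max_dimension=2)

# Each entry is (dimension, (birth, death)): the lifetime of a connected
# component (dim 0) or hole (dim 1) across the one-parameter family.
diagram = simplex_tree.persistence()
holes = [(b, d) for dim, (b, d) in diagram if dim == 1]
print("most persistent 1-cycle:", max(holes, key=lambda bd: bd[1] - bd[0]))
```

The long-lived point in the dimension-1 diagram reflects the circle's single hole, while short-lived points are noise; features derived from such diagrams are the quantities the paper asks neural networks to learn.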

Cited by 6 publications (7 citation statements)
References 10 publications

“…Note, however, that the stability results also depend on the choice of filtration, persistence signature and metric [84]. Finally, we note that there has recently been a lot of effort in trying to train neural networks to learn what the best PH signature is for specific types of applications [19,36,59,74].…”
Section: E2 Discussion of PH Pipeline for Other Applications (mentioning)
confidence: 99%

“…Topological Methods in ML. With the marriage of homology and ML (Hensel, Moor, and Rieck 2021; Love et al. 2021; Montúfar, Otter, and Wang 2020; Hofer, Kwitt, and Niethammer 2019), it did not take long for GNNs to meet their higher-dimensional counterparts in the form of simplicial (Bodnar et al. 2021b; Ebli, Defferrard, and Spreemann 2020; Bunch et al. 2020), cell (Hajij, Istvan, and Zamzmi 2021; Bodnar et al. 2021a), hypergraph (Feng et al. 2019), and sheaf (Hansen and Gebhart 2020) neural networks. Most higher-dimensional extensions of GNNs aim to operate on the full complex, and redefine the convolution operation in terms of the corresponding Laplacian operator.…”
Section: Related Work (mentioning)
confidence: 99%

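The "convolution via the corresponding Laplacian operator" idea in this excerpt can be sketched in a few lines. The following is a hedged illustration, not the layer from any of the cited papers: it builds the Hodge 1-Laplacian of a toy complex from its boundary matrices and applies a polynomial filter to an edge signal, mirroring how graph convolutions apply polynomials of the graph Laplacian to node signals.

```python
# Hedged sketch of simplicial convolution via the Hodge 1-Laplacian.
# Toy complex: triangle (0,1,2) plus a tail edge (2,3).
import numpy as np

# B1: vertex-edge boundary matrix (rows = vertices 0..3,
# columns = oriented edges (0,1), (0,2), (1,2), (2,3)).
B1 = np.array([[-1, -1,  0,  0],
               [ 1,  0, -1,  0],
               [ 0,  1,  1, -1],
               [ 0,  0,  0,  1]], dtype=float)
# B2: edge-triangle boundary matrix for the single triangle (0,1,2).
B2 = np.array([[1], [-1], [1], [0]], dtype=float)

L1 = B1.T @ B1 + B2 @ B2.T          # Hodge 1-Laplacian on edge signals

def simplicial_conv(x, theta):
    """Polynomial filter in L1: y = tanh(sum_k theta_k * L1^k @ x)."""
    y, Lx = np.zeros_like(x), x.copy()
    for t in theta:
        y += t * Lx
        Lx = L1 @ Lx
    return np.tanh(y)

x = np.array([1.0, 0.0, 0.0, 0.0])  # a signal on the four edges
print(simplicial_conv(x, theta=[0.5, 0.1, 0.01]))
```

Replacing L1 with the Laplacian of the appropriate degree gives the analogous operation on cells or hypergraphs, which is why these extensions are naturally phrased in terms of "the corresponding Laplacian operator."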
“…In [SCR+20], the authors propose a convolutional neural network (CNN) architecture to estimate persistence images (see Section 2.2) computed on 2D images. Similarly, in [MOW20], the authors provide an experimental overview of specific PD features (such as, e.g., their tropical coordinates [Kal19]) that can be learned using a CNN, when PDs are computed on top of 2D images. On the other hand, RipsNet is designed to handle the (arguably harder) situation where input data are point clouds of arbitrary cardinality instead of 2D images (i.e., vectors).…”
Section: Related Work (mentioning)
confidence: 99%
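For readers unfamiliar with the persistence images mentioned in this excerpt, the sketch below shows the basic construction with no particular library assumed; the resolution, bandwidth, and linear persistence weighting are illustrative defaults, not those of [SCR+20] or of the paper. Each diagram point (birth, death) becomes a Gaussian bump at (birth, persistence), and sampling the result on a grid yields a fixed-size array that a CNN can consume.

```python
# Hedged sketch of a persistence image: map diagram points (b, d) to
# (b, d - b), weight by persistence, smooth with a Gaussian, sample on a grid.
import numpy as np

def persistence_image(diagram, resolution=20, sigma=0.1, extent=(0.0, 2.0)):
    """diagram: array of (birth, death) pairs with finite death."""
    birth = diagram[:, 0]
    pers = diagram[:, 1] - diagram[:, 0]      # persistence = lifetime
    grid = np.linspace(extent[0], extent[1], resolution)
    gx, gy = np.meshgrid(grid, grid)          # (birth, persistence) plane
    img = np.zeros_like(gx)
    for b, p in zip(birth, pers):
        # linear weighting in persistence, Gaussian bump centered at (b, p)
        img += p * np.exp(-((gx - b) ** 2 + (gy - p) ** 2) / (2 * sigma ** 2))
    return img

dgm = np.array([[0.1, 0.9], [0.2, 0.3]])      # toy diagram: one long bar
features = persistence_image(dgm)
print(features.shape)                         # (20, 20) fixed-size input
```

Because the output shape is independent of the number of diagram points, such vectorizations turn diagrams into ordinary network inputs, which is exactly the setting these citing papers study.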