2020
DOI: 10.1101/2020.11.30.404665
Preprint

Topological Learning for Brain Networks

Abstract: This paper proposes a novel topological learning framework that can integrate networks of different sizes and topology through persistent homology. This is possible through the introduction of a new topological loss function that enables such a challenging task. The use of the proposed loss function bypasses the intrinsic computational bottleneck associated with matching networks. We validate the method in extensive statistical simulations with ground truth to assess the effectiveness of the topological loss i…


Cited by 5 publications (11 citation statements)
References: 110 publications
“…These types of persistence diagrams are difficult to analyse since the locations of the scatter points and the number of scatter points do not correspond across different persistence diagrams. For a 1-skeleton, there exists a more efficient 1D filtration called the graph filtration, which filters edge weights varying from −∞ to ∞ (Chung et al., 2019; Songdechakraiwut & Chung, 2020b)…”
Section: Trees in Persistent Homology (mentioning)
confidence: 99%
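The graph filtration described in the statement above can be reproduced with nothing more than a sort over edge weights and a union-find. The sketch below is not the authors' implementation: it assumes the usual convention that the binary network at threshold ε keeps only the edges whose weight exceeds ε, and it reports the number of connected components (Betti-0) at each sorted edge weight; the toy graph and function names are illustrative.

```python
# Minimal sketch of the graph filtration on a weighted 1-skeleton, assuming
# the convention that the binary network at threshold eps keeps the edges
# whose weight exceeds eps. Tracks Betti-0 (number of connected components)
# as eps sweeps through the sorted edge weights.
import numpy as np


def find(parent, i):
    """Union-find root lookup with path compression."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i


def betti0_curve(n_nodes, edges):
    """edges: list of (i, j, weight). Returns (thresholds, Betti-0 values)."""
    thresholds = np.unique([w for _, _, w in edges])  # sorted unique weights
    curve = []
    for eps in thresholds:
        parent = list(range(n_nodes))
        for i, j, w in edges:
            if w > eps:  # edge survives the filtration at threshold eps
                ri, rj = find(parent, i), find(parent, j)
                if ri != rj:
                    parent[ri] = rj
        curve.append(len({find(parent, k) for k in range(n_nodes)}))
    return thresholds, np.array(curve)


# Toy example: a 4-node weighted graph; Betti-0 increases monotonically.
edges = [(0, 1, 0.9), (1, 2, 0.5), (2, 3, 0.7), (0, 3, 0.2)]
print(betti0_curve(4, edges))
```

Rebuilding the union-find at every threshold keeps the sketch short; an incremental variant that inserts edges in decreasing weight order would do the same work in a single pass.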
“…For convenience, we set the death value of 0-cycles to some fixed number c > w(q−1). Then the persistence diagram of the graph filtration is simply (w(1), c), (w(2), c), …, (w(q−1), c), forming 1D scatter points along the horizontal line y = c and making various analyses and operations, including matching, significantly simpler (Songdechakraiwut & Chung, 2020b). Figure 1 illustrates the graph filtration and the corresponding 1D scatter points in persistence diagrams on the binary tree used in Garside et al. (2021)…”
Section: Trees in Persistent Homology (mentioning)
confidence: 99%
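Because every 0-cycle is assigned the same death value c, the persistence diagram of the graph filtration on a tree reduces to the sorted edge weights paired with c. A minimal sketch, assuming the tree is given by its q−1 edge weights and c is taken as the largest weight plus an arbitrary margin (the margin and the function name are illustrative):

```python
# Minimal sketch: persistence diagram of the graph filtration on a tree.
# Births are the sorted edge weights w(1) <= ... <= w(q-1); every death is
# a fixed constant c > w(q-1), so the diagram is a row of points on y = c.
import numpy as np


def tree_persistence_points(edge_weights, margin=1.0):
    """Return the (birth, death) pairs (w(i), c) for a tree's edge weights."""
    births = np.sort(np.asarray(edge_weights, dtype=float))
    c = births[-1] + margin          # any fixed c > w(q-1) works
    deaths = np.full_like(births, c)
    return np.column_stack([births, deaths])


# Toy example: a tree with 5 nodes, i.e. 4 edge weights.
print(tree_persistence_points([0.8, 0.3, 0.5, 0.1]))
```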
“…Since real brain networks are often affected by heterogeneity and intrinsic randomness [30, 31], it is challenging to build a coherent statistical framework that transforms these topological features into quantitative measures that can be compared across different brain networks by averaging or matching [32]. Brain networks are inherently noisy, which makes it even harder to establish similarity across networks…”
Section: Introduction (mentioning)
confidence: 99%
“…We consider a random complete graph, where all nodes are connected and the edge weights are randomly drawn from a continuous distribution. The complete graph model makes building the graph filtration straightforward [22, 32]. We then compute the expected 0D and 1D barcodes through order statistics [38–43]…”
Section: Introduction (mentioning)
confidence: 99%
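The sketch below illustrates that construction but is not the cited derivation: it assumes the birth-death decomposition of Songdechakraiwut & Chung, in which the 0D barcode of the graph filtration on a complete graph is carried by the maximum spanning tree edge weights and the 1D barcode by the remaining edge weights, and it estimates the expected barcodes by Monte Carlo over iid Uniform(0, 1) weights rather than through the closed-form order statistics. The node count, sample size, and function names are illustrative.

```python
# Sketch (not the authors' code): estimate expected 0D and 1D barcodes of the
# graph filtration on a random complete graph, assuming the birth-death
# decomposition in which maximum spanning tree edges carry the 0D barcode
# and the remaining edges carry the 1D barcode.
import itertools
import numpy as np


def max_spanning_split(n_nodes, weights):
    """Kruskal on the complete graph: (max spanning tree weights, remaining weights)."""
    edges = sorted(zip(weights, itertools.combinations(range(n_nodes), 2)),
                   reverse=True)              # process edges by decreasing weight
    parent = list(range(n_nodes))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    mst, rest = [], []
    for w, (i, j) in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                          # edge joins two components -> tree edge
            parent[ri] = rj
            mst.append(w)
        else:                                 # edge closes a cycle -> 1D barcode
            rest.append(w)
    return np.sort(mst), np.sort(rest)


rng = np.random.default_rng(0)
p, n_sim = 6, 2000                            # nodes, Monte Carlo samples
q = p * (p - 1) // 2                          # edges in the complete graph
bar0 = np.zeros(p - 1)                        # running sum, 0D barcode (p-1 values)
bar1 = np.zeros(q - p + 1)                    # running sum, 1D barcode (q-p+1 values)
for _ in range(n_sim):
    w = rng.uniform(size=q)                   # iid Uniform(0, 1) edge weights
    m, r = max_spanning_split(p, w)
    bar0 += m
    bar1 += r
print("estimated E[0D barcode]:", bar0 / n_sim)
print("estimated E[1D barcode]:", bar1 / n_sim)
```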