2022
DOI: 10.1016/j.media.2021.102264
Hierarchical graph representations in digital pathology

Cited by 86 publications (50 citation statements)
References 60 publications (79 reference statements)
“…CNN architectures tend to favor texture-based features [12]; at the other end of the spectrum are graph neural network-based methods, which model global dependencies between local representations and thus rely more on shape cues. We further compare with graph-based methods, with particular emphasis on HACT-Net [26], which holds the current state of the art for TRoIs classification on BRACS. ScoreNet reaches a new state-of-the-art weighted F1-score of 64.4% on the BRACS TRoIs classification task, outperforming HACT-Net. [Table excerpt: per-class F1-scores for the CNN (10× + 20×), CNN (10× + 20× + 40×) [22,29] and CG-GNN [24] baselines]…”
Section: TRoIs Classification Results and Discussion (mentioning)
Confidence: 99%
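
The weighted F1-score quoted above is a standard multi-class metric. As a quick illustration only (not code from either paper), a minimal Python sketch of how it is typically computed with scikit-learn, using made-up labels for a 7-class TRoI task:

    # Minimal sketch with scikit-learn; labels below are made up for illustration.
    from sklearn.metrics import f1_score

    # Hypothetical ground-truth and predicted labels for a 7-class TRoI task.
    y_true = [0, 1, 2, 3, 4, 5, 6, 1, 2, 3]
    y_pred = [0, 1, 2, 3, 4, 5, 6, 2, 2, 3]

    # average="weighted" averages per-class F1-scores, weighting each class
    # by its support (number of true samples) -- the metric reported above.
    print(f1_score(y_true, y_pred, average="weighted"))
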
“…To evaluate all samples, we perform stratified 5-fold cross-validation. For HACT-Net, we use the available pre-trained weights and follow the code implementation of [26]. As HACT-Net sometimes fails to generate embeddings, to ensure a fair comparison we only evaluate those samples for which HACT-Net could successfully produce embeddings (around 95% of the BACH and 80% of the CAMELYON16 dataset).…”
Section: TRoIs Classification Results and Discussion (mentioning)
Confidence: 99%
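
The evaluation protocol described above relies on stratified 5-fold cross-validation. A minimal sketch of that setup (an assumed illustration, not the citing paper's code; the features and labels are random placeholders):

    # Minimal sketch of stratified 5-fold cross-validation (assumed setup,
    # not the citing paper's code); X and y are random placeholders.
    import numpy as np
    from sklearn.model_selection import StratifiedKFold

    X = np.random.rand(100, 16)             # placeholder per-sample features
    y = np.random.randint(0, 7, size=100)   # placeholder 7-class labels

    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for fold, (train_idx, test_idx) in enumerate(skf.split(X, y)):
        # Each fold keeps roughly the same class proportions in train and test.
        print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test samples")
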