2022
DOI: 10.1016/j.media.2022.102485

DigestPath: A benchmark dataset with challenge review for the pathological detection and segmentation of digestive-system

Cited by 33 publications
(14 citation statements)
References 41 publications
“…On the latter, using only 62% of the tile-level labeled slides would have been enough to reach the 1st position on the challenge leaderboard based on the AUC results (5th position on the final leaderboard). On DigestPath2019 on the other hand, using 6% of the tile-level labeled slides would have reached the 5th rank in terms of AUC on the second task of the challenge (Da et al., 2022). For both of the challenges, the best results were obtained with the help of deep neural networks trained from scratch on the challenge data, with additional post-processing steps, and sometimes using an ensemble of various heavy architectures (ensemble of networks) to reach the highest possible score.…”
Section: Discussion
confidence: 99%
“…Here, the connective tissue is a broader category that includes endothelial cells, fibroblasts and muscle cells. When creating the dataset, image regions are extracted from the following 6 sources: CRAG (Graham et al., 2019a), GlaS (Sirinukunwattana et al., 2017), CoNSeP (Graham et al., 2019c), PanNuke (Gamper et al., 2020a), DigestPath (Da et al., 2022) and TCGA. Therefore, Lizard is a diverse dataset and models trained on it may likely generalise to unseen examples.…”
Section: The Datasets
confidence: 99%
“…training phase, the PLIP model generates two embedding vectors from both the text and image encoders (Figure 1e). These vectors were then optimized to be similar for each of the paired image and text vectors and dissimilar for non-paired images and texts via contrastive learning (Figure 1f, Online Methods).…”
Section: Training a Visual-Language AI Using OpenPath
confidence: 99%
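The contrastive objective the excerpt above describes — paired image/text embeddings pulled together, non-paired ones pushed apart — is the symmetric InfoNCE loss used by CLIP-style models. A minimal NumPy sketch (the function name and temperature value are illustrative assumptions, not taken from the PLIP paper):

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    Row i of img_emb and row i of txt_emb are a matching pair; the loss
    rewards high cosine similarity on the diagonal of the similarity
    matrix and low similarity everywhere else.
    """
    # L2-normalise so the dot product equals cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature      # (batch, batch) similarity matrix
    labels = np.arange(len(logits))         # matching pairs sit on the diagonal

    def xent(l):
        # numerically stable cross-entropy toward the diagonal targets
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average the image->text and text->image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

With perfectly aligned pairs the loss approaches zero; mismatching the pairs drives it up, which is the behaviour the excerpt's "similar for paired, dissimilar for non-paired" description refers to.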
“…In this study, we conducted a systematic evaluation of PLIP's zero-shot capability across four external validation datasets: (i) the Kather colon dataset [24] with 9 different tissue types; (ii) the PanNuke dataset [28] with two tissue types (benign and malignant); (iii) DigestPath dataset [29] with two tissue types (benign and malignant); and (iv) WSSS4LUAD dataset [30] with two tissue types (tumor and normal) (Figure 2b, Extended Figure 2). To evaluate the PLIP model on those datasets, labels were converted to sentences.…”
Section: PLIP Can Perform Zero-Shot Classification on New Data
confidence: 99%
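The label-to-sentence conversion mentioned above enables zero-shot classification: each class name is turned into a prompt sentence, encoded with the text encoder, and the class whose prompt embedding is closest to the image embedding wins. A minimal sketch under stated assumptions — the prompt template, `encode_text` interface, and function name are hypothetical, not PLIP's actual API:

```python
import numpy as np

def zero_shot_classify(image_embedding, class_names, encode_text):
    """Return the class whose prompt embedding is most cosine-similar
    to the image embedding.

    `encode_text` is a stand-in for any text encoder that maps one
    sentence to one vector; the prompt template is an assumption.
    """
    prompts = [f"an H&E image of {name} tissue" for name in class_names]
    txt = np.stack([np.asarray(encode_text(p), dtype=float) for p in prompts])
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    img = np.asarray(image_embedding, dtype=float)
    img = img / np.linalg.norm(img)
    scores = txt @ img                      # one cosine similarity per class
    return class_names[int(np.argmax(scores))]
```

No classifier is trained on the target dataset; only the shared embedding space and the prompt sentences are used, which is what makes the evaluation "zero-shot".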