2021
DOI: 10.1101/2021.09.16.21263707
Preprint
Deep Learning Segmentation of Glomeruli on Kidney Donor Frozen Sections

Abstract: Purpose: Recent advances in computational image analysis offer the opportunity to develop automatic quantification of histologic parameters as aid tools for practicing pathologists. This work aims to develop deep learning (DL) models to quantify non-sclerotic and sclerotic glomeruli on frozen sections from donor kidney biopsies. Approach: A total of 258 whole slide images (WSI) from cadaveric donor kidney biopsies performed at our institution (n=123) and at external institutions (n=135) were used in this study…

Cited by 4 publications (6 citation statements)
References 59 publications
“…As these works achieved superior results in the segmentation of renal structures, we also adopted U-Net as our baseline architecture. In fact, 10 out of 18 works in Table 1 use only U-Net (Jayapandian et al, 2021;Davis et al, 2021;Hermsen et al, 2019;Jha et al, 2021;Bueno et al, 2020), variations of U-Net (Gadermayr et al, 2019;Bouteldja et al, 2021) or combine it with other methods (Mei et al, 2020;Zeng et al, 2020;de Bel et al, 2018). The remaining works explore other DL-based segmentation approaches such as one DL network: Mask-RCNN (Jiang et al, 2021) and DeepLabV2 (Lutnick et al, 2019;Ginley et al, 2020); two separate DL networks: MaskRCNN and FastRCNN (Altini et al, 2020a), and SegNet and DeepLabV3+ (Altini et al, 2020b); three separate DL networks: Mask-RCNN, U-Net, and DeepLabV3 (Jha et al, 2021); a combination of two DL networks: SegNet and AlexNet (Bueno et al, 2020); and finally pipelines that combine DL approaches with conventional image processing methods (Marsh et al, 2018;Kannan et al, 2019;Ginley et al, 2019).…”
Section: State-of-the-art and Contributions
confidence: 99%
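
For context on the U-Net baseline referenced in the statement above, the following is a minimal sketch of a U-Net-style encoder-decoder for glomerulus segmentation. The framework (PyTorch), channel widths, and three-class output (background, non-sclerotic, sclerotic) are illustrative assumptions, not the configuration used by the cited works.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # two 3x3 convolutions with ReLU, the basic building block of U-Net
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    # Minimal one-level U-Net: encoder, bottleneck, decoder, one skip connection.
    # Channel widths and class count are assumptions for illustration only.
    def __init__(self, in_ch=3, n_classes=3):
        super().__init__()
        self.enc = conv_block(in_ch, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec = conv_block(64, 32)          # 64 = upsampled 32 + skip 32
        self.head = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)                          # full-resolution features
        b = self.bottleneck(self.pool(e))        # downsampled context
        d = self.up(b)                           # upsample back to input size
        d = self.dec(torch.cat([d, e], dim=1))   # skip connection from encoder
        return self.head(d)                      # per-pixel class logits

# e.g. TinyUNet()(torch.randn(1, 3, 256, 256)) -> logits of shape (1, 3, 256, 256)

A full U-Net stacks several such encoder/decoder levels; the variations cited above differ mainly in depth, normalization, and whether the network is combined with other models or post-processing stages.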
“…To date, most deep-learning-based computational pathology platforms are designed to predict or assess only a bespoke set of narrowly defined clinical outcomes or visual changes (collectively known as slide-level labels). Most existing work (1)(2)(3)(4)(5) is limited to fully-supervised learning, which requires labelling large number of tissue compartments or rectangular tiles as either normal or diseased. To adapt these platforms for a different diagnosis would require additional time from pathologists to go through the entire dataset, adding further to the project's investment.…”
Section: Introduction
confidence: 99%
“…To date, most deep-learning based computational pathology platforms are designed to predict or assess only a bespoke set of narrowly defined clinical outcomes or visual changes (collectively known as slide-level labels). Most existing work (Yi et al, 2022; Hermsen et al, 2019; Marsh et al, 2018; Davis et al, 2021; Kers et al, 2022) is limited to fully-supervised learning, which requires labelling large number of tissue compartments or rectangular tiles as either normal or diseased. To adapt these platforms for a different diagnosis would require additional time from pathologists to go through the entire dataset, adding further to the project’s investment.…”
Section: Introduction
confidence: 99%
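
To make the fully-supervised setting described above concrete, here is a minimal sketch of a tile-level dataset in which each rectangular tile cropped from a slide carries a single normal/diseased label. The tile size, label encoding, and array-based slide representation are assumptions for illustration, not the cited authors' pipeline.

import torch
from torch.utils.data import Dataset

class LabelledTileDataset(Dataset):
    # Illustrative fully-supervised tile dataset: every fixed-size tile from a
    # slide image is annotated as 0 (normal) or 1 (diseased).
    def __init__(self, image, tile_labels, tile_size=256):
        # image: HxWx3 uint8 numpy array (e.g. a downsampled region of a WSI)
        # tile_labels: dict mapping (row, col) tile index -> 0 or 1
        self.image = image
        self.tile_labels = tile_labels
        self.tile_size = tile_size
        self.index = sorted(tile_labels.keys())

    def __len__(self):
        return len(self.index)

    def __getitem__(self, i):
        r, c = self.index[i]
        t = self.tile_size
        tile = self.image[r * t:(r + 1) * t, c * t:(c + 1) * t]
        x = torch.from_numpy(tile.copy()).permute(2, 0, 1).float() / 255.0  # CHW in [0, 1]
        y = torch.tensor(self.tile_labels[(r, c)], dtype=torch.long)
        return x, y

The point the citing authors make is that every such label has to come from a pathologist, so retargeting the model to a different diagnosis means relabelling the entire tile set.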
“…Treatment of artifacts is often either not mentioned or is excluded manually in most experiments. To our knowledge, the most common approach to tackle artifacts include explicitly labelling of artifacts and aggressive data augmentation (Marsh et al, 2018; Davis et al, 2021). However, these approaches would only work on objects that resemble the training data.…”
Section: Introduction
confidence: 99%
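
As an illustration of the "aggressive data augmentation" strategy mentioned above, the sketch below uses torchvision transforms; the specific transforms and parameter values are assumptions, not the settings reported by Marsh et al. (2018) or Davis et al. (2021).

from torchvision import transforms

# Illustrative heavy augmentation for training tiles: geometric flips/rotations
# plus strong colour jitter, intended to reduce sensitivity to staining and
# freezing artifacts. Parameter values are assumed for illustration.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=90),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
    transforms.RandomResizedCrop(256, scale=(0.6, 1.0)),
    transforms.ToTensor(),
])

# usage: x = augment(pil_tile)  # applied to each PIL image tile during training

As the citing authors note, this only helps with artifact patterns that resemble the augmented training data; unseen artifact types can still cause failures.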