2018
DOI: 10.1007/978-3-030-00919-9_44
Computation of Total Kidney Volume from CT Images in Autosomal Dominant Polycystic Kidney Disease Using Multi-task 3D Convolutional Neural Networks

Abstract: Autosomal dominant polycystic kidney disease (ADPKD), characterized by progressive growth of renal cysts, is the most prevalent and potentially lethal monogenic renal disease, affecting one in every 500-1000 people. Total Kidney Volume (TKV) and its growth, computed from Computed Tomography images, have been accepted as an essential prognostic marker for renal function loss. Due to the large variation in shape and size of kidneys in ADPKD, existing methods to compute TKV (i.e. to segment ADPKD) including those based on 2…

Cited by 21 publications (13 citation statements)
References 7 publications
“…Keshwani et al. [28] similarly used CT scans to predict kidney segmentations; a multi-task 3D convolutional neural network was implemented, achieving a mean Dice score of 95%. Mu et al. [29], on the other hand, used MR images. The cross-validation folds were randomly separated into distinct subsets.…”
Section: Discussion
confidence: 99%
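The mean Dice score cited in this excerpt is the standard overlap metric for segmentation quality: twice the intersection of prediction and ground truth, divided by the sum of their sizes. A minimal NumPy sketch (the toy masks and the epsilon smoothing term are illustrative assumptions, not details from the paper):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two toy 3D masks: `a` has 4 foreground voxels, `b` has 3, overlapping on 3.
a = np.zeros((2, 2, 2), dtype=np.uint8)
b = np.zeros((2, 2, 2), dtype=np.uint8)
a[0, :, :] = 1
b[0, 0, :] = 1
b[0, 1, 0] = 1
print(round(dice_score(a, b), 3))  # 2*3 / (4+3) ≈ 0.857
```

The epsilon keeps the ratio defined when both masks are empty; a score of 95% corresponds to near-complete voxel-wise agreement between predicted and reference kidneys.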
“…The proposed architecture is similar to 3D-UNet [3], with the difference that our approach has three decoders for three different tasks instead of one. Such multi-task architectures with a shared encoder and task-specific decoders have also been proposed earlier [12]. For the detailed architecture, please refer to the supplementary material.…”
Section: TopNet
confidence: 99%
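The shared-encoder, multi-decoder layout described in this excerpt can be sketched as follows. Dense layers stand in for the paper's 3D convolutional blocks, and all layer sizes and the number of tasks are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class MultiTaskNet:
    """Toy shared-encoder / three-decoder network: one encoder produces a
    common representation, and each task gets its own decoder head."""
    def __init__(self, in_dim=8, hid=16, out_dim=4, n_tasks=3):
        self.W_enc = rng.normal(scale=0.1, size=(in_dim, hid))   # shared encoder
        self.W_dec = [rng.normal(scale=0.1, size=(hid, out_dim)) # one decoder
                      for _ in range(n_tasks)]                   # per task

    def forward(self, x):
        z = relu(x @ self.W_enc)            # shared representation
        return [z @ W for W in self.W_dec]  # task-specific outputs

net = MultiTaskNet()
outs = net.forward(rng.normal(size=(2, 8)))
print([o.shape for o in outs])
```

The design choice is that gradients from all three task losses flow into the same encoder weights, so the shared representation is shaped by every task at once, while each decoder specializes.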
“…In addition, we apply Gaussian noise with µ = 0.0 and σ ∈ [0.0, 50.0/1536.0]. During training, a bootstrapped cross-entropy loss [6] was optimized with the Adam optimizer [7] at a learning rate of 0.001, since the multi-class Dice loss can be unstable. The idea behind bootstrapping [6] is to backpropagate the cross-entropy loss not from all voxels but from the subset whose posterior probabilities are less than a threshold.…”
Section: Data Augmentation and Training
confidence: 99%
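A minimal sketch of the bootstrapping idea described above: average the cross-entropy only over "hard" voxels, i.e. those whose posterior probability for the true class falls below a threshold, so confident voxels contribute no gradient. The threshold value and the toy probabilities are assumptions, not the paper's settings:

```python
import numpy as np

def bootstrapped_ce(probs, labels, thresh=0.9):
    """Cross-entropy averaged only over voxels whose posterior probability
    for the true class is below `thresh` (the bootstrap subset)."""
    p_true = probs[np.arange(len(labels)), labels]  # posterior of true class
    hard = p_true < thresh                          # bootstrap mask
    if not hard.any():
        return 0.0                                  # every voxel is confident
    return float(-np.log(p_true[hard] + 1e-12).mean())

# Four voxels, two classes; voxels 0 and 2 are already confident (p >= 0.9)
# and are excluded, so only voxels 1 and 3 contribute to the loss.
probs = np.array([[0.95, 0.05],
                  [0.30, 0.70],
                  [0.92, 0.08],
                  [0.60, 0.40]])
labels = np.array([0, 1, 0, 0])
loss = bootstrapped_ce(probs, labels)
```

Here the loss reduces to the mean of -log(0.7) and -log(0.6); as training progresses and more voxels cross the confidence threshold, the loss concentrates on the remaining hard voxels, typically those near cyst and kidney boundaries.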