2019
DOI: 10.1016/j.radonc.2019.09.028

Synthetic MRI-aided multi-organ segmentation on male pelvic CT using cycle consistent deep attention network

Cited by 118 publications (130 citation statements). References 26 publications.
“…Multiple variants of GAN include conditional GAN (cGAN) [109], InfoGAN [16], CycleGAN [184], StarGAN [19], and so on. In medical imaging, GANs have been used to perform image synthesis for inter- or intra-modality, such as MR to synthetic CT [84,89], CT to synthetic MR [27,83], CBCT to synthetic CT [58], non-attenuation-corrected (non-AC) PET to CT [26], low-dose PET to synthetic full-dose PET [88], non-AC PET to AC PET [28], low-dose CT to full-dose CT [159], and so on. In medical image registration, GAN is usually used either to provide additional regularization or to translate multi-modal registration into unimodal registration.…”
Section: Generative Adversarial Network (mentioning)
confidence: 99%
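The unpaired modality translation this statement refers to (e.g., MR to synthetic CT) is typically trained with a cycle-consistency loss alongside the adversarial losses. Below is a minimal sketch of that objective, assuming PyTorch; the generator and discriminator stand-ins (`G_mr2ct`, `G_ct2mr`, `D_ct`) are illustrative placeholders, not the models used in the cited papers.

```python
import torch
import torch.nn as nn

# Placeholder networks; real CycleGAN models use ResNet-based
# generators and PatchGAN discriminators.
G_mr2ct = nn.Conv2d(1, 1, 3, padding=1)   # MR -> synthetic CT (illustrative)
G_ct2mr = nn.Conv2d(1, 1, 3, padding=1)   # CT -> synthetic MR (illustrative)
D_ct    = nn.Conv2d(1, 1, 3, padding=1)   # discriminator on the CT domain (illustrative)

l1  = nn.L1Loss()
mse = nn.MSELoss()  # least-squares GAN objective

def generator_loss(real_mr, lambda_cyc=10.0):
    """Adversarial + cycle-consistency terms for the MR -> CT direction."""
    fake_ct = G_mr2ct(real_mr)   # synthetic CT from MR
    rec_mr  = G_ct2mr(fake_ct)   # cycle back to the MR domain

    # Adversarial term: the synthetic CT should be scored as "real" (label 1).
    pred = D_ct(fake_ct)
    adv  = mse(pred, torch.ones_like(pred))

    # Cycle-consistency term: MR -> CT -> MR should reproduce the input.
    cyc = l1(rec_mr, real_mr)

    return adv + lambda_cyc * cyc

# Usage with random tensors standing in for unpaired 2D slices.
mr = torch.randn(2, 1, 64, 64)
loss = generator_loss(mr)
loss.backward()
```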
“…The field of medical image registration has been evolving rapidly, with hundreds of papers published each year. Recently, DL-based methods have changed the landscape of medical image processing research and achieved state-of-the-art performance in many applications [25,27,45,58,84,85,86,88,89,97,98,156,157,158,160,161]. However, deep learning in medical image registration had not been extensively studied until the past three to four years.…”
Section: Introduction (mentioning)
confidence: 99%
“…Given the recent advances in machine learning techniques, automatic segmentation methods have been developed based on machine learning algorithms, especially deep convolutional neural networks [24-34]. Deep learning-based catheter segmentation algorithms have recently been reported in CT [35], MRI [17,18], and US [36-38] for biopsy and brachytherapy. While these methods are inspiring, reliable catheter detection approaches still have to be developed specifically for digitizing the catheters in MRI-guided HDR prostate brachytherapy.…”
Section: Introduction (mentioning)
confidence: 99%
“…As shown in Fig. 2, the long skip connection was implemented by concatenating the feature maps extracted from a layer of the encoding path with the same-sized feature maps extracted from the corresponding layer of the decoding path. The attention gate (AG) can capture the most relevant semantic (segment) information without enlarging the receptive field [3]. Since the target RSP image in this work is close to a semantic image, we propose to integrate the AG into the long skip connection to highlight the semantic features in the feature maps extracted from the preceding layer of the encoding path.…”
Section: Introduction (mentioning)
confidence: 99%
“…Since the target RSP image in this work is close to a semantic image, we propose to integrate the AG into the long skip connection to highlight the semantic features in the feature maps extracted from the preceding layer of the encoding path. The details of the attention gate implementation can be found in our previous work [3].…”
Section: Introduction (mentioning)
confidence: 99%
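The two statements above describe an attention gate applied inside the long skip connection, i.e., the encoder feature map is re-weighted by a gating signal from the decoder before concatenation. Below is a minimal sketch of an additive attention gate in that role, assuming PyTorch; the channel sizes and module names are illustrative and not taken from the cited work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Additive attention gate: re-weights encoder features x with a
    gating signal g from the decoder before the skip concatenation."""
    def __init__(self, x_ch, g_ch, inter_ch):
        super().__init__()
        self.theta_x = nn.Conv2d(x_ch, inter_ch, kernel_size=1)
        self.phi_g   = nn.Conv2d(g_ch, inter_ch, kernel_size=1)
        self.psi     = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, x, g):
        # Attention coefficients in [0, 1], one per spatial location.
        a = torch.sigmoid(self.psi(F.relu(self.theta_x(x) + self.phi_g(g))))
        return x * a  # highlight the semantically relevant encoder features

# Long skip connection with the gate applied to the encoder feature map.
ag = AttentionGate(x_ch=64, g_ch=64, inter_ch=32)
enc_feat = torch.randn(1, 64, 32, 32)   # encoder feature map
dec_feat = torch.randn(1, 64, 32, 32)   # same-sized decoder map (gating signal)
skip = torch.cat([ag(enc_feat, dec_feat), dec_feat], dim=1)  # (1, 128, 32, 32)
```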