2020
DOI: 10.1109/tci.2020.3012928
CaGAN: A Cycle-Consistent Generative Adversarial Network With Attention for Low-Dose CT Imaging

Cited by 61 publications (41 citation statements); references 61 publications.
“…Since CT images are single-channel grayscale images, the training database may not contain many valid images. One of the most common experimental strategies in LDCT noise-removal tasks is to generate overlapping patches (32), which better represent the local characteristics of the image and increase the number of training samples (56). Considering the limited number of CT images that could meet the requirements of our experiment, we additionally adopted the overlapping-patch strategy, which not only preserves the spatial interconnection between patches but also significantly accelerates the convergence of the learning model (52). Because the […]Net algorithms are designed for natural images, the training process is more inclined to learn the characteristic information of natural images, and the results on grayscale images are not good.…”
Section: Discussion
confidence: 99%
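The overlapping-patch strategy quoted above is straightforward to illustrate. Below is a minimal sketch in Python/NumPy; the patch size (64) and stride (32, i.e. 50% overlap) are illustrative assumptions, not values reported by the cited work.

```python
import numpy as np

def extract_overlapping_patches(image, patch_size=64, stride=32):
    """Extract overlapping square patches from a 2-D grayscale CT slice.

    A stride smaller than patch_size makes adjacent patches overlap,
    which multiplies the number of training samples while keeping the
    local spatial context shared between neighbouring patches.
    """
    h, w = image.shape
    patches = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patches.append(image[top:top + patch_size, left:left + patch_size])
    return np.stack(patches)

# Example: a 512x512 slice with 64x64 patches and 50% overlap yields 15*15 = 225 patches.
slice_ = np.random.rand(512, 512).astype(np.float32)
print(extract_overlapping_patches(slice_).shape)  # (225, 64, 64)
```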
“…Dilated convolution in the image domain has a sizeable receptive field that better captures correlations between anatomical regions and synthesizes lost anatomical information in the event of severe signal distortion. Inspired by successful computer-vision applications, Huang et al. (32) introduced an attention mechanism into the generator of the cycle-consistent generative adversarial network (CycleGAN) for LDCT denoising and achieved satisfactory results. In addition, Ataei et al. (33) cascaded two identical neural networks to recreate fine structural details in low-contrast areas by minimizing a perceptual loss.…”
Section: Introduction
confidence: 99%
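As a rough illustration of the attention-augmented generator idea mentioned in this excerpt (not the exact CaGAN architecture), a squeeze-and-excitation-style channel-attention block wrapped around a CycleGAN-type residual block in PyTorch might look like the following; the channel count and reduction ratio are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (illustrative sketch)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel weights in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight feature maps channel-wise

class AttentionResBlock(nn.Module):
    """Residual block of a CycleGAN-style generator with channel attention."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )
        self.attn = ChannelAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.attn(self.body(x))  # attention-gated residual connection
```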
“…Meanwhile, the attention mechanism in GANs helps them generate images that meet clinical standards. Previous studies [29]–[33] have shown that channel-wise attention in a GAN can reconstruct more detail in MR images than methods without channel-wise attention. However, spatial attention has been neglected by these approaches.…”
Section: E. Spatial and Channel-Wise Attention
confidence: 99%
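To make the channel-wise versus spatial distinction concrete, a minimal CBAM-style spatial-attention module that could complement the channel attention sketched earlier is shown below; the kernel size is an assumption, and this is not the cited papers' exact design.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: learns *where* to attend, rather than which channel."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Aggregate channel information into two maps: per-pixel average and maximum.
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn  # reweight each spatial location
```

Applied after a channel-attention block, this reweights locations as well as feature maps, which is the combination the excerpt argues is missing.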
“…In addition, we confirmed that batch normalization [11] can further improve the performance of the given model. In CaGAN [7], a generative adversarial network (GAN)…”
unclassified
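For context, the batch-normalization change mentioned in this excerpt typically amounts to inserting a normalization layer between convolution and activation; a minimal, hypothetical PyTorch comparison (layer sizes are assumptions):

```python
import torch.nn as nn

# Two otherwise identical convolutional blocks; the second inserts batch
# normalization between the convolution and the activation, the placement
# commonly used when BN is reported to stabilize and speed up training.
plain_block = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
)
bn_block = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=3, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)
```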
“…In addition, as shown in Table 1, for the synthetic-data experiments (cases 1 and 2), the four rounds of cross-validation were likewise performed on the same samples. As supervised-learning comparison methods, we selected DnCNN [6], RED-CNN [27], and CaGAN [7]. As unsupervised methods, we selected Quan et al. [5] and ISCL [1], on which this study is based, as well as the recent unpaired-image denoising methods UIDNet [28] and ADN [17].…”
unclassified
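The evaluation protocol described in this excerpt, where the same folds are reused for every compared method, can be sketched with a fixed-seed K-fold split; the number of volumes and the seed below are hypothetical.

```python
import numpy as np
from sklearn.model_selection import KFold

# Hypothetical list of CT volumes used for the synthetic-data experiments.
volume_ids = np.arange(40)

# Fixing the seed guarantees that every compared method (e.g. DnCNN, RED-CNN,
# CaGAN, ISCL, UIDNet, ADN) is trained and validated on identical folds, so
# score differences reflect the methods rather than the data split.
kfold = KFold(n_splits=4, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kfold.split(volume_ids)):
    print(f"fold {fold}: train={len(train_idx)} volumes, val={len(val_idx)} volumes")
```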