2023
DOI: 10.1109/jbhi.2022.3216887
Transformer With Double Enhancement for Low-Dose CT Denoising

Cited by 17 publications (11 citation statements) · References 24 publications
“…Additionally, two studies adopt Transformer-based approaches, 150,151 with the remaining approaches including filter-based as well as hybrid methods. 152-157 It should be noted that some models are originally developed, 61,62,65,66,68,70,72,73,77,122,153 while others are developed by modifying the original models' loss functions or layers, or by extending the original models to different domains.…”
Section: DL-Based Noise Reduction Methods
confidence: 99%
“…The model's built-in self-attention mechanism enables it to non-linearly capture relationships between segments of the input sequence, making it a good choice for processing high-dimensional CT images. 150 The Transformer model has reached state-of-the-art performance in a variety of natural language processing applications and has also shown promise in other fields, such as image processing. Additionally, the Transformer model can process each part of the image separately, making it more efficient for handling large images compared to typical convolutional neural networks.…”
Section: Transformer-Based Methods
confidence: 99%
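To make the patch-wise self-attention idea in the statement above concrete, the following is a minimal PyTorch sketch, not the cited paper's actual architecture: it splits a CT slice into non-overlapping patch tokens and relates every patch to every other patch with multi-head self-attention. The PatchSelfAttention class and the patch size, embedding width, and head count are illustrative assumptions.

import torch
import torch.nn as nn

class PatchSelfAttention(nn.Module):
    """Illustrative sketch only: patch embedding followed by multi-head
    self-attention over the patches of a single-channel CT slice.
    Hyperparameters are arbitrary, not taken from the cited paper."""
    def __init__(self, patch_size=16, embed_dim=64, num_heads=4):
        super().__init__()
        # Non-overlapping patch embedding via a strided convolution.
        self.embed = nn.Conv2d(1, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):                          # x: (B, 1, H, W) low-dose CT slice
        tokens = self.embed(x)                     # (B, C, H/ps, W/ps)
        tokens = tokens.flatten(2).transpose(1, 2) # (B, N, C), one token per patch
        # Self-attention relates each patch to all others, capturing
        # long-range dependencies across the whole image.
        attended, _ = self.attn(tokens, tokens, tokens)
        return self.norm(tokens + attended)        # residual connection + layer norm

if __name__ == "__main__":
    slice_ = torch.randn(2, 1, 256, 256)           # toy batch of 256x256 slices
    out = PatchSelfAttention()(slice_)
    print(out.shape)                               # (2, 256, 64): 16x16 patch grid

Because attention operates on patch tokens rather than full-resolution pixels, the quadratic cost of self-attention is paid over a few hundred tokens per slice, which is what makes this kind of model tractable for large CT images.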