2023
DOI: 10.1109/access.2023.3272055

DenseTrans: Multimodal Brain Tumor Segmentation Using Swin Transformer

Abstract: Aiming at the task of automatic brain tumor segmentation, this paper proposes a new DenseTrans network. To alleviate the problem that convolutional neural networks (CNNs) cannot establish long-distance dependencies or obtain global context information, the Swin Transformer is introduced into the UNet++ network, and local feature information is extracted by the convolutional layers in UNet++. Then, in the high-resolution layers, the shifted-window operation of the Swin Transformer is utilized and self-attention learning windows …
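To make the architecture described in the abstract concrete, here is a minimal sketch of the core idea: convolutional layers supply local features while windowed self-attention, as in the Swin Transformer, adds global context. All class names and hyperparameters below are illustrative assumptions, not the authors' implementation, and the cyclic window shift between successive Swin blocks is omitted for brevity.

```python
# Sketch of the idea in the abstract: convolutional feature extraction
# (as in UNet++) followed by windowed self-attention (as in the Swin
# Transformer). Names and sizes are illustrative, not the paper's code.
import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    """Multi-head self-attention computed inside non-overlapping windows."""
    def __init__(self, dim, window_size=7, num_heads=4):
        super().__init__()
        self.window_size = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                      # x: (B, C, H, W)
        B, C, H, W = x.shape
        ws = self.window_size
        # Partition the feature map into (ws x ws) windows of tokens.
        x = x.view(B, C, H // ws, ws, W // ws, ws)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, C)
        x, _ = self.attn(x, x, x)              # attention within each window
        # Reverse the partition back to (B, C, H, W).
        x = x.view(B, H // ws, W // ws, ws, ws, C)
        x = x.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)
        return x

class ConvSwinBlock(nn.Module):
    """Conv layer for local features, window attention for global context."""
    def __init__(self, dim):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.BatchNorm2d(dim), nn.ReLU())
        self.attn = WindowAttention(dim)

    def forward(self, x):
        x = self.conv(x)            # local features
        return x + self.attn(x)    # residual global context

feat = torch.randn(1, 32, 56, 56)  # H and W divisible by the window size
print(ConvSwinBlock(32)(feat).shape)  # torch.Size([1, 32, 56, 56])
```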

Cited by 12 publications (6 citation statements); references 42 publications.
“…This section discusses the analysis of outcomes obtained from various simulation experiments for BT segmentation and OSP. The existing methods, such as ResUNet+, 53 the radiomics-based automatic framework (RBAF), 35 DCNN, 54 the encoder–decoder method with depthwise atrous spatial pyramid pooling network (EDD-Net), 55 the weight-loss-function and dropout U-Net with ConvNeXt block (WD-UNeXt), 56 the focal cross transformer, 57 attention-based multimodal glioma segmentation (AMMGS), 58 segmentation based on Transformer and U2-Net (STrans-U2Net), 59 the deep residual U-Net (dResU-Net), 60 and the dense Transformer (DenseTrans) 61 are compared with the introduced approach based on DSC, Sy, Sp, and Hausdorff 95 in Table 6.…”
Section: Experimental Outcomes
Citation type: mentioning (confidence: 99%)
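For readers unfamiliar with the metrics compared in the excerpt above, the following is a minimal sketch of how DSC and the 95th-percentile Hausdorff distance (Hausdorff 95) are commonly computed for binary masks. The function names and the SciPy-based distance transform are illustrative assumptions, not code from any of the cited papers, and this HD95 variant uses all foreground voxels rather than extracted surfaces.

```python
# Illustrative computation of Dice similarity coefficient (DSC) and
# 95th-percentile Hausdorff distance (HD95) for binary masks.
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2|P ∩ G| / (|P| + |G|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred: np.ndarray, gt: np.ndarray) -> float:
    """95th percentile of symmetric voxel-to-mask distances.

    Simplified variant: evaluated on all foreground voxels; common
    implementations restrict this to boundary voxels.
    """
    d_to_gt = distance_transform_edt(~gt.astype(bool))    # dist to nearest gt voxel
    d_to_pred = distance_transform_edt(~pred.astype(bool))
    dists = np.concatenate([d_to_gt[pred.astype(bool)],
                            d_to_pred[gt.astype(bool)]])
    return float(np.percentile(dists, 95))

pred = np.zeros((64, 64, 64), bool); pred[20:40, 20:40, 20:40] = True
gt   = np.zeros((64, 64, 64), bool); gt[22:42, 22:42, 22:42] = True
print(f"DSC = {dice(pred, gt):.3f}, HD95 = {hd95(pred, gt):.2f}")
```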
“…Among the two performance measures DSC and Sy, the existing TransU2-Net method 60 achieves a higher DSC of 92% for WT and a lower Hausdorff 95 of 0.29. The implemented dResU-Net method 61 achieves an Sp score of 99% for WT. The existing DenseTrans algorithm attains a maximum DSC score of 93% for WT.…”
Section: Experimental Outcomes
Citation type: mentioning (confidence: 99%)
“…The encoder of VT-UNet calculated both local and global attention, while its decoder adopted parallel self-attention and cross-attention to capture and optimize boundary details. Li et al. [135] embedded a Transformer into UNet++ [136] and used a dense feature extractor to model global feature information and remote dependencies. The U-shaped structure retains the advantage of a symmetric model, enabling the feature maps to establish a one-to-one matching relationship from shallow to deep layers.…”
Section: Brain Tumor Segmentation
Citation type: mentioning (confidence: 99%)
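As a rough illustration of the one-to-one shallow-to-deep matching described in the excerpt above, here is a minimal U-shaped sketch in which each decoder level is concatenated with the encoder feature map of the same resolution. Names and sizes are hypothetical; this is not the VT-UNet, UNet++, or DenseTrans code.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """U-shaped net: each decoder level is concatenated with the
    encoder feature map of matching resolution (one-to-one skips)."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc1 = nn.Conv2d(1, ch, 3, padding=1)
        self.enc2 = nn.Conv2d(ch, ch * 2, 3, padding=1)
        self.down = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear")
        self.dec1 = nn.Conv2d(ch * 2 + ch, ch, 3, padding=1)
        self.head = nn.Conv2d(ch, 1, 1)

    def forward(self, x):
        s1 = torch.relu(self.enc1(x))               # shallow, high resolution
        s2 = torch.relu(self.enc2(self.down(s1)))   # deep, low resolution
        u = self.up(s2)
        u = torch.cat([u, s1], dim=1)               # skip: deep matched to shallow
        return self.head(torch.relu(self.dec1(u)))

print(TinyUNet()(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```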
“…The encoder of VT-UNet calculated both local and global attention, while its decoder adopted parallel self-attention and cross-attention to capture and optimize boundary details. Li et al. [135] embedded a Transformer into UNet++ [136] and used a dense feature extractor to model global feature information and remote dependencies.…”
Section: Transformers In Brain Sciences
Citation type: mentioning (confidence: 99%)
“…Consequently, more global features are acquired, significantly augmenting the accuracy of brain tumor identification in imaging studies. Zong et al. [7] conducted a prominent study that emphasized the effectiveness of mega-convolution in the context of specific image datasets. The study showcased the model's impressive performance, achieving a top-1 accuracy of 87.8% and an mIoU of 56% on the ADE20K dataset.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
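Assuming that "mega-convolution" in the excerpt above refers to large-kernel depthwise convolution (an interpretation, not stated in the excerpt), a short sketch shows how such a layer widens the receptive field while preserving resolution; the kernel size of 31 is an illustrative choice.

```python
import torch
import torch.nn as nn

# Depthwise convolution with a very large kernel: each channel is
# filtered independently (groups=channels), giving a wide receptive
# field at modest parameter cost. Kernel size 31 is an assumption,
# chosen only to illustrate the "mega" scale.
dw_mega = nn.Conv2d(64, 64, kernel_size=31, padding=15, groups=64)
x = torch.randn(1, 64, 56, 56)
print(dw_mega(x).shape)  # torch.Size([1, 64, 56, 56]) -- resolution preserved
```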