2022 7th International Conference on Signal and Image Processing (ICSIP)
DOI: 10.1109/icsip55141.2022.9886148
Depth Swin Transformer Unet for Serial Section Biomedical Image Segmentation

Cited by 1 publication (2 citation statements). References 19 publications.
“…It has proven its effectiveness in handling complex image segmentation tasks. It is essential to highlight that our framework is highly flexible, enabling designers to freely choose different backbones, such as Res-Unet3D (Li et al 2022) and TransBTS (Lin et al 2022), among others. This adaptability further enhances the versatility and applicability of our approach.…”
Section: Backbone Network (mentioning)
confidence: 99%
“…, HIVE-Net (Yuan et al 2021), Zhili (Li et al 2021), Peng and Yuan (Peng and Yuan 2019), Xiao (Xiao et al 2018), Res-Unet3D (Li et al 2022), HIVE-Net (Yuan et al 2021), and nnU-Net (Isensee et al 2021). For Transformer-based methods, we choose DSTUnet (Lin et al 2022), TransBTS (Wang et al 2021), UNETR (Hatamizadeh et al 2022b), nnFormer (Zhou et al 2021), SwinUNETR (Hatamizadeh et al 2022a), and 3D UX-Net (Lee et al 2023). The λ1 and λ2 in the loss function are set to be 1 and 0.5 following (Zou et al 2022).…”
mentioning
confidence: 99%
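For context, the quoted setup combines two loss terms with weights λ1 = 1 and λ2 = 0.5. The statement does not say which terms are weighted; the sketch below assumes, purely for illustration, a cross-entropy term plus a soft Dice term, a common pairing in volumetric segmentation. Names such as `combined_loss` are hypothetical, not from the cited papers.

```python
# Minimal sketch of a two-term weighted segmentation loss, assuming the terms
# are cross-entropy and soft Dice (the quoted statement only gives the weights
# lambda1 = 1.0 and lambda2 = 0.5, not the terms themselves).
import torch
import torch.nn.functional as F


def combined_loss(logits, target, lambda1=1.0, lambda2=0.5, eps=1e-6):
    """logits: (N, C, D, H, W) raw scores; target: (N, D, H, W) integer labels."""
    # Term 1 (assumed): multi-class cross-entropy on the voxel labels.
    ce = F.cross_entropy(logits, target)

    # Term 2 (assumed): soft Dice loss averaged over classes.
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)
    intersection = (probs * one_hot).sum(dims)
    cardinality = probs.sum(dims) + one_hot.sum(dims)
    dice = 1.0 - ((2.0 * intersection + eps) / (cardinality + eps)).mean()

    # Weighted sum with the weights reported in the citing paper.
    return lambda1 * ce + lambda2 * dice
```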