2022 IEEE 2nd International Conference on Software Engineering and Artificial Intelligence (SEAI)
DOI: 10.1109/seai55746.2022.9832144
Spine X-ray Image Segmentation Based on Transformer and Adaptive Optimized Postprocessing

Cited by 1 publication (5 citation statements)
References 8 publications
“…In our study, we trained and compared a total of 15 models on our lumbar spine MRI dataset. These models are Attention U-Net (Oktay et al., 2018), HSNet (Zhang et al., 2022), Inception-SwinUnet (Pu et al., 2023), MedT (Valanarasu et al., 2021), MultiResUNet (Ibtehaz and Rahman, 2020), SLT-Net (Feng et al., 2022), Swin-Unet (Cao et al., 2023), UNETR (Hatamizadeh et al., 2021), Swin UNETR (Hatamizadeh et al., 2022), TransUNet (Chen et al., 2021), UCTransNet (Wang et al., 2022), UNet++ (Zhou et al., 2018), UNeXt (Valanarasu and Patel, 2022), UTNet (Gao et al., 2021), and BianqueNet (Zheng et al., 2022). We were unable to test all models mentioned in Section 2 due to either unavailability (e.g., no source code) or incompatibility (e.g., size not matching).…”
Section: Methods
confidence: 99%
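
The statement above describes benchmarking 15 segmentation networks on a lumbar spine MRI dataset. As a minimal sketch only (binary masks and illustrative model outputs; nothing here is taken from the cited paper's code or data), the following shows the kind of Dice-score comparison commonly used for that sort of benchmarking:

import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

if __name__ == "__main__":
    # Hypothetical predictions from two models against the same ground-truth mask
    gt = np.zeros((64, 64), dtype=np.uint8)
    gt[16:48, 16:48] = 1
    model_a = np.zeros_like(gt); model_a[16:48, 16:50] = 1  # slightly over-segments
    model_b = np.zeros_like(gt); model_b[20:48, 16:48] = 1  # slightly under-segments
    print("model A Dice:", round(dice_score(model_a, gt), 3))
    print("model B Dice:", round(dice_score(model_b, gt), 3))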
“…Medical Transformer (MedT) uses gated axial Transformer layers in the encoder of a U-Net (Valanarasu et al., 2021). HSNet uses PVTv2 (Wang et al., 2022) as its encoder and, as its decoder, a dual-branch structure in which a Transformer branch and a CNN branch are fused by element-wise product, for polyp segmentation (Zhang et al., 2022).…”
Section: Review of DNN Models for Lumbar Image Segmentation
confidence: 99%
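
The HSNet description above fuses a Transformer branch and a CNN branch by element-wise product in the decoder. The sketch below is a hypothetical PyTorch illustration of that fusion pattern only; the module name, channel width, and 1x1 projections are assumptions, not the cited implementation:

import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    """Fuse a Transformer-branch feature map and a CNN-branch feature map
    by element-wise (Hadamard) product, as in the dual-branch decoder idea above."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions project both branches to a common channel width (assumed detail)
        self.proj_trans = nn.Conv2d(channels, channels, kernel_size=1)
        self.proj_cnn = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat_trans: torch.Tensor, feat_cnn: torch.Tensor) -> torch.Tensor:
        # Element-wise product of the two projected feature maps
        return self.proj_trans(feat_trans) * self.proj_cnn(feat_cnn)

if __name__ == "__main__":
    fuse = DualBranchFusion(channels=64)
    t = torch.randn(1, 64, 32, 32)  # Transformer-branch features (hypothetical shape)
    c = torch.randn(1, 64, 32, 32)  # CNN-branch features (hypothetical shape)
    print(fuse(t, c).shape)  # torch.Size([1, 64, 32, 32])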