2022
DOI: 10.1016/j.media.2021.102313
ResGANet: Residual group attention network for medical image classification and segmentation

Cited by 116 publications (33 citation statements)
References 17 publications

“…Table 3 shows that the ResNet101 backbone achieves slightly better results in classifying the four tissues. Experimental results show that, whether for DenseNet or ResNet, a deeper network yields better analysis results, which has also been verified in other image classification and segmentation tasks (Khened et al. [51]; Cheng et al. [52]).…”
Section: Experiments and Results
confidence: 54%
“…Most downstream applications still use ResNet and its variants as the backbone network. Cheng et al. [30] proposed a modular group attention block that captures feature dependencies in medical images in both the channel and spatial dimensions, and stacked these group attention blocks in the ResNet style to improve model classification performance. Extensive experiments by Rathore et al. [31] on the ADNI [32] dataset showed that the DenseNet model improved classification accuracy by about 9% over traditional machine learning, demonstrating the usefulness of the DenseNet model.…”
Section: Related Work
confidence: 99%
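
The quoted statement describes the core ResGANet idea: channel and spatial attention composed with a residual connection. Below is a minimal sketch of such a block in PyTorch; the class name GroupAttentionBlock and all internal choices (squeeze-and-excitation-style channel attention, the reduction ratio, the 7×7 spatial convolution) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class GroupAttentionBlock(nn.Module):
    """Hypothetical residual attention block: channel attention followed by
    spatial attention, added back to the input ResNet-style."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: global average pool, then a small bottleneck
        # (squeeze-and-excitation style) producing one weight per channel.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: a single-channel weight map over H x W positions.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = x * self.channel_att(x)      # reweight feature channels
        out = out * self.spatial_att(out)  # reweight spatial positions
        return x + out                     # residual (ResNet-style) shortcut

# Shape check: output matches input, so blocks can be stacked freely.
y = GroupAttentionBlock(64)(torch.randn(1, 64, 32, 32))
assert y.shape == (1, 64, 32, 32)
```

Because the block preserves the input shape, stacking it in a ResNet-style backbone requires no other architectural changes.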
“…We compare the proposed F2RNet with eight state-of-the-art methods (U-Net [1], U-Net++ [3], ResUNet [5], R2UNet [16], BiONet [15], ResGANet [21], TransUNet [8] and SwinUNet [22]) on three datasets. We adopt the two most commonly used evaluation metrics in semantic segmentation (i.e., IoU and Dice) to evaluate the above methods.…”
Section: Comparison With State-of-the-arts
confidence: 99%
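
For reference, the two metrics named in this quote can be computed on binary masks as follows. This is a minimal sketch assuming PyTorch tensors; the smoothing constant eps is an assumption, since the quoted paper does not give its exact formulation.

```python
import torch

def iou_and_dice(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    """Compute IoU and Dice for binary masks of identical shape.

    eps is a small smoothing constant (an assumption here) that avoids
    division by zero when both masks are empty.
    """
    pred, target = pred.bool(), target.bool()
    inter = (pred & target).sum().float()   # |P ∩ T|
    union = (pred | target).sum().float()   # |P ∪ T|
    iou = (inter + eps) / (union + eps)
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    return iou.item(), dice.item()

# Example: two overlapping masks.
a = torch.tensor([[1, 1, 0, 0]])
b = torch.tensor([[1, 0, 0, 0]])
print(iou_and_dice(a, b))  # IoU = 1/2, Dice = 2/3 (up to eps)
```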