2021
DOI: 10.48550/arxiv.2108.07368
Preprint

CaraNet: Context Axial Reverse Attention Network for Segmentation of Small Medical Objects

Cited by 13 publications (17 citation statements)
References 0 publications
“…Table 4. Evaluation results of different models using the CVC-ClinicDB dataset:

Model             Dice    mIoU
SFFormer-L [25]   0.9357  0.8905
PraNet [34]       0.898   0.849
TransFuse-L [26]  0.918   0.868
CaraNet [35]      0.918   0.865
ResUNet++ [17]    0.8133  0.793
U-Net++ [37]      0.821   0.722
U-Net [14]        0.818   0.742
PRAPNet           0.942   0.906
…”
Section: Methods
confidence: 99%
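Dice and mIoU in the table above are standard overlap metrics for segmentation masks. A minimal numpy sketch of how they are computed for a binary mask (the function name is illustrative, not taken from any cited codebase):

```python
import numpy as np

def dice_iou(pred, target, eps=1e-8):
    """Dice and IoU for binary masks given as numpy arrays of 0/1."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum() + eps)  # 2|A∩B| / (|A|+|B|)
    iou = inter / (union + eps)                             # |A∩B| / |A∪B|
    return dice, iou

# Toy 3x3 masks: 2 overlapping pixels, 3 predicted, 3 ground-truth
pred = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
target = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
d, i = dice_iou(pred, target)  # d ≈ 0.667, i = 0.5
```

mIoU in the benchmarks above is this IoU averaged over the test images.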
“…The attention mechanism [24] focuses on a subset of its input and is widely used in natural language translation. In recent years, it has also been applied to semantic segmentation tasks [35,36] such as pixel-wise prediction. The attention mechanism helps a neural network determine which parts of its input deserve more attention.…”
Section: Attention Units
confidence: 99%
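The core of the attention mechanism described above is a weighted average over the input, with weights computed from query-key similarity. A self-contained numpy sketch of scaled dot-product attention (illustrative names, not from reference [24]):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention.
    q: (n_queries, d), k/v: (n_keys, d). Each output row is a convex
    combination of the value rows -- the 'subset focusing' behaviour."""
    scores = q @ k.T / np.sqrt(q.shape[-1])  # similarity of each query to each key
    weights = softmax(scores, axis=-1)       # rows sum to 1
    return weights @ v, weights

rng = np.random.default_rng(0)
out, w = attention(rng.normal(size=(4, 8)),
                   rng.normal(size=(5, 8)),
                   rng.normal(size=(5, 8)))
```

In segmentation, the "tokens" are typically spatial positions of a feature map, so the weights express which image regions each pixel should attend to.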
“…Inspired by CaraNet [37], we use Channel-wise Feature Pyramid (CFP) to extract features from the encoder in multiscale views. As depicted in Fig.…”
Section: Refinement Module
confidence: 99%
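CaraNet's actual CFP module uses learned dilated-convolution branches; as a rough illustration of the channel-split, multi-scale idea only, here is a numpy sketch that substitutes mean filters of increasing window size for the learned branches (all names and window sizes are assumptions, not the paper's design):

```python
import numpy as np

def box_filter(x, k):
    """Mean filter with odd window size k over a 2-D map (edge padding)."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x, dtype=float)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def channel_pyramid(feat, ks=(1, 3, 5, 7)):
    """Split channels into groups, process each group at a different
    spatial scale, and concatenate -- a stand-in for CFP's idea of
    extracting features in multi-scale views per channel group."""
    groups = np.array_split(feat, len(ks), axis=0)  # feat: (C, H, W)
    out = [np.stack([box_filter(ch, k) for ch in g])
           for g, k in zip(groups, ks)]
    return np.concatenate(out, axis=0)  # same (C, H, W) shape as input
```

The concatenated output mixes fine detail (small windows) with context (large windows), which is the property the refinement module relies on.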
“…CaraNet [37] also enhanced the Reverse Attention module by an axial attention block, which is a straightforward generalization of self-attention that naturally aligns with the multiple dimensions of the tensors. This module is supposed to filter the necessary information for the refinement process.…”
Section: Refinement Module
confidence: 99%
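Axial attention factorizes 2-D self-attention into two 1-D passes, one per spatial axis, reducing the cost from O((HW)²) to roughly O(HW·(H+W)). A minimal numpy sketch of the idea only — CaraNet's actual block would add learned query/key/value projections and positional terms:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axial_attention(x):
    """Self-attention along rows, then columns, of an (H, W, C) map.
    Each pass treats one spatial axis as the token sequence."""
    x = x.astype(float).copy()
    h, w, c = x.shape
    scale = np.sqrt(c)
    # width pass: each row is a sequence of W tokens
    for i in range(h):
        scores = x[i] @ x[i].T / scale      # (W, W)
        x[i] = softmax(scores) @ x[i]
    # height pass: each column is a sequence of H tokens
    for j in range(w):
        col = x[:, j]                       # (H, C)
        scores = col @ col.T / scale        # (H, H)
        x[:, j] = softmax(scores) @ col
    return x
```

In the citing paper's description, a block of this kind filters the feature map before the reverse-attention refinement step.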
“…Many image segmentation models based on Convolutional Neural Networks (CNNs) have recently achieved excellent results on several polyp segmentation benchmarks [5,1,2,10,4]. However, because of the top-down modelling approach of CNNs, and because polyp morphology is highly variable while the overall structure of polyp images is relatively simple, these models lack generalisation ability and struggle to process unseen datasets. To improve the generalisation ability of the deep learning model, we incorporate the Transformer architecture into the polyp segmentation task.…”
Section: Introduction
confidence: 99%