2021
DOI: 10.1007/978-3-030-87193-2_60

CCBANet: Cascading Context and Balancing Attention for Polyp Segmentation

Cited by 49 publications (29 citation statements)
References 16 publications
“…In order to prove the effectiveness of the GCLDNet method, we selected some SOTA networks for comparison, including FCN (35), U-Net (36), UNet++ (37), FPN (38), PSPNet (39), SegNet (40), LinkNet (41), DeepLabV3 (42), MultiResUNet (43), CCBANet (44), U2net (45), and UNet3+ (46). To ensure the fairness of the comparison process, the comparative methods adopt the same preprocessing as GCLDNet, and their parameters are tuned to the optimal state.…”
Section: Results
confidence: 99%
“…Among newly proposed methods, SANet [41] and MSNet [42] design a shallow attention module and a subtraction unit, respectively, to achieve precise and efficient segmentation. Additionally, several works introduce additional constraints in three mainstream ways: exerting explicit boundary supervision [43-47], introducing implicit boundary-aware representations [48-50], and exploring uncertainty for ambiguous regions [51]. 2) Transformer-based approaches.…”
Section: Image Polyp Segmentation (IPS)
confidence: 99%
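
The "explicit boundary supervision" mentioned in this statement generally means deriving a thin boundary map from the ground-truth mask and penalizing a dedicated boundary prediction alongside the region prediction. The following PyTorch sketch illustrates that shared idea only, not any specific cited paper's implementation; the morphological-gradient boundary extraction, boundary width, and loss weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def boundary_target(mask, width=3):
    """Derive a thin boundary map from a binary ground-truth mask (B, 1, H, W)
    using a morphological gradient: dilation minus erosion via max pooling."""
    pad = width // 2
    dilated = F.max_pool2d(mask, kernel_size=width, stride=1, padding=pad)
    eroded = -F.max_pool2d(-mask, kernel_size=width, stride=1, padding=pad)
    return (dilated - eroded).clamp(0, 1)

def seg_with_boundary_loss(seg_logits, bnd_logits, mask, bnd_weight=1.0):
    """Region segmentation loss plus an explicit boundary-supervision term
    (bnd_weight is an illustrative balancing factor)."""
    seg_loss = F.binary_cross_entropy_with_logits(seg_logits, mask)
    bnd_loss = F.binary_cross_entropy_with_logits(bnd_logits, boundary_target(mask))
    return seg_loss + bnd_weight * bnd_loss
```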
“…The dependency will be calculated between the anchor frame and the sampled consecutive frames within a sliding window. Following PraNet [48], we use the same backbone, Res2Net-50 [65], to extract the feature in the layer. To alleviate the computational burden, we adopt an RFB-like [66] module to reduce the channel dimension of the extracted feature and generate the anchor feature.…”
Section: Global Encoder
confidence: 99%
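
For context, an "RFB-like module to reduce the channel dimension" usually denotes a receptive-field-block variant with parallel dilated-convolution branches whose outputs are fused down to a small channel count, placed after a backbone stage such as Res2Net-50. The PyTorch sketch below illustrates that general pattern under assumed channel sizes and dilation rates; it is not the cited paper's exact module.

```python
import torch
import torch.nn as nn

class RFBLike(nn.Module):
    """RFB-like channel reduction: parallel dilated-conv branches are
    concatenated and fused back to a small channel dimension."""
    def __init__(self, in_ch=1024, out_ch=32, rates=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)
        self.shortcut = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        y = torch.cat([branch(x) for branch in self.branches], dim=1)
        return torch.relu(self.fuse(y) + self.shortcut(x))

# Usage sketch: reduce a hypothetical Res2Net-50 stage feature (B, 1024, H, W)
# to a 32-channel anchor feature.
feat = torch.randn(2, 1024, 22, 22)
anchor_feat = RFBLike(1024, 32)(feat)   # -> (2, 32, 22, 22)
```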
“…Nguyen et al., 2021]  99.02  70.68  79.77  95.82  83.68  88.31
ICGNet (Ours)  99.15  74.97  82.64  96.56  85.92  91.40…”
confidence: 99%