2020
DOI: 10.1007/978-3-030-59725-2_28

PolypSeg: An Efficient Context-Aware Network for Polyp Segmentation from Colonoscopy Videos

Cited by 26 publications (10 citation statements)
References 16 publications
“…In recent years, attention has also been widely used in the field of computer vision, especially for semantic segmentation, which requires detailed edge information at the pixel level. Examples include PraNet [13], PolypSeg [35], and ABC-Net [14]. After adding different context modules, they all achieve good results in medical image segmentation.…”
Section: Related Work
confidence: 99%
“…Later, Akbari et al. [33] introduced a modified FCN to improve segmentation accuracy. Inspired by the vast success of UNet [34] in biomedical image segmentation, UNet++ [35] and ResUNet [36] were employed for polyp segmentation with improved performance. Furthermore, PolypSeg [37], ACS [38], ColonSegNet [39], and SCR-Net [40] explore the effectiveness of UNet-enhanced architectures in adaptively learning semantic contexts.…”
Section: Image Polyp Segmentation (IPS)
confidence: 99%
“…The literature also revealed that the majority of the existing works evaluated their models on test sets derived from the same datasets [28][29][30][31]. An exception to this trend was the recently published work of Jha et al. [5]. They performed inter-dataset evaluation to prove the generalizing capability of their proposed model. However, they performed one-fold cross-validation for all their experiments.…”
Section: Related Work
confidence: 99%