2023
DOI: 10.1016/j.neucom.2022.11.062

Feature pyramid network with multi-scale prediction fusion for real-time semantic segmentation

Cited by 17 publications (19 citation statements) · References 19 publications
“…In Refs. 24 and 25, after connecting to multiscale resolution images, the image information is fused by a fusion module with an attention mechanism. The experiments all demonstrate the importance of a multiscale resolution feature fusion module with an attention mechanism for image feature fusion.…”
Section: Related Work
Mentioning confidence: 99%
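The attention-driven fusion described in the quoted passage can be illustrated with a short PyTorch sketch. This is a minimal, hypothetical example (module and tensor names are assumptions, not the cited authors' implementation): a deep low-resolution map is upsampled, summed with a shallow high-resolution map, and the result is re-weighted by a channel-attention gate.

```python
# Minimal sketch of attention-weighted multi-scale fusion (an assumption for
# illustration, not the cited papers' code): upsample the coarse map, add it
# to the fine map, and re-weight channels with a squeeze-and-excitation gate.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Global pooling + 1x1 conv + sigmoid produce per-channel weights
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, coarse: torch.Tensor, fine: torch.Tensor) -> torch.Tensor:
        # Bring the deep, low-resolution features to the shallow map's size
        coarse_up = F.interpolate(coarse, size=fine.shape[-2:],
                                  mode="bilinear", align_corners=False)
        fused = coarse_up + fine
        # Channel attention modulates the fused representation
        return fused * self.gate(fused)

if __name__ == "__main__":
    fuse = AttentionFusion(channels=64)
    deep = torch.randn(1, 64, 16, 16)     # semantically rich, low resolution
    shallow = torch.randn(1, 64, 64, 64)  # spatially detailed, high resolution
    print(fuse(deep, shallow).shape)      # torch.Size([1, 64, 64, 64])
```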
“…However, the absence of multiscale image input limited the author to attain only a Dice score of 0.9421. Van Quyen and Kim 54 proposed a dual prediction method to effectively capture both thin and large objects within complex street scenes. Employing a pyramid-like approach, this method fell short in achieving a high score, as the target environment did not involve medical images, resulting in a final score of only 0.9233.…”
Section: Experiments, Data and Preprocessing
Mentioning confidence: 99%
“…Some algorithms determine the location region by the contextual information [ 45 ] of targets [ 46 ]. Lin et al [ 47 ] introduced a multi-scale fusion strategy of a feature pyramid network (FPN) [ 48 ] to extract and fuse features at different scales, and obtained deep semantic information and shallow position information. Chen et al [ 49 ] considered different feature extraction methods based on depth and shallow features to improve the detection effect of small targets.…”
Section: Related Work
Mentioning confidence: 99%
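As a companion to the FPN strategy of Lin et al. mentioned in the quote above, the following is a minimal sketch (an illustrative assumption, not the authors' code) of the standard top-down pathway: lateral 1x1 convolutions project backbone features to a common width, deeper maps are upsampled and added to shallower ones, and 3x3 convolutions smooth each fused level, combining deep semantic information with shallow position information.

```python
# Minimal FPN-style top-down fusion sketch (an assumption for illustration,
# not the code of Lin et al. [47]): deep semantic features are progressively
# upsampled and merged with shallow, position-rich features.
from typing import List

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFPN(nn.Module):
    def __init__(self, in_channels: List[int], out_channels: int = 128):
        super().__init__()
        # Lateral 1x1 convs project every backbone stage to the same width
        self.laterals = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )
        # 3x3 convs smooth each merged level
        self.smooth = nn.ModuleList(
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
            for _ in in_channels
        )

    def forward(self, feats: List[torch.Tensor]) -> List[torch.Tensor]:
        # feats ordered shallow -> deep (decreasing spatial resolution)
        laterals = [lat(f) for lat, f in zip(self.laterals, feats)]
        # Top-down pathway: upsample deeper maps and add to shallower ones
        for i in range(len(laterals) - 1, 0, -1):
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest"
            )
        return [s(l) for s, l in zip(self.smooth, laterals)]

if __name__ == "__main__":
    fpn = TinyFPN(in_channels=[64, 128, 256])
    feats = [torch.randn(1, 64, 64, 64),
             torch.randn(1, 128, 32, 32),
             torch.randn(1, 256, 16, 16)]
    for level in fpn(feats):
        print(level.shape)  # three maps, each with 128 channels
```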
“…To deal with that, multi-scale learning is used, as a strategy to effectively integrate two types of feature information and perform better semantic representation. According to [ 43 , 47 ], it is appropriate to use separate groups of features to model distinct factors. One concern is that the shallow feature information required by small target detection can be easily diluted in the extraction process.…”
Section: ST-CenterNet
Mentioning confidence: 99%
“…Several intrinsic difficulties hinder the accurate segmentation of tiny objects, including information loss, noisy feature representation, inadequate samples and sensitivity to perturbation [35]. To overcome these limitations, extensive efforts have been made from various perspectives [36], such as data augmentation [37], soft label assignment [38], scale-specific segmentation [39], feature reassembly [40], attention-based segmentation [41], similarity-aware learning [42], super-resolution-based segmentation [43], context-aware modeling [44], focus-aware segmentation [16], etc. These strategies are not used separately; in most cases they are combined to achieve better performance.…”
Section: Related Work
Mentioning confidence: 99%