2023
DOI: 10.1117/1.jrs.17.016511

Synthetic aperture radar ship detection in complex scenes based on multifeature fusion network

Abstract: With the development of synthetic aperture radar (SAR) technology, more SAR datasets with high resolution and large scale have been obtained. Research using SAR images to detect and monitor marine targets has become one of the most important marine applications. In recent years, deep learning has been widely applied to target detection. However, it was difficult to use deep learning to train an SAR ship detection model in complex scenes. To resolve this problem, an SAR ship detection method combining YOLOv4 an…

Cited by 3 publications (4 citation statements)
References 32 publications (48 reference statements)
“…Analyzing the specific reasons, it can be seen that the Ghost-ECA and transformer blocks effectively suppress the interference of sea clutter and coherent speckle noise on the detection process and enhance the expression of effective features. Meanwhile, the SIoU loss function is used. To evaluate the detection performance of the method in this paper, Table IV shows the experimental results of comparing the EGTB-Net model with seven mainstream target detection models on the SAR-Ship-Dataset; we mainly selected Quad-FPN [18], RetinaNet [16], CY-RFB [20], YOLOv7 [38], DETR [39], YOLOv4 [40], and YOLOX-S [30] for comparison. Table IV shows that the most important metrics of the EGTB-Net model, the mAP value of 94.83% and the F1 value of 92%, are higher than those of the next-best CY-RFB model.…”
Section: F. Comparison With the Latest SAR Ship Detection Methods
confidence: 99%
“…The method designs a new lightweight backbone network (LWBackbone) and a new hybrid-domain attention mechanism (CNAM), which better balance the model's detection accuracy and operational efficiency in complex scenarios. Zhang et al. [20] proposed a ship detection network combining YOLOv4 and RFB (CY-RFB), which achieved good detection accuracy for SAR ship detection in complex scenes. Zhao et al. [21] proposed a novel attention receptive pyramid network (ARPN), which utilized the convolutional block attention module (CBAM) and the receptive field block (RFB) to improve the detection performance of multi-scale ships.…”
Section: Introduction
confidence: 99%
“…Each image data tensor is initially processed by three parallel branches of 3D convolution, batch normalization, and ReLU operations. The kernel sizes of the branches range from large image sectors of 32×32, through medium sectors of 8×8, down to pixel-pair couplings of 2×2, while convolving across the spatial-temporal poses of all three channels with a 2×2×1 stride [4,10]. Each branch's feature tensor output is then pooled with a 3D max pooling operation, taking the largest feature values over 2×2 spatial windows with a stride of 2×2, and then across 2 temporal steps.…”
Section: Development of Approach
confidence: 99%
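The three-branch scheme described in that excerpt can be sketched in PyTorch. The branch width (8 output channels), the 64×64 input size, and the adaptive-pooling fusion step are illustrative assumptions, not details taken from the cited paper; only the three spatial kernel scales (32×32, 8×8, 2×2), the 2×2×1 stride across the three channel planes, and the conv→batch-norm→ReLU→3D-max-pool branch layout come from the text:

```python
import torch
import torch.nn as nn

class Branch3D(nn.Module):
    """One branch: 3D conv -> batch norm -> ReLU -> 3D max pool."""
    def __init__(self, k):
        super().__init__()
        self.net = nn.Sequential(
            # Convolve a k x k spatial kernel across all 3 channel planes
            # (depth kernel 3, padded so depth is preserved), stride 2x2x1.
            nn.Conv3d(1, 8, kernel_size=(3, k, k), stride=(1, 2, 2),
                      padding=(1, k // 2, k // 2)),
            nn.BatchNorm3d(8),
            nn.ReLU(inplace=True),
            # 3D max pooling: 2x2 spatial windows and 2 temporal steps.
            nn.MaxPool3d(kernel_size=2, stride=2),
            # Assumed fusion step: pool every branch to a common size so
            # the branch outputs can be concatenated along channels.
            nn.AdaptiveMaxPool3d((1, 8, 8)),
        )

    def forward(self, x):
        return self.net(x)

# The three kernel scales named in the excerpt: 32x32, 8x8, 2x2.
branches = nn.ModuleList(Branch3D(k) for k in (32, 8, 2))
x = torch.randn(1, 1, 3, 64, 64)   # (batch, 1, channels-as-depth, H, W)
fused = torch.cat([b(x) for b in branches], dim=1)
print(fused.shape)                 # torch.Size([1, 24, 1, 8, 8])
```

Each branch sees the same input at a different receptive-field scale; concatenating the pooled branch outputs along the channel axis is one plausible reading of the "fuse them" step mentioned in the second excerpt below.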
“…In Ref. [10], the authors utilized varied convolution kernels in three separate branches of convolution-layer blocks to segment and process differing scales of image-region information collectively, fusing them to enhance the spatial-resolution processing capabilities of the SAR vehicle detection network.…”
Section: Introduction
confidence: 99%