2017
DOI: 10.3390/rs9080860
Contextual Region-Based Convolutional Neural Network with Multilayer Fusion for SAR Ship Detection

Abstract: Synthetic aperture radar (SAR) ship detection has been playing an increasingly essential role in marine monitoring in recent years. The lack of detailed information about ships in wide swath SAR imagery poses difficulty for traditional methods in exploring effective features for ship discrimination. Being capable of feature representation, deep neural networks have achieved dramatic progress in object detection recently. However, most of them suffer from the missing detection of small-sized targets, which mean…


Cited by 270 publications (131 citation statements)
References 28 publications
“…Finally, the performance evaluation on real-scene data is also presented. We compare the proposed method with the iterative censoring CFAR (IC-CFAR) detector [10], Variational Bayesian Inference (VBI) [31], Superpixel-level local information measurement (SLIM) detector [32], and contextual region-based convolutional neural network with multilayer fusion (CRCNN-MF) [15]. The ground truth of the testing areas R1 and R3 are shown with Pauli vector color-coded in Figure 4.…”
Section: Experiments and Discussion
Mentioning confidence: 99%
“…Thus, an index matrix can be obtained to label whether each pixel of the image is a potential target pixel or not. More parameter setting details can be found in [9,15,31,32].…”
Section: Parameter Setting
Mentioning confidence: 99%
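The index matrix described in the excerpt above labels each pixel as a potential target or not. A minimal sketch of such a prescreening step, assuming a simple global intensity threshold (purely illustrative — the cited detectors derive their thresholds from local clutter statistics, e.g. CFAR-style rules, with parameters given in the referenced works):

```python
import numpy as np

def index_matrix(image, k=3.0):
    """Label each pixel as a potential target (1) or clutter (0).

    Illustrative global rule: flag pixels brighter than
    mean + k * std of the whole image. The cited detectors
    instead estimate clutter statistics locally per pixel.
    """
    threshold = image.mean() + k * image.std()
    return (image > threshold).astype(np.uint8)
```

For example, on a dark scene with one bright scatterer, only that pixel is flagged, and the resulting binary matrix can then gate the candidate regions passed to the discrimination stage.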
“…AP, PRC and mAP are three well-known and widely applied indicators to evaluate the performance of object detection methods [34]. PRC can be obtained through four evaluation components: true positive (TP), false positive (FP), false negative (FN) and true negative (TN) [35]. TP and FP indicate the number of targets detected correctly and the number of targets detected incorrectly, respectively.…”
Section: Evaluation Indicators
Mentioning confidence: 99%
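The excerpt above defines the detection metrics from TP/FP/FN counts. A hedged sketch of how precision, recall, and AP (as the area under the PRC) follow from those definitions — function names are illustrative, not taken from the cited papers:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def average_precision(scored_hits, n_gt):
    """AP as the area under the precision-recall curve.

    scored_hits: detections sorted by confidence, each True (TP)
    or False (FP). n_gt: number of ground-truth targets, so
    FN = n_gt - TP at every cut-off.
    """
    tp = fp = 0
    ap = prev_recall = 0.0
    for is_tp in scored_hits:
        if is_tp:
            tp += 1
        else:
            fp += 1
        p, r = precision_recall(tp, fp, n_gt - tp)
        ap += p * (r - prev_recall)  # rectangle rule on the PRC
        prev_recall = r
    return ap
```

For instance, four ranked detections `[TP, TP, FP, TP]` against 4 ground-truth targets give AP = 0.6875; mAP is then the mean of such AP values over classes.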
“…The possibilities for supplementing the system with appropriate methods to support maneuvering decisions taken in an uncertain navigational situation that occurs in a short time in relation to the greater number of objects encountered are described in [7][8][9][10][11][12]. The process of reducing the uncertainty when assessing the real navigational situation of an object by using an artificial neural network is shown in [13][14][15]. Lenart [16] proposed the parameter "time for a safe distance" after detecting dangerous objects as a potentially important parameter, accompanied by the display of possible evasive maneuvers.…”
Section: Introduction
Mentioning confidence: 99%