2000
DOI: 10.1117/1.602495

Synthetic aperture radar automatic target recognition with three strategies of learning and representation

Abstract: This paper describes a new architecture for synthetic aperture radar (SAR) automatic target recognition (ATR) based on the premise that the pose of the target is estimated to a high degree of precision. The advantage of our classifier design is that the input space complexity is decreased with the pose information, which enables fewer features to classify targets with a higher degree of accuracy. Moreover, the training of the classifier can be done discriminantly, which also improves performance and decre…

Cited by 53 publications (42 citation statements). References 36 publications.
“…[1][2][3] For example, the Defense Advanced Research Projects Agency (DARPA) obtained SAR images of ground targets in different poses, which constitute the moving and stationary target acquisition and recognition (MSTAR) public release data set, 4 which many researchers use. [5][6][7][8][9][10][11][12][13][14][15] In the majority of cases, they apply a double-stage approach for object recognition.…”
Section: Introduction (mentioning)
confidence: 99%
“…The baseline for the comparison is the template matching method [36]. For our approach, it can be seen from Table 2 that the misclassification error is 2.1%.…”
Section: -Class Problem (mentioning)
confidence: 96%
“…Combining the goal of small reconstruction error with that of sparseness, the following objective function to be minimized can be arrived at [5].…”
Section: Non-negative Sparse Coding (mentioning)
confidence: 99%
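The quoted passage points to an objective function that the excerpt itself does not reproduce. For orientation only, the usual non-negative sparse coding objective combines a Frobenius-norm reconstruction error with an L1 sparseness penalty; the symbols V (data matrix), W (basis), H (non-negative codes), and \lambda (sparseness weight) below are assumed notation, not taken from the cited paper.

% Minimal sketch of the standard non-negative sparse coding objective;
% V, W, H, and \lambda are assumed symbols, not the cited paper's notation.
\begin{equation}
  \min_{W \ge 0,\; H \ge 0} \; \tfrac{1}{2}\,\lVert V - W H \rVert_F^2 \;+\; \lambda \sum_{i,j} H_{ij}
\end{equation}

Minimizing the first term keeps the reconstruction WH close to the data, while the second term drives most entries of H toward zero, which is the trade-off the excerpt describes.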
“…SVM classifiers are based on the principle of structural risk minimization [5]. Assuming a set of N training samples and labels {v_i, y_i}, i = 1, …, N, the result of training the SVM is the hyperplane decision function…”
Section: SVM (mentioning)
confidence: 99%
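The quoted sentence breaks off before the decision function itself. As a hedged point of reference, the standard form of the trained kernel SVM decision function, written with the training pairs {v_i, y_i} named in the excerpt, is sketched below; the multipliers \alpha_i, the bias b, and the kernel K are the usual SVM quantities and are assumed here rather than reproduced from the cited paper.

% Standard kernel SVM decision function; \alpha_i, b, and K(\cdot,\cdot)
% are the usual (assumed) quantities, not taken verbatim from the source.
\begin{equation}
  f(v) = \operatorname{sgn}\!\left( \sum_{i=1}^{N} \alpha_i\, y_i\, K(v_i, v) + b \right)
\end{equation}

With a linear kernel K(v_i, v) = v_i^{\top} v this reduces to a separating hyperplane, which matches the excerpt's wording.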