1993
DOI: 10.1109/7.249109
A neural network approach to pulse radar detection

Abstract: An approach using a multilayer feedforward neural network to pulse compression is presented. The 13-element Barker code and the maximum-length sequences (m-sequences) with lengths 15, 31, and 63 bits were used as the signal codes, and four networks were implemented, respectively. In each of these networks, the number of input units was the same as the signal length, while the number of hidden units was three and the number of output units was one. In training each of these networks, the backpropagation learn…
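The architecture the abstract describes (one input unit per code bit, three hidden units, one output unit, trained with backpropagation) can be sketched as follows. The training setup used here (aligned Barker-13 code → target 1, circular shifts → target 0) and all hyperparameters are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

# 13-element Barker code, one of the signal codes in the paper
BARKER13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_training_set(code):
    """Assumed setup: aligned code -> 1, each circular shift -> 0."""
    n = len(code)
    X = np.array([np.roll(code, k) for k in range(n)])
    y = np.array([1.0] + [0.0] * (n - 1))
    return X, y

def train(X, y, hidden=3, lr=0.5, epochs=5000, seed=0):
    """Batch backpropagation on a 13-3-1 sigmoid network (squared error)."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));    b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)            # hidden layer (3 units)
        o = sigmoid(h @ W2 + b2).ravel()    # single output unit
        d_o = ((o - y) * o * (1 - o))[:, None]   # output delta
        d_h = (d_o @ W2.T) * h * (1 - h)         # hidden deltas
        W2 -= lr * h.T @ d_o; b2 -= lr * d_o.sum(0)
        W1 -= lr * X.T @ d_h; b1 -= lr * d_h.sum(0)
    return W1, b1, W2, b2

def detect(x, params):
    W1, b1, W2, b2 = params
    return sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2).item()

X, y = make_training_set(BARKER13)
params = train(X, y)
# the aligned code should score high; a shifted copy should score low
print(detect(BARKER13, params), detect(np.roll(BARKER13, 5), params))
```

Because the periodic autocorrelation of Barker-13 is 13 at zero shift and 1 elsewhere, the task is linearly separable, so even this tiny network converges quickly.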

Cited by 47 publications (23 citation statements)
References 5 publications
“…A novel approach using a multilayer feedforward network was proposed by Kwan and Lee in [4], which resulted in robust performance compared to the previously mentioned techniques. The method used for training the network was the backpropagation (BP) algorithm, and the signal code used was the Barker code, which has the sequence [1, 1, 1, 1, 1, −1, −1, 1, 1, −1, 1, −1, 1].…”
Section: Introduction
confidence: 98%
“…If the network does not reach a solution, more hidden nodes may be needed; if it does, we can try to eliminate a node. On the other hand, a large number of neurons may mean that the network is simply memorizing the training sets, which implies very little generalisation and increased computing time [14]. -Training data: We could use all available patterns to train the network, but this is neither necessary nor advisable.…”
Section: Results of the ANNs
confidence: 99%
“…This property was first exploited in [5], and subsequently by several researchers [6]- [10]. The objective in these papers is to make the ANN approximate an ideal autocorrelation sequence; i.e., peak when the autocorrelation lag is zero, and zero elsewhere.…”
Section: Introduction
confidence: 99%
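The ideal-autocorrelation target mentioned in the statement above can be checked directly for the Barker-13 code: its aperiodic autocorrelation peaks at 13 at zero lag, and every sidelobe has magnitude at most 1. A minimal check (NumPy is used here as a convenient tool, not taken from the cited works):

```python
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

# aperiodic autocorrelation: r[k] = sum_n c[n] * c[n+k]
r = np.correlate(barker13, barker13, mode="full")

peak = r[len(barker13) - 1]              # zero-lag value (index 12 of 25)
sidelobes = np.delete(r, len(barker13) - 1)
print(peak, np.abs(sidelobes).max())     # prints: 13 1
```

This 13:1 peak-to-sidelobe ratio is the property the ANN-based detectors try to approximate, pushing the response toward a single peak at zero lag and zero elsewhere.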