1995
DOI: 10.1109/8.475924

Direction finding in phased arrays with a neural network beamformer

Cited by 125 publications (47 citation statements)
References 5 publications
“…where w_1, w_2, …, w_n are the complex weights, proportional to the complex current phasors I_1, I_2, …, I_n, respectively. To form a single resultant beam, the complex weights w_1, w_2, …, w_n must be optimized so that the resultant field matches a desired single-beam function f.…”
Section: Adaptive Array Model
confidence: 99%
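The statement above describes weighting the element currents so that the summed field reproduces a desired single-beam pattern. A minimal numerical sketch of that idea follows, assuming a uniform half-wavelength linear array and a least-squares fit of the weights to an arbitrary Gaussian-shaped target beam; the array size, angle grid, and target shape are illustrative choices, not values from the cited work.

```python
import numpy as np

# Minimal sketch (not the paper's method): fit complex element weights w_n of a
# uniform linear array so its field pattern approximates a desired single-beam
# function f(theta). Geometry, sample grid, and the Gaussian-shaped f are
# illustrative assumptions.

n_elements = 8
d = 0.5                                              # element spacing in wavelengths
theta = np.linspace(-np.pi / 2, np.pi / 2, 181)      # look angles (rad)

# Steering matrix: A[m, n] = exp(j * 2*pi * d * n * sin(theta_m))
n_idx = np.arange(n_elements)
A = np.exp(1j * 2 * np.pi * d * np.outer(np.sin(theta), n_idx))

# Desired single beam pointed at 20 degrees (arbitrary example shape).
theta0 = np.deg2rad(20.0)
f_desired = np.exp(-((theta - theta0) ** 2) / (2 * np.deg2rad(5.0) ** 2))

# Least-squares choice of the complex weights: minimize ||A w - f||.
w, *_ = np.linalg.lstsq(A, f_desired.astype(complex), rcond=None)

pattern = np.abs(A @ w)
print("peak of synthesized pattern at",
      np.rad2deg(theta[np.argmax(pattern)]), "degrees")
```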
“…Neural networks are used in adaptive antenna signal processing [1], [2] because of their general-purpose nature, fast convergence rate, and suitability for large-scale integration implementations. The goal of neural network training is to minimize the difference between the output data and the target data.…”
confidence: 99%
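As a concrete reading of the training goal mentioned in that excerpt, the following sketch trains a small feed-forward network by gradient descent on the mean-squared error between its output and the target data; the layer sizes, toy data set, and learning rate are assumptions made only for illustration.

```python
import numpy as np

# Minimal sketch of the stated objective: adjust a small feed-forward network
# so that the mean-squared error between its output and the target data shrinks.
# Layer sizes, data, and learning rate are illustrative, not from the cited work.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))              # input samples
T = np.sin(X.sum(axis=1, keepdims=True))   # target data (toy mapping)

W1 = rng.normal(scale=0.5, size=(4, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)
lr = 0.05

for epoch in range(500):
    # Forward pass.
    H = np.tanh(X @ W1 + b1)
    Y = H @ W2 + b2
    err = Y - T                            # difference between output and target
    loss = np.mean(err ** 2)

    # Backward pass (gradients of the MSE loss).
    dY = 2 * err / len(X)
    dW2 = H.T @ dY
    db2 = dY.sum(axis=0)
    dH = dY @ W2.T * (1 - H ** 2)
    dW1 = X.T @ dH
    db1 = dH.sum(axis=0)

    # Gradient-descent update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("final MSE:", loss)
```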
“…Neural network approaches have also been used in beamforming [33]–[39]. Zaman et al. utilized a GA hybridized with a pattern search for DOA analysis [40].…”
Section: Beamforming
confidence: 99%
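The hybrid strategy credited to Zaman et al. (a genetic algorithm followed by a pattern search) can be illustrated, under heavy simplification, on a one-parameter DOA problem: a coarse genetic-style search over the arrival angle is refined by a shrinking-step pattern search around the best candidate. The array model, cost function, and GA settings below are my own toy choices and are not taken from [40].

```python
import numpy as np

# Hedged illustration of the hybrid idea cited above (coarse genetic search
# followed by a local pattern search) applied to a toy single-source DOA cost.
# Array size, snapshots, GA settings, and the cost itself are assumptions.

rng = np.random.default_rng(1)
n_elem, d, true_doa = 8, 0.5, np.deg2rad(12.0)

def steering(theta):
    return np.exp(1j * 2 * np.pi * d * np.arange(n_elem) * np.sin(theta))

# Simulated snapshots: one source plus noise.
snapshots = (steering(true_doa)[None, :] * rng.normal(size=(100, 1))
             + 0.1 * (rng.normal(size=(100, n_elem))
                      + 1j * rng.normal(size=(100, n_elem))))

def cost(theta):
    # Negative conventional-beamformer output power (to be minimized).
    return -np.mean(np.abs(snapshots @ steering(theta).conj()) ** 2)

# Stage 1: small genetic-style search over the arrival angle.
pop = rng.uniform(-np.pi / 2, np.pi / 2, size=20)
for _ in range(30):
    fitness = np.array([cost(t) for t in pop])
    parents = pop[np.argsort(fitness)[:10]]                # keep the best half
    children = parents + rng.normal(scale=0.05, size=10)   # mutated offspring
    pop = np.concatenate([parents, children])
best = pop[np.argmin([cost(t) for t in pop])]

# Stage 2: pattern search refines the best GA candidate.
step = np.deg2rad(1.0)
while step > 1e-6:
    trial = min((best - step, best, best + step), key=cost)
    if trial == best:
        step /= 2.0          # no improvement: shrink the step
    else:
        best = trial
print("estimated DOA:", np.rad2deg(best), "degrees")
```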
“…The constant term is a scalar bias that needs to be added since the nonlinear estimator may be biased. The parameter vector always lies in the subspace spanned by the training data and, according to the representer theorem [37], it can be constructed as a linear combination of the given training data pairs (8). Then, estimator (7) can be rewritten as (9). In this context, the entries of the parameter vector are called the primal parameters, and the coefficients of this expansion are the dual ones.…”
Section: Estimators in Reproducing Kernel Hilbert Subspaces
confidence: 99%
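The excerpt refers to the standard primal–dual picture given by the representer theorem: the parameter vector is expanded over the training data, so the estimator reduces to a weighted sum of kernel evaluations plus a bias. A generic kernel ridge regression sketch of that dual form is shown below; the kernel, regularization, and data are assumptions, and the sketch is not claimed to be the estimator of the citing paper.

```python
import numpy as np

# Generic sketch of the dual (representer-theorem) form: the parameter vector is
# expanded over the training data, so the estimator becomes
#   y_hat(x) = sum_i alpha_i * K(x_i, x) + b.
# Kernel choice, ridge parameter, and toy data are assumptions (kernel ridge
# regression, not necessarily the citing paper's estimator).

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(50, 1))                   # training inputs x_i
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)        # training targets y_i

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 1e-2                                             # ridge regularization
K = rbf_kernel(X, X)
b = y.mean()                                           # scalar bias term
# Dual parameters alpha: solve (K + lam*I) alpha = y - b.
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y - b)

def predict(X_new):
    # Estimator in dual form: linear combination of kernels on training data.
    return rbf_kernel(X_new, X) @ alpha + b

X_test = np.array([[0.0], [1.5]])
print(predict(X_test))
```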
“…This feature decreases estimation errors or bit error rates. Neural networks [4] have been proposed for beamforming (e.g., [5]–[7]) and direction-of-arrival estimation (e.g., [8], [9]), among other array processing tasks.…”
confidence: 99%