1997
DOI: 10.1109/74.646800
The analogy between the Butler matrix and the neural-network direction-finding array

Cited by 6 publications (7 citation statements)
References 1 publication
“…The antenna consists of the feeding line, the transition, and the radiating structures. In general, the radiating structure is exponentially or elliptically tapered, which means that it is composed of a two-layer structure [10, 17]. …”
Section: Antipodal Vivaldi Antenna
Mentioning confidence: 99%
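For reference, the exponential taper mentioned in this excerpt is conventionally written as y(x) = c1·e^(R·x) + c2, with c1 and c2 fixed by the flare's end points. Below is a minimal sketch of that profile; the opening rate and dimensions are hypothetical illustrations, not values from the cited works.

```python
import numpy as np

def exponential_taper(x, y0, y1, rate):
    """Exponential taper y(x) = c1 * exp(rate * x) + c2.

    c1 and c2 are chosen so the curve passes through (x[0], y0)
    and (x[-1], y1), i.e. the feed and aperture half-widths.
    """
    c1 = (y1 - y0) / (np.exp(rate * x[-1]) - np.exp(rate * x[0]))
    c2 = y0 - c1 * np.exp(rate * x[0])
    return c1 * np.exp(rate * x) + c2

# Hypothetical dimensions in millimetres: a 40 mm flare opening
# from 0.5 mm at the feed to 20 mm at the aperture.
x = np.linspace(0.0, 40.0, 201)
y = exponential_taper(x, y0=0.5, y1=20.0, rate=0.1)
```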
“…Nevertheless, CVANNs have some advantages over conventional ANNs, such as the number of layers needed in each case: real-valued ANNs (RVANNs) often require one or more hidden layers, mainly when the problem to be solved is nonlinear [1], as in problems related to the radiation of antenna arrays [2][3][4][8], whereas CVANNs require fewer layers (for example, no hidden layers are used in the examples presented here). Some complications related to nonlinearity are avoided when using CVANNs [5,6].…”
Section: Advantages and Drawbacks of CVANNs Applied to the Present Cases
Mentioning confidence: 99%
“…A number of additional features were further simplified in the example presented in this work, such as the computational time of the training process, which was reduced to a matrix (pseudo) inversion. The simplification of the transfer functions was also advantageous: the ones used in RVANNs are related to exponential, hyperbolic tangent, or sigmoid functions for hidden layers (that is, the layers inserted between the first and last layers of the neural network, often called the input and output layers, respectively) [1][2][3][4][8], whose selection always implies additional computational effort; the CVANNs presented in this work use the identity as the only transfer function. Finally, the number of neurons in the CVANN presented here is equal to the number of outputs; therefore, no trial-and-error selection of the architecture is needed, as is usually the case with RVANNs [1].…”
Section: Advantages and Drawbacks of CVANNs Applied to the Present Cases
Mentioning confidence: 99%
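The training scheme described in this excerpt, a single complex-valued layer with an identity transfer function fitted by one matrix pseudo-inversion, amounts to a complex least-squares fit. The sketch below illustrates that reduction with NumPy; the array sizes and synthetic data are hypothetical and make no claim to reproduce the cited implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 complex antenna-element signals in,
# 2 complex outputs (e.g. direction-finding quantities).
n_samples, n_in, n_out = 100, 8, 2

# Complex-valued training data (placeholders for measured signals).
X = rng.standard_normal((n_samples, n_in)) + 1j * rng.standard_normal((n_samples, n_in))
W_true = rng.standard_normal((n_in, n_out)) + 1j * rng.standard_normal((n_in, n_out))
Y = X @ W_true  # identity transfer function: outputs are linear in the inputs

# "Training" is a single Moore-Penrose pseudo-inversion: it yields the
# least-squares weights of a one-layer CVANN with identity activation,
# with one output neuron per output quantity.
W = np.linalg.pinv(X) @ Y

# The trained network is just a complex matrix product.
Y_hat = X @ W
print(np.allclose(Y_hat, Y))  # True up to numerical precision
```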