2006
DOI: 10.1109/lawp.2006.870366
High-accuracy neural-network-based array synthesis including element coupling

Cited by 15 publications (13 citation statements)
References 4 publications
“…But these criteria are not accurate enough. Mutual coupling (MC) is defined in [27][28][29]. The coupling between two transmitter antennas is defined in [24] as:…”
Section: Dipole Array Design For Bearable Coupling Between Adjacent Elements
confidence: 99%
“…The use of machine learning (ML) in advanced computational electromagnetics and related applications was initiated long ago [13][14][15][16]. Artificial neural networks (ANNs) have been proposed for array synthesis [17], source reconstruction [18], NF-to-FF transformation [19], etc. Owing to the recent blooming of learning technologies, convolutional neural networks (ConvNets) [20,21] have become one of the most important new methods in deep learning applications.…”
Section: Introduction
confidence: 99%
“…The advantages of the proposed method are: (1) the proposed ConvNet model allows calculation with far fewer field samples than the conventional SRM; (2) by avoiding inverse solving, the proposed method does not have to handle a singular numerical system; (3) the proposed ConvNet approach can exploit additional field information for reconstruction; (4) the proposed method has satisfactory accuracy and superior performance over traditional neural networks [17][18][19]. Compared to traditional neural networks (NNs) [17][18][19][24], ConvNets map the relations between inputs and outputs more efficiently, mainly through convolutional and activation layers [20,21], and do not need a very large number of neural units to handle problems such as source reconstruction.…”
Section: Introduction
confidence: 99%
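The excerpt above contrasts ConvNets with traditional fully connected networks for mapping sampled field data to source parameters. As a minimal sketch of that idea only (not the cited authors' model; the 2-D field-sample grid, layer sizes, and number of source coefficients are illustrative assumptions), a small PyTorch ConvNet of this kind could look like:

```python
# Minimal sketch: map a 2-D grid of sampled field values to a vector of
# source coefficients. All shapes and layer sizes are illustrative
# assumptions, not the architecture of the cited works.
import torch
import torch.nn as nn

class FieldToSourceConvNet(nn.Module):
    def __init__(self, n_sources: int = 16):
        super().__init__()
        # Convolution + activation layers extract local spatial patterns
        # from the field samples with far fewer parameters than a fully
        # connected network of comparable input size.
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # A small fully connected head regresses the source coefficients.
        self.head = nn.Linear(16 * 4 * 4, n_sources)

    def forward(self, field_samples: torch.Tensor) -> torch.Tensor:
        # field_samples: (batch, 1, H, W) grid of measured field values
        x = self.features(field_samples)
        return self.head(x.flatten(start_dim=1))

# Example: a 32x32 grid of near-field samples -> 16 source coefficients.
model = FieldToSourceConvNet(n_sources=16)
prediction = model(torch.randn(2, 1, 32, 32))
print(prediction.shape)  # torch.Size([2, 16])
```

The point of the sketch is the parameter economy the excerpt mentions: the convolutional layers share weights across the sample grid, so the network does not need one neural unit per field sample as a fully connected mapping would.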
“…Two major data-learning approaches have been proposed: the empirical risk minimization (ERM) principle and the structural risk minimization (SRM) principle. The best-known example of the ERM principle is the artificial neural network. When ERM is applied, the learning method seeks a function that perfectly relates the available inputs to their corresponding outputs.…”
Section: Introduction
confidence: 99%
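The ERM principle described in this excerpt amounts to choosing the hypothesis that minimizes the average loss over the available input/output pairs. As a small, self-contained illustration (the linear hypothesis class, squared loss, and synthetic data below are assumptions chosen only to make the principle concrete):

```python
# Empirical risk minimization (ERM) sketch: pick the parameters that
# minimize the average loss over the observed input/output pairs.
# Linear hypothesis class and squared loss are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # available inputs
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)   # corresponding outputs

w = np.zeros(3)   # hypothesis parameters
lr = 0.1
for _ in range(500):
    residual = X @ w - y
    empirical_risk = np.mean(residual ** 2)    # average loss on the data
    grad = 2.0 * X.T @ residual / len(y)       # gradient of the empirical risk
    w -= lr * grad                             # descend the empirical risk

print("estimated parameters:", w)  # close to w_true
```

SRM, by contrast, would add a term or constraint controlling hypothesis complexity rather than minimizing the training loss alone; that distinction is the one the citing paper draws.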