2007
DOI: 10.1109/tmag.2007.892480

Microwave Devices and Antennas Modelling by Support Vector Regression Machines

Cited by 123 publications (54 citation statements)
References 9 publications
Citing publications span 2009–2022
“…Theoretically SIW resonances are the complex frequencies f for which Equation 2 (Angiulli, 2007;Angiulli et al, 2007;2009…”
Section: Ajas (mentioning)
confidence: 99%
“…Higher-level properties of input characteristics have also been modelled. Such studies have dealt with resonant frequency modelling by neural-network-based methods [7,8] and GPR [9]; resonant input impedance modelling using support vector machines [10]; and impedance bandwidth modelling by neural networks (e.g. [11]).…”
Section: Introduction (mentioning)
confidence: 99%
“…Even so, it appears as if there have been hardly any attempts to formally construct surrogate models that can account for such effects (e.g. none of [2][3][4][5][6][7][8][9][10][11][12] included substrate/ground-plane dimensions as model input variables; in [17] however, a neural network was used to model self-and mutual-admittances of a monopole array as they varied with ground-plane dimensions, amongst other variables).…”
Section: Introduction (mentioning)
confidence: 99%
“…Even for the multilayer perceptron (MLP), which is ubiquitous in its application to supervised learning problems, the number of hidden units that should be chosen often is an open question, relying for a solution on a trial-and-error approach or experience [12,13]. While adaptive techniques have been proposed for deleting or adding hidden units to a neural network during training [14], this adds further complexity to the modeling process.…”
Section: Introduction (mentioning)
confidence: 99%
“…One reason is that a Gaussian process model requires training of far fewer parameters (in the order of the dimension of the input vectors) than, e.g., a MLP with one hidden layer, where the number of weights to be learned typically is in the order of N i ×N h +N h ×N o , with N i , N h , and N o the number of input, hidden, and output nodes respectively. A Gaussian process is an instance of a so-called kernel machine; it is differentiated by its probabilistic basis from the support vector regression machine (SVRM), another kernel machine that has very recently found application in antenna-related problems (e.g., [13]). …”
Section: Introduction (mentioning)
confidence: 99%
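The parameter-count comparison in the excerpt above can be illustrated with a short sketch. The function names and the assumption of a Gaussian process with an ARD (one length-scale per input) kernel are ours, not the cited paper's; the MLP count follows the N_i×N_h + N_h×N_o formula quoted in the excerpt:

```python
# Compare trainable-parameter counts: a one-hidden-layer MLP versus a
# Gaussian process (GP) with an ARD kernel.  Per the excerpt, an MLP
# needs on the order of N_i*N_h + N_h*N_o weights, while a GP needs
# hyperparameters on the order of the input dimension.

def mlp_weight_count(n_in, n_hidden, n_out, biases=True):
    """Number of weights in a single-hidden-layer MLP."""
    w = n_in * n_hidden + n_hidden * n_out
    if biases:
        w += n_hidden + n_out  # one bias per hidden and output node
    return w

def gp_ard_hyperparam_count(n_in):
    """Hyperparameters of a GP with an ARD squared-exponential kernel:
    one length-scale per input, plus signal and noise variances."""
    return n_in + 2

# Example: a hypothetical 5-input, 1-output antenna model, 20 hidden units.
print(mlp_weight_count(5, 20, 1))    # 141 parameters (with biases)
print(gp_ard_hyperparam_count(5))    # 7 hyperparameters
```

Even for this small model, the GP's hyperparameter count grows only with the input dimension, while the MLP's weight count also scales with the (freely chosen) hidden-layer width.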