2022
DOI: 10.3390/electronics11193067
Methods for Model Complexity Reduction for the Nonlinear Calibration of Amplifiers Using Volterra Kernels

Abstract: Volterra models allow modeling nonlinear dynamical systems, even though they require the estimation of a large number of parameters and have, consequently, potentially large computational costs. The pruning of Volterra models is thus of fundamental importance to reduce the computational costs of nonlinear calibration, and improve stability and speed, while preserving accuracy. Several techniques (LASSO, DOMP and OBS) and their variants (WLASSO and OBD) are compared in this paper for the experimental calibratio…



Cited by 1 publication (2 citation statements); references 32 publications (63 reference statements).
“…On the other hand, the Optimal Brain Surgeon (OBS) algorithm [18], [21], [22] starts from the full model and removes one variable at a time, selecting the one that influences the residual error the least. The impact of each variable on the error depends on the curvature of the error curve and on the value of the parameter, so variables that are either small or have small curvature are removed first.…”
Section: A. Pruning via OMP and OBS (citation type: mentioning)
confidence: 99%
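The OBS selection rule quoted above (parameter value combined with error-surface curvature) can be sketched for a linear-in-parameters model, such as a truncated Volterra model, with a squared-error cost. This is a minimal illustration, not the paper's implementation: the function name and the explicit least-squares setting are assumptions. For E(θ) = ||y − Xθ||², the Hessian is H = 2XᵀX and the classic OBS saliency of parameter q is θ_q² / (2·[H⁻¹]_qq), the predicted error increase if that parameter is pruned and the rest are optimally adjusted.

```python
import numpy as np

def obs_saliencies(X, theta):
    """OBS saliencies for a linear-in-parameters model with squared-error cost.

    X     : (n_samples, n_params) regressor matrix (e.g. Volterra kernel terms)
    theta : (n_params,) current parameter estimate
    Returns the predicted error increase for pruning each parameter;
    the parameter with the smallest saliency is removed first.
    """
    H = 2.0 * X.T @ X                 # Hessian of the squared-error cost
    Hinv = np.linalg.inv(H)           # assumes H is well-conditioned
    # Saliency: small parameters and/or small curvature => small saliency
    return theta**2 / (2.0 * np.diag(Hinv))
```

Note that a zero-valued parameter always gets zero saliency, so it is pruned first regardless of curvature, matching the rule described in the citation.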
“…The two techniques operate in opposite directions: OMP grows from the simplest to the most complex model, while OBS shrinks from the complete to the simplest model. The best technique is the one that selects the minimum-error pruned model for a given complexity: in general, neither technique outperforms the other at all desired pruning levels, and combining the two yields a better approximation of the complexity–accuracy trade-off [18].…”
Section: A. Pruning via OMP and OBS (citation type: mentioning)
confidence: 99%
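The opposite, "growing" direction mentioned in the statement can be sketched as plain orthogonal matching pursuit: start from the empty model and greedily add the regressor most correlated with the current residual, refitting by least squares at each step. This is a hedged sketch of generic OMP under a least-squares assumption, not the DOMP variant the paper evaluates; the function name is illustrative.

```python
import numpy as np

def omp(X, y, n_terms):
    """Greedy forward selection (OMP): grow the model one regressor at a time.

    X       : (n_samples, n_params) regressor matrix
    y       : (n_samples,) target output
    n_terms : number of terms to keep in the pruned model
    Returns the selected column indices and their least-squares coefficients.
    """
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(n_terms):
        # Pick the unused column most correlated with the current residual
        corr = np.abs(X.T @ residual)
        corr[support] = -np.inf
        support.append(int(np.argmax(corr)))
        # Refit by least squares on the selected support, then update residual
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef
    return support, coef
```

Running OMP up to each model size and OBS down from the full model, then keeping whichever gives the lower error at each complexity, is one simple way to realize the combination the citation describes.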