2003
DOI: 10.1016/s0888-613x(03)00021-5

A survey on universal approximation and its limits in soft computing techniques

Abstract: This paper deals with the approximation behaviour of soft computing techniques. First, we give a survey of the results of universal approximation theorems achieved so far in various soft computing areas, mainly in fuzzy control and neural networks. We point out that these techniques have common approximation behaviour in the sense that an arbitrary function of a certain set of functions (usually the set of continuous functions, C) can be approximated with arbitrary accuracy ε on a compact domain. The drawback o…
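
Stated in the usual sup-norm form (a standard formulation added here for clarity, not quoted from the paper), the universal approximation property says that for every continuous target function on a compact domain K and every accuracy ε > 0 there is a model F in the given soft computing class that is uniformly ε-close to it:

```latex
% Universal approximation on a compact domain K \subset \mathbb{R}^n,
% where \mathcal{F} is the chosen soft computing model class
\forall f \in C(K),\ \forall \varepsilon > 0,\ \exists F \in \mathcal{F}:\quad
\sup_{x \in K} \bigl| f(x) - F(x) \bigr| < \varepsilon
```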

Cited by 99 publications (41 citation statements)
References 45 publications

"…Universal approximator [10]. Number of rules grows exponentially for increased accuracy [10]. Suffers from existential theorem [18]. Difficulty in interpreting the results of the defuzzification process [19].…"
Section: Fuzzy (mentioning)
confidence: 99%
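
The exponential growth mentioned in this excerpt can be made concrete with a standard counting argument (added here for illustration, not taken from the cited excerpt): a complete grid rule base over n inputs with m membership functions per input contains m^n rules, so refining every input partition to gain accuracy multiplies the rule count across all inputs at once.

```latex
% Size of a complete grid rule base
N_{\text{rules}} = m^{\,n}, \qquad \text{e.g. } n = 6,\ m = 7 \ \Rightarrow\ 7^{6} = 117\,649 \text{ rules}
```
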
"…In addition, to overcome the dual DP curse of dimensionality and modeling, DPS problems can be solved using multiobjective evolutionary algorithms (MOEAs) to obtain an approximation of the Pareto front in a single run of the algorithm. The effectiveness of this approach depends on the flexibility of the selected class of functions used to define the policy (Tikk et al, 2003) and on the ability of the optimization algorithm to deal with a large number of objectives. In this work, we use Gaussian Radial Basis Functions (RBFs) to parameterize the policies as they are capable of representing functions for a large class of problems (Busoniu et al, 2011).…"
Section: Multi-objective Problem Formulation (mentioning)
confidence: 99%
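
As a rough illustration of the RBF policy parameterization mentioned in this excerpt, the sketch below evaluates a policy as a weighted sum of Gaussian radial basis functions. It is a minimal sketch; the function name, parameter names, and all numeric values are hypothetical and not taken from the cited works.

```python
import numpy as np

def rbf_policy(x, centers, widths, weights):
    """Evaluate a policy parameterized by Gaussian radial basis functions.

    x       : (d,) state/input vector
    centers : (k, d) RBF centers
    widths  : (k,) isotropic width of each basis function
    weights : (k,) linear output weights
    Returns the scalar policy output.
    """
    # Gaussian activations phi_i(x) = exp(-||x - c_i||^2 / (2 * w_i^2))
    sq_dist = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-sq_dist / (2.0 * widths ** 2))
    # Policy output is a weighted sum of the basis activations
    return float(weights @ phi)

# Illustrative example: a 2-D state and 3 basis functions
centers = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0]])
widths = np.array([0.5, 0.7, 0.6])
weights = np.array([0.2, -0.1, 0.4])
print(rbf_policy(np.array([0.3, 0.4]), centers, widths, weights))
```

In an evolutionary setting such as the one described in the excerpt, the centers, widths, and weights would form the decision vector that the MOEA optimizes.
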
"…The model produces another error, called the test error, which is normally higher than the training error. Tikk et al (2003) and Mhaskar (1996) have provided detailed surveys of the evolution of approximation theory and the use of neural networks for approximation purposes. Both papers provide the mathematical background that other researchers have built on when studying the approximation properties of various neural networks.…"
Section: Creation Of Model - The Training Process (mentioning)
confidence: 99%
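
The training-error versus test-error distinction in this excerpt can be shown with a small held-out-data experiment. This is a minimal sketch under assumed toy data (a noisy sine target fitted by a polynomial), not code from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy samples of a smooth target function
x = rng.uniform(-1.0, 1.0, size=200)
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(x.shape)

# Hold out part of the data so the test error is measured on unseen points
x_train, y_train = x[:150], y[:150]
x_test, y_test = x[150:], y[150:]

# Fit a flexible approximator (here: a degree-9 polynomial) on the training set
coeffs = np.polyfit(x_train, y_train, deg=9)

def mse(xs, ys):
    """Mean squared error of the fitted polynomial on the given points."""
    return float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))

print("training error:", mse(x_train, y_train))
print("test error    :", mse(x_test, y_test))  # typically at least as large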