Encoding a priori information in feedforward networks (1991)
DOI: 10.1016/0893-6080(91)90063-b

Cited by 55 publications (23 citation statements)
References 6 publications
“…Unfortunately, it has also been common practice that the lack of transparency of such black-box approaches has led to neglect of the available prior knowledge, and the resulting ill-conditioning has been handled implicitly using regularization methods like stopped training (Sjöberg and Ljung 1992, Ljung, Sjöberg and McKelvey 1993). However, simple explicit regularization methods like default models (Thompson and Kramer 1994, Su et al. 1992, Kramer et al. 1992, Johansen and Foss 1992c), constraints (Joerding and Meador 1991, Thompson and Kramer 1994), penalty on parameter magnitude (Weigend, Huberman and Rumelhart 1990), and smoothness regularization (Bishop 1991, Girosi et al. 1994) have also been suggested. The optimization framework presented here will be useful for regularizing such complex parameter identification problems, and is useful as a complementary technique applied together with other approaches such as neural networks, in order to reduce the sensitivity with respect to the a priori choice of model structure.…”
Section: Discussion
confidence: 99%
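The "penalty on parameter magnitude" named in this excerpt is the easiest of these regularizers to make concrete. A minimal sketch in Python, assuming a squared-error loss and a plain squared-weight penalty (Weigend, Huberman and Rumelhart's actual weight-elimination term has a different functional form); the names `lam` and `penalized_loss` are illustrative, not taken from any of the cited papers:

```python
import numpy as np

def mse_loss(y_pred, y_true):
    # Ordinary data-fit term: mean squared prediction error.
    return np.mean((y_pred - y_true) ** 2)

def penalized_loss(y_pred, y_true, weights, lam=1e-3):
    # Magnitude penalty: sum of squared entries over all weight arrays.
    # lam trades data fit against weight shrinkage.
    penalty = sum(np.sum(w ** 2) for w in weights)
    return mse_loss(y_pred, y_true) + lam * penalty
```

Minimizing `penalized_loss` rather than `mse_loss` shrinks the weights toward zero, which is one implicit way of handling the ill-conditioning the excerpt describes.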
“…Constraints involving inputs are called infinite constraints because they must hold for all (infinitely many) possible input combinations. Joerding and Meador (1991) transformed infinite constraints into finite constraints, which involve only the network weights. In this way, they enforced monotonicity, convexity, or concavity on a network output with respect to the network inputs.…”
Section: Introduction
confidence: 99%
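The weight-space transformation described above can be illustrated for the monotonicity case. A minimal sketch under an assumed single-hidden-layer architecture, not Joerding and Meador's exact construction: because the sigmoid is increasing, forcing every weight to be non-negative (here by taking absolute values) makes the output non-decreasing in each input, so a finite constraint on the weights replaces the infinite constraint over all inputs.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def monotone_forward(x, W1, b1, w2, b2):
    # Reparameterize by magnitude so both layers have non-negative weights.
    W1 = np.abs(W1)
    w2 = np.abs(w2)
    h = sigmoid(W1 @ x + b1)   # increasing activation preserves monotonicity
    return w2 @ h + b2
```

Each partial derivative of the output is then a sum of terms of the form w2[k] * sigmoid'(.) * W1[k, j], all non-negative, so monotonicity holds without checking any of the infinitely many input combinations.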
“…The network estimated the specific growth rate, which was input into the component mass balances. Joerding and Meador (1991) used the parametric model as a normalization post-processor to force the outputs of the network to sum to one, which is important when estimating quantities such as component mole fractions in mixtures or the probabilities of mutually exclusive events.…”
Section: Introduction
confidence: 99%
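A minimal sketch of such a sum-to-one post-processor (illustrative; the paper's exact normalization may divide the raw outputs directly rather than exponentiate them first):

```python
import numpy as np

def normalize_outputs(raw):
    # Exponentiate so every entry is positive, then renormalize; the result
    # sums to one and can be read as mole fractions or probabilities of
    # mutually exclusive events. This variant is the familiar softmax.
    pos = np.exp(raw - raw.max())   # subtract max for numerical stability
    return pos / pos.sum()

print(normalize_outputs(np.array([0.2, 1.5, -0.3])))  # entries sum to 1.0
```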
“…These conditions can come from information or beliefs about the data generating process a network seeks to model, Joerding and Meador (1991), Joerding, Li, Hu, and Meador (1992), or may arise from their ability to improve network generalization, Bishop (1990), Psaltis and Neifield (1988). Geman, Bienenstock, and Doursat (1992) argue for imposing a priori constraints on neural networks as a way to circumvent the tradeoff between bias and variance in nonparametric estimation.…”
Section: Introduction
confidence: 98%