Bayesian approach for neural networks—review and case studies (2001)
DOI: 10.1016/s0893-6080(00)00098-8

Cited by 282 publications (166 citation statements)
References 23 publications
“…These works are reviewed in Bishop (1995), MacKay (1995) and Lampinen and Vehtari (2001). Unlike standard neural network design, the Bayesian approach considers probability distributions in the weight space of the network.…”
Section: Introduction (mentioning)
confidence: 99%
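The statement above contrasts standard (point-estimate) network training with the Bayesian treatment, in which the weights carry a probability distribution rather than a single fitted value. The following is a minimal illustrative sketch, not code from the cited paper: it places an independent Gaussian prior over the weights of a one-hidden-layer MLP and draws prior predictive samples, so the network output at each input is itself a distribution. All names, shapes, and the prior scale are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, w1, b1, w2, b2):
    """One-hidden-layer MLP: output = tanh(x @ w1 + b1) @ w2 + b2."""
    h = np.tanh(x @ w1 + b1)   # hidden-unit activations
    return h @ w2 + b2         # network output

def sample_weights(d_in=1, n_hidden=10, sigma=1.0):
    """Draw one weight set from an independent Gaussian prior (assumed scale sigma)."""
    return (sigma * rng.standard_normal((d_in, n_hidden)),   # input-to-hidden weights
            sigma * rng.standard_normal(n_hidden),            # hidden biases
            sigma * rng.standard_normal((n_hidden, 1)),       # hidden-to-output weights
            sigma * rng.standard_normal(1))                    # output bias

# Prior predictive distribution: each draw of the weights defines a different
# network function, so predictions are summarized by a mean and a spread
# rather than a single number.
x = np.linspace(-3.0, 3.0, 50).reshape(-1, 1)
preds = np.stack([mlp_forward(x, *sample_weights()) for _ in range(200)])
print(preds.mean(axis=0).shape, preds.std(axis=0).shape)  # (50, 1) and (50, 1)
```

In the full Bayesian approach reviewed by Lampinen and Vehtari (2001), the prior is combined with the data likelihood and predictions are averaged over the posterior distribution of the weights (for example by Markov chain Monte Carlo), rather than over the prior as in this sketch.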
“…For example: (1) seismic event classification (Dystart and Pulli, 1990), (2) well log analysis (Aristodemou et al., 2005; Maiti et al., 2007; Maiti and Tiwari, 2007, 2010b), (3) first arrival picking (Murat and Rudman, 1992), (4) earthquake prediction (Feng et al., 1997), (5) inversion (Raiche, 1991; Devilee et al., 1999), (6) parameter estimation in geophysics (Macias et al., 2000), (7) prediction of aquifer water level (Coppola Jr. et al., 2005), (8) magneto-telluric data inversion (Spichak and Popova, 2000), (9) magnetic interpretations (Bescoby et al., 2006), (10) signal discrimination (Maiti and Tiwari, 2010a), (11) modeling (Sri Lakshmi and Tiwari, 2009), (12) DC resistivity inversion (Qady and Ushijima, 2001; Lampinen and Vehtari, 2001; Singh et al., 2005, 2006, 2010).…”
Section: S Maiti et al: Inversion of DC Resistivity Data of Koyna R (mentioning)
confidence: 99%
“…where x is a d-dimensional input vector, w denotes the weights, and indices i and j correspond to input and hidden units, respectively (Lampinen & Vehtari, 2001). The arrangement of layers and units in an ANN is called its architecture (Doan & Yuiliong, 2004).…”
Section: Artificial Neural Network (mentioning)
confidence: 99%
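The quoted sentence describes the index convention of the citing paper's network equation without reproducing the equation itself. As a hedged reconstruction, the usual one-hidden-layer MLP with that convention (the tanh activation and the layer superscripts are assumptions here, not notation taken from the citing paper) can be written as

$$
f(\mathbf{x}, \mathbf{w}) \;=\; b^{(2)} + \sum_{j} w^{(2)}_{j}\,\tanh\!\Big( b^{(1)}_{j} + \sum_{i} w^{(1)}_{ji}\, x_{i} \Big),
$$

where \(\mathbf{x} = (x_1, \dots, x_d)\) is the d-dimensional input vector, \(w^{(1)}_{ji}\) connects input unit i to hidden unit j, and \(w^{(2)}_{j}\) weights the output of hidden unit j.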