Advanced Mapping of Environmental Data 2008
DOI: 10.1002/9780470611463.ch4

Spatial Data Analysis and Mapping Using Machine Learning Algorithms

Abstract: The Australian machine-learning workflows apply fusion, clustering, and estimation operations to hydrogeophysical data to derive hydrostratigraphic units (HSUs). Data fusion is performed by training a self-organizing map (SOM) with these data. The number and location of HSUs are determined by applying the Davies-Bouldin criterion to K-means clustering of the SOM nodes. Estimation is handled by iterative least-squares minimization of the SOM quantization and topographic errors. Two workflows provide 3D character…
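As a minimal sketch of the fusion-clustering-selection chain the abstract describes, the snippet below trains a SOM, clusters its nodes with K-means, and picks the cluster count by minimizing the Davies-Bouldin index. It assumes the third-party MiniSom package and scikit-learn; the input array X is a placeholder, not the study's hydrogeophysical data, and the final error printout is only a diagnostic analogue of the estimation step, not the iterative least-squares minimization itself.

```python
import numpy as np
from minisom import MiniSom
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

# Placeholder feature matrix: one row per sample, one column per
# hydrogeophysical attribute (illustrative only).
X = np.random.default_rng(0).random((500, 4))

# --- Fusion: train a self-organizing map on the stacked data ---
som = MiniSom(10, 10, X.shape[1], sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(X, num_iteration=5000)

# SOM codebook vectors (nodes), flattened to (n_nodes, n_features)
nodes = som.get_weights().reshape(-1, X.shape[1])

# --- Clustering: choose the number of HSUs by minimizing the
# Davies-Bouldin index over K-means clusterings of the SOM nodes ---
best_k, best_db = None, np.inf
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(nodes)
    db = davies_bouldin_score(nodes, labels)
    if db < best_db:
        best_k, best_db = k, db
print(f"Selected {best_k} HSUs (Davies-Bouldin index = {best_db:.3f})")

# --- Diagnostics related to the estimation step: the SOM's
# quantization and topographic errors on the training data ---
print("Quantization error:", som.quantization_error(X))
print("Topographic error:", som.topographic_error(X))
```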

Cited by 5 publications (7 citation statements), published 2013–2023 | References 23 publications

“…In this study, six different training functions, namely trainlm (Levenberg-Marquardt) (Moré, 1978), traincgf (Fletcher-Powell Conjugate Gradient) (Scales, 1985), traingd (Gradient Descent), traingdx (Gradient Descent with momentum and adaptive learning rate backpropagation) (Beale, 1972), trainrp (Resilient Backpropagation) (Riedmiller and Braun, 1993), and trainscg (Scaled Conjugate Gradient) (Møller, 1993), were used (Ratle et al., 2008). …”
Section: Multi-layer Perceptron (MLP)
confidence: 99%
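The functions named in this statement are training algorithms from MATLAB's Neural Network Toolbox. As a rough Python analogue of comparing training algorithms on the same network, the sketch below uses scikit-learn's MLPRegressor, whose three solvers are stand-ins for, not equivalents of, the six MATLAB routines; the data are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Placeholder regression data standing in for the study's inputs.
rng = np.random.default_rng(0)
X = rng.random((300, 3))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Compare the solvers scikit-learn exposes; the MATLAB study swept
# trainlm, traincgf, traingd, traingdx, trainrp, and trainscg instead.
for solver in ("lbfgs", "sgd", "adam"):
    mlp = MLPRegressor(hidden_layer_sizes=(10,), solver=solver,
                       max_iter=2000, random_state=0)
    mlp.fit(X_tr, y_tr)
    print(f"{solver}: R^2 = {mlp.score(X_te, y_te):.3f}")
```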
“…where w_i are the weights corresponding to each neuron connection and w_0 is an additive bias corresponding to the entire hidden layer (Figure 1); s is a sigmoid activation function, which represents the nonlinear element in the MLP, and v_i is an activation function steepness parameter (Ratle et al., 2008). The log-sigmoid and tan-sigmoid activation functions are given in equation 4.…”
Section: Multi-layer Perceptron (MLP)
confidence: 99%
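As a minimal numerical sketch of the quoted neuron model, assuming the common form s(v · (w·x + w_0)) with steepness parameter v (the statement's equation 4 is not reproduced in this excerpt, so the exact form is an assumption):

```python
import numpy as np

def log_sigmoid(a):
    """Log-sigmoid activation: 1 / (1 + exp(-a))."""
    return 1.0 / (1.0 + np.exp(-a))

def tan_sigmoid(a):
    """Tan-sigmoid activation: tanh(a)."""
    return np.tanh(a)

def neuron(x, w, w0, v=1.0, activation=log_sigmoid):
    """Single MLP neuron: a sigmoid applied to the weighted sum of the
    inputs plus bias w0, with v scaling the sigmoid steepness
    (assumed placement of v)."""
    return activation(v * (np.dot(w, x) + w0))

x = np.array([0.2, -0.5, 1.0])   # inputs
w = np.array([0.4, 0.1, -0.3])   # connection weights w_i
print(neuron(x, w, w0=0.05, v=2.0))                          # log-sigmoid
print(neuron(x, w, w0=0.05, v=2.0, activation=tan_sigmoid))  # tan-sigmoid
```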