2019
DOI: 10.3390/min9050270

A Multi-Convolutional Autoencoder Approach to Multivariate Geochemical Anomaly Recognition

Abstract: The spatial structural patterns of geochemical backgrounds are often ignored in geochemical anomaly recognition, leading to the ineffective recognition of valuable anomalies in geochemical prospecting. In this contribution, a multi-convolutional autoencoder (MCAE) approach is proposed to deal with this issue, which includes three unique steps: (1) a whitening process is used to minimize the correlations among geochemical elements, avoiding the diluting of effective background information embedded in redundant …

Cited by 37 publications (8 citation statements)
References 59 publications (71 reference statements)
“…In the case of image data composed of horizontal and vertical pixels, the encoder compresses the image, and the feature values of the compressed image appear in the latent space. From these latent feature values, the decoder can reconstruct the image (Chandar et al., 2014; Chen et al., 2019; Lyons et al., 2014; Ma et al., 2016; Wen & Zhang, 2018; Zhang & Ye, 2019). Based on SL, the AE's learning result is an output identical to the input.…”
Section: Operation of CAE
confidence: 99%
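The encode–decode flow described in this citation statement can be sketched with a toy linear autoencoder. All names and weights below are illustrative stand-ins: the weights are random rather than learned, so the reconstruction is only shape-correct; in a trained AE the weights are fitted so that the output approximates the input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" flattened to a vector (an 8x8 pixel patch -> 64 values).
x = rng.normal(size=64)

# Hypothetical encoder/decoder weights. The encoder compresses the
# 64-value input to an 8-value latent code; the tied decoder maps the
# latent code back to pixel space.
W_enc = rng.normal(size=(8, 64)) / np.sqrt(64)  # encoder: 64 -> 8
W_dec = W_enc.T                                 # tied decoder: 8 -> 64

latent = W_enc @ x       # compressed feature values ("the latent")
x_hat = W_dec @ latent   # reconstruction from the latent code

print(latent.shape, x_hat.shape)  # (8,) (64,)
```

With training (and nonlinearities between the layers), `x_hat` would converge toward `x`, which is the "output identical to the input" behavior the quotation describes.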
“…Figure 6 shows the architectures for various types of AE. A typical AE uses the same number of layers and nodes for the encoder and the decoder (Chen et al., 2019; Happel & Murre, 1994; LeCun et al., 2015; Wen & Zhang, 2018). Through an FNN, which compressed the image data in the AE to extract feature values, this study predicted the latent value and used the AE's decoder to reconstruct it as an image.…”
Section: Reconstruction of Autoencoder Schemes
confidence: 99%
“…In order to use the advantages of both autoencoders and CNNs, CAEs are used in this study; they typically use convolution and pooling layers to extract and compress the key features of the input data (encoding), and deconvolution and unpooling layers to reconstruct the original data from the compressed form (decoding) [36]. Figure 2 shows an illustrative example of the structure of a two-dimensional (2D) CAE.…”
Section: Overview of Autoencoders
confidence: 99%
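The convolution-plus-pooling encoding path that this citation statement attributes to a 2D CAE can be sketched numerically. This is a minimal shape-level illustration, not the cited architecture: the filter is random rather than learned, nearest-neighbour upsampling stands in for unpooling, and the deconvolution step that would restore the original 10x10 size is omitted.

```python
import numpy as np

def conv2d_valid(img, kernel):
    # Naive 2-D "valid" cross-correlation, written out for clarity.
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool_2x2(img):
    # 2x2 max pooling: each non-overlapping 2x2 block -> its maximum.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample_2x2(img):
    # Nearest-neighbour upsampling as a stand-in for unpooling.
    return img.repeat(2, axis=0).repeat(2, axis=1)

rng = np.random.default_rng(1)
x = rng.normal(size=(10, 10))  # toy single-channel input "image"
k = rng.normal(size=(3, 3))    # one hypothetical learned filter

encoded = max_pool_2x2(conv2d_valid(x, k))  # 10x10 -> 8x8 -> 4x4 (compressed)
decoded = upsample_2x2(encoded)             # 4x4 -> 8x8 (start of decoding path)

print(encoded.shape, decoded.shape)  # (4, 4) (8, 8)
```

The shrinking shapes on the encoding path (10x10 to 4x4) show the compression the quotation describes; a full CAE decoder would additionally apply learned deconvolution filters to get from 8x8 back to a 10x10 reconstruction.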
“…Applying deep learning algorithms, a subcategory of machine learning algorithms, can improve the accuracy of classification or prediction by replacing manual feature selection [45]. Such techniques have been employed in recognizing geochemical anomalies related to mineralization via deep autoencoder networks [46], deep variational autoencoder networks [45], convolutional autoencoder networks [91], and combinations of deep learning with other anomaly detection methods [54,56].…”
Section: The Outputs Predicted/Modeled Using ML in the Selected Literature and the Inputs Utilized
confidence: 99%