2020
DOI: 10.21609/jiki.v13i1.761
Fully Convolutional Variational Autoencoder For Feature Extraction Of Fire Detection System

Abstract: This paper proposes a fully convolutional variational autoencoder (VAE) for feature extraction from a large-scale dataset of fire images. The dataset will be used to train the deep learning algorithm to detect fire and smoke. Feature extraction is used to tackle the curse of dimensionality, a common issue in training deep learning models on huge datasets. It aims to reduce the dimensionality of the dataset significantly without losing too much essential information. Variational autoenco…
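The abstract does not include the network details, but the idea of a fully convolutional VAE for feature extraction can be sketched as follows. This is a minimal illustration assuming TensorFlow/Keras, a 64×64×3 input, and a 32-dimensional latent code; all layer widths and sizes are assumptions, not the paper's actual architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG_SHAPE = (64, 64, 3)   # assumed input size; the paper's is not given here
LATENT_DIM = 32           # assumed latent dimensionality

def sampling(args):
    """Reparameterization trick: z = mu + exp(0.5 * log_var) * eps."""
    z_mean, z_log_var = args
    eps = tf.random.normal(shape=tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_log_var) * eps

# Encoder: convolutions only, no Dense layers ("fully convolutional").
enc_in = layers.Input(shape=IMG_SHAPE)
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(enc_in)  # 32x32
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)       # 16x16
x = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(x)      # 8x8
# An 8x8 "valid" convolution collapses the map to 1x1xLATENT_DIM.
z_mean = layers.Flatten()(layers.Conv2D(LATENT_DIM, 8)(x))
z_log_var = layers.Flatten()(layers.Conv2D(LATENT_DIM, 8)(x))
z = layers.Lambda(sampling)([z_mean, z_log_var])
encoder = Model(enc_in, [z_mean, z_log_var, z], name="encoder")

# Decoder: mirrors the encoder with transposed convolutions.
dec_in = layers.Input(shape=(LATENT_DIM,))
y = layers.Reshape((1, 1, LATENT_DIM))(dec_in)
y = layers.Conv2DTranspose(128, 8, activation="relu")(y)                            # 8x8
y = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(y)  # 16x16
y = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(y)  # 32x32
dec_out = layers.Conv2DTranspose(3, 3, strides=2, padding="same",
                                 activation="sigmoid")(y)                           # 64x64
decoder = Model(dec_in, dec_out, name="decoder")

# The training objective combines reconstruction error with a KL term,
# shown here on a dummy batch rather than a full training loop.
x_batch = tf.random.uniform((4,) + IMG_SHAPE)
mu_b, log_var_b, z_b = encoder(x_batch)
x_hat = decoder(z_b)
recon = tf.reduce_mean(tf.square(x_batch - x_hat))
kl = -0.5 * tf.reduce_mean(1 + log_var_b - tf.square(mu_b) - tf.exp(log_var_b))
loss = recon + kl
```

Once trained, `encoder` alone serves as the feature extractor: each image is mapped to its low-dimensional latent code.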

Cited by 7 publications (5 citation statements) · References 14 publications
“…Both PCA and the β-VAE can be used for dimensionality reduction and feature extraction [8,9]. The choice of suitable features, however, requires assessment metrics.…”
Section: Methods
confidence: 99%
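As a rough illustration of the statement above, the sketch below contrasts PCA's linear projection with the β-VAE objective, which is a plain VAE loss whose KL term is scaled by a weight β. The data shapes, component count, and β value are arbitrary stand-ins, not values from the cited works.

```python
import numpy as np
from sklearn.decomposition import PCA

# PCA baseline: linear projection onto the top principal components.
# X is a stand-in matrix of 500 flattened 64x64 grayscale images.
X = np.random.rand(500, 4096).astype("float32")
pca = PCA(n_components=32)
features_pca = pca.fit_transform(X)   # (500, 32) linear features

# The beta-VAE keeps the VAE objective but scales the KL term:
#   L = reconstruction_loss + beta * KL(q(z|x) || N(0, I))
beta = 4.0                                   # assumed weight, beta > 1
mu = np.zeros((500, 32), dtype="float32")    # stand-in encoder means
log_var = np.zeros_like(mu)                  # stand-in encoder log-variances
kl = -0.5 * np.mean(np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=1))
beta_penalty = beta * kl                     # added to the reconstruction loss
```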
“…This technique optimizes the distribution parameters while maintaining the ability to randomly sample from that distribution. The challenge of the VAE is to learn a meaningful and generalizable latent space despite having far fewer units in the encoding than the input (Nugroho et al. 2020). The architecture of the variational autoencoder is visualized in Figure 2.…”
Section: Variational Autoencoder
confidence: 99%
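The "ability to randomly sample" mentioned in this statement rests on the reparameterization trick, which moves the randomness into an auxiliary noise variable so the distribution parameters stay differentiable. A minimal NumPy sketch, with batch and latent sizes chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Draw z ~ N(mu, diag(sigma^2)) as z = mu + sigma * eps, eps ~ N(0, I).
    The randomness lives in eps, so mu and log_var remain differentiable."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Stand-in encoder outputs: a batch of 4 codes in a 16-dim latent space.
mu = np.zeros((4, 16))
log_var = np.zeros((4, 16))
z = reparameterize(mu, log_var)   # fresh samples on every call
```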
“…The model classifier can then be trained and tested on this output. An autoencoder was used to reduce the dimensionality, converting the input into feature vectors of 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 14, 10, and 6 neurons. The autoencoder (AE) model uses ReLU as the activation function at the bottleneck, encoder 1, encoder 2, decoder 1, and decoder 2; the kernel initializer is Random Normal with a standard deviation of 0.01; the bias initializer is Zeros; and linear activation is used on the output layer.…”
Section: Architecture of the Autoencoder
confidence: 99%
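The configuration in this statement maps almost line-for-line onto a Keras model. The sketch below follows the stated choices (ReLU at the bottleneck, both encoders, and both decoders; RandomNormal kernel initializer with stddev 0.01; Zeros bias initializer; linear output); the input width and the hidden-layer widths are assumptions, since the statement lists only the bottleneck sizes that were swept.

```python
import tensorflow as tf
from tensorflow.keras import layers, initializers, Model

INPUT_DIM = 64        # assumed input width; not stated in the excerpt
BOTTLENECK = 6        # one of the bottleneck sizes swept in the statement

kernel_init = initializers.RandomNormal(stddev=0.01)  # as stated
bias_init = initializers.Zeros()                      # as stated

def dense_relu(units):
    """Dense layer with the stated activation and initializers."""
    return layers.Dense(units, activation="relu",
                        kernel_initializer=kernel_init,
                        bias_initializer=bias_init)

inp = layers.Input(shape=(INPUT_DIM,))
h = dense_relu(32)(inp)            # encoder 1 (width assumed)
h = dense_relu(16)(h)              # encoder 2 (width assumed)
code = dense_relu(BOTTLENECK)(h)   # bottleneck, ReLU as stated
h = dense_relu(16)(code)           # decoder 1
h = dense_relu(32)(h)              # decoder 2
out = layers.Dense(INPUT_DIM, activation="linear",
                   kernel_initializer=kernel_init,
                   bias_initializer=bias_init)(h)  # linear output layer

autoencoder = Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
```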
“…[23]. The autoencoder is used to find new representations of the input without losing too much information, so that the input can be reconstructed [24]. Inputs to the autoencoder can be reconstructed effectively with minimal reconstruction error [25].…”
Section: Introduction
confidence: 99%
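One simple way to make "minimal reconstruction error" concrete is to measure the mean squared error between inputs and their reconstructions; the sketch below uses synthetic stand-in data rather than the output of any trained model.

```python
import numpy as np

# Stand-in data: x_hat plays the role of a decoder's reconstruction.
x = np.random.rand(100, 27).astype("float32")
x_hat = x + 0.01 * np.random.randn(100, 27).astype("float32")

mse = np.mean((x - x_hat) ** 2)   # low MSE => little information lost
print(f"reconstruction MSE: {mse:.5f}")
```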