2021
DOI: 10.3390/e23070856

An Information-Theoretic Perspective on Proper Quaternion Variational Autoencoders

Abstract: Variational autoencoders are deep generative models that have recently received a great deal of attention due to their ability to model the latent distribution of any kind of input such as images and audio signals, among others. A novel variational autoencoder in the quaternion domain H, namely the QVAE, has been recently proposed, leveraging the augmented second-order statistics of H-proper signals. In this paper, we analyze the QVAE under an information-theoretic perspective, studying the ability of the H-proper…

Cited by 13 publications (5 citation statements) | References: 53 publications
“…First, since the quaternion weight matrix is composed of four sub-matrices W_c, with c ∈ {0, 1, 2, 3}, each containing 1/16 of the parameters of the complete matrix W, and these sub-matrices are reused to build the final weight matrix W according to (4), QNNs save 75% of the free parameters with respect to real-valued counterparts. Second, due to this sharing of weight sub-matrices, each parameter is multiplied by each dimension of the input (e.g., by each channel of RGB images, or of multichannel signals), thus capturing complex relationships among input dimensions and preserving their correlations [35], [36]. This allows QNNs to achieve comparable results when processing multidimensional data despite the lower number of free parameters.…”
Section: A. Quaternion Algebra (mentioning)
confidence: 99%
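
As a rough illustration of the block structure described in this statement, here is a minimal NumPy sketch (our own, not code from the cited paper; the sign layout follows one common Hamilton-product convention, and the paper's equation (4) may use an equivalent variant):

import numpy as np

def quaternion_weight(W0, W1, W2, W3):
    # Assemble the real block matrix of a quaternion linear layer from its
    # four (m, n) sub-matrices, following one common Hamilton-product layout.
    # Only the 4*m*n entries of W0..W3 are free parameters, versus 16*m*n for
    # an unconstrained real matrix of the same (4m, 4n) shape: a 75% saving.
    return np.block([
        [W0, -W1, -W2, -W3],
        [W1,  W0, -W3,  W2],
        [W2,  W3,  W0, -W1],
        [W3, -W2,  W1,  W0],
    ])

rng = np.random.default_rng(0)
m, n = 3, 2
W0, W1, W2, W3 = (rng.standard_normal((m, n)) for _ in range(4))
W = quaternion_weight(W0, W1, W2, W3)
print(W.shape)                 # (12, 8)
print(1 - 4 * m * n / W.size)  # 0.75 -> fraction of free parameters saved

Because each W_c appears in every block row, each free parameter touches every four-dimensional slice of the input, which is the weight-sharing mechanism the statement credits with preserving cross-channel correlations.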
“…These models exploit hypercomplex algebra properties, including the Hamilton product, to painstakingly design interactions among the imaginary units, thus involving 1/4 or 1/8 of the free parameters with respect to real-valued models. Furthermore, thanks to the modeled interactions, hypercomplex networks capture internal latent relationships in multidimensional inputs and preserve preexisting correlations among input dimensions [25], [26], [27], [28], [29]. Therefore, the quaternion domain is particularly appropriate for processing 3D or 4D data, such as color images or (up to) four-channel signals [30], while the octonion domain is suitable for 8D inputs.…”
Section: Introduction (mentioning)
confidence: 99%
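
The 1/4 and 1/8 ratios quoted above follow directly from counting sub-matrices; a small sanity check, with illustrative layer sizes of our own choosing:

def free_params(real_in, real_out, algebra_dim):
    # Free parameters of a hypercomplex dense layer whose weights live in an
    # algebra of dimension algebra_dim (1 = real, 4 = quaternion, 8 = octonion),
    # assuming real_in and real_out are multiples of algebra_dim.
    return algebra_dim * (real_in // algebra_dim) * (real_out // algebra_dim)

real = free_params(256, 256, 1)  # 65,536
quat = free_params(256, 256, 4)  # 16,384
octo = free_params(256, 256, 8)  #  8,192
print(quat / real, octo / real)  # 0.25 0.125 -> the 1/4 and 1/8 ratios above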
“…A quaternion variational autoencoder (QVAE) in the quaternion domain H, leveraging the augmented second-order statistics of H-proper signals, was analyzed in [63]. Augmented quaternions were also used for remaining useful life (RUL) estimation of rolling bearings [64] and for degradation prognostics of rolling bearings [65].…”
Section: Introduction (mentioning)
confidence: 99%
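
For context on the H-properness assumption mentioned above: a quaternion signal is H-proper when its three complementary covariances vanish. Below is a minimal NumPy sketch of that check, using the standard definitions and our own helper names (qmul, conj, involution); it is not code from the paper:

import numpy as np

def qmul(p, q):
    # Hamilton product of quaternion arrays stored as (..., 4) = [a, b, c, d].
    a1, b1, c1, d1 = (p[..., i] for i in range(4))
    a2, b2, c2, d2 = (q[..., i] for i in range(4))
    return np.stack([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ], axis=-1)

def conj(q):
    # Quaternion conjugate: negate the three imaginary parts.
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def involution(q, eta):
    # q^eta = -eta q eta for eta in {i, j, k}: keeps the eta axis, flips the others.
    signs = {'i': (1, 1, -1, -1), 'j': (1, -1, 1, -1), 'k': (1, -1, -1, 1)}
    return q * np.asarray(signs[eta], dtype=q.dtype)

rng = np.random.default_rng(0)
q = rng.standard_normal((100_000, 4))  # i.i.d. components -> H-proper by construction

for eta in 'ijk':
    # Complementary covariance E[q (q^eta)^*]; all three vanish for H-proper signals.
    c = qmul(q, conj(involution(q, eta))).mean(axis=0)
    print(eta, np.round(c, 2))  # each prints a near-zero quaternion

It is exactly these vanishing complementary covariances that let the QVAE's augmented second-order description collapse to the ordinary covariance, which is the property the paper studies from an information-theoretic perspective.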