2022 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn55064.2022.9892119

Hypercomplex Image-to-Image Translation

Cited by 6 publications (6 citation statements)
References 24 publications
“…see for example [127], [129], [97], [82], [132], [20], [122], [74], [32], [116], [55], [24]. Another method, proposed by Gaudet and Maida [19], is to use a residual block:…”
Section: A Vision (mentioning)
Confidence: 99%
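The residual-block construction from [19] is only referenced, not reproduced, in the quoted statement. As a rough illustration of the pattern being discussed, here is a minimal PyTorch sketch of a generic residual block; in the hypercomplex setting of [19] the two convolutions would be quaternion-valued layers rather than the ordinary nn.Conv2d used here, and all names and sizes below are illustrative assumptions rather than the construction from that paper.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Generic residual block: y = x + F(x), with F = conv-norm-act-conv-norm.

    Illustrative sketch only; in a quaternion network the two convolutions
    would be replaced by quaternion (Hamilton-product) convolutions.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity shortcut plus the residual branch.
        return self.act(x + self.body(x))

if __name__ == "__main__":
    block = ResidualBlock(channels=16)
    out = block(torch.randn(1, 16, 32, 32))
    print(out.shape)  # shape preserved: torch.Size([1, 16, 32, 32])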
“…The model is presented in Figure 7c) and 7d). Thereafter, Grassucci et al [132] proposed the quaternion-valued version of the StarGANv2 model [146]. It is composed of the generator, mapping, encoding, and discriminator networks; this model was evaluated on an image to image translation task using the CelebA-HQ dataset [131].…”
Section: A Vision (mentioning)
Confidence: 99%
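The statement above describes the composition of the model in [132]: generator, mapping network, style encoder, and discriminator, following StarGANv2 [146]. The sketch below only illustrates how four such modules are typically wired together for a style-conditioned translation step, using ordinary real-valued placeholder layers; the quaternion parameterization of [132], as well as every module definition, name, and size here, is an assumption for illustration.

import torch
import torch.nn as nn

STYLE_DIM, LATENT_DIM, NUM_DOMAINS = 64, 16, 2  # illustrative sizes

class MappingNetwork(nn.Module):
    """Maps a random latent code to a per-domain style code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, NUM_DOMAINS * STYLE_DIM))
    def forward(self, z, domain):
        styles = self.net(z).view(-1, NUM_DOMAINS, STYLE_DIM)
        return styles[torch.arange(z.size(0)), domain]

class StyleEncoder(nn.Module):
    """Extracts a style code from a reference image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(32, STYLE_DIM))
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Translates an image conditioned on a style code (injected here by simple concatenation)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3 + STYLE_DIM, 3, 3, padding=1)
    def forward(self, x, s):
        s_map = s[:, :, None, None].expand(-1, -1, x.size(2), x.size(3))
        return torch.tanh(self.net(torch.cat([x, s_map], dim=1)))

class Discriminator(nn.Module):
    """Per-domain real/fake scores."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(32, NUM_DOMAINS))
    def forward(self, x, domain):
        return self.net(x)[torch.arange(x.size(0)), domain]

if __name__ == "__main__":
    x = torch.randn(4, 3, 64, 64)            # source images
    z = torch.randn(4, LATENT_DIM)            # random latents
    y = torch.randint(0, NUM_DOMAINS, (4,))   # target domains
    s = MappingNetwork()(z, y)                # latent -> style code
    s_ref = StyleEncoder()(x)                 # alternative: style from a reference image
    fake = Generator()(x, s)                  # translate with the target style
    score = Discriminator()(fake, y)          # domain-specific critic score
    print(fake.shape, score.shape)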
“…Furthermore, since the proposed wavelet-based preprocessing is performed in the quaternion domain, we can easily seize quaternion-valued neural networks (QNNs) for the learning stage to further exploit the capabilities of hypercomplex algebra. Indeed, QNNs have been proven to achieve interesting results in processing natural images [27]- [31]. Due to the four-dimensional nature of quaternions, QNNs properly handle inputs with 4 dimensions/channels, thus restricting the use of QNNs to a few sets of data.…”
Section: Introduction (mentioning)
Confidence: 99%
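To make the point about the four-dimensional nature of quaternions concrete, the following is a minimal sketch of a quaternion convolution built on the Hamilton product: an input with four channels per quaternion is split into (r, i, j, k) components, and each output component is a signed mixture of the four weight kernels. This is a generic illustration of the idea common to quaternion CNNs, not the exact layers used in the works cited above; the class name, initialization, and sizes are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class QuaternionConv2d(nn.Module):
    """Quaternion 2D convolution via the Hamilton product.

    Channels are grouped as quaternion components (r, i, j, k): both
    in_channels and out_channels count quaternions, so the layer consumes
    inputs with 4 * in_channels real-valued channels.
    """
    def __init__(self, in_channels: int, out_channels: int,
                 kernel_size: int = 3, padding: int = 1):
        super().__init__()
        shape = (out_channels, in_channels, kernel_size, kernel_size)
        # One real-valued kernel per quaternion component of the weight.
        self.w_r = nn.Parameter(torch.randn(shape) * 0.05)
        self.w_i = nn.Parameter(torch.randn(shape) * 0.05)
        self.w_j = nn.Parameter(torch.randn(shape) * 0.05)
        self.w_k = nn.Parameter(torch.randn(shape) * 0.05)
        self.padding = padding

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assemble the block-structured Hamilton-product kernel:
        # [ r -i -j -k ]
        # [ i  r -k  j ]
        # [ j  k  r -i ]
        # [ k -j  i  r ]
        r, i, j, k = self.w_r, self.w_i, self.w_j, self.w_k
        weight = torch.cat([
            torch.cat([r, -i, -j, -k], dim=1),
            torch.cat([i,  r, -k,  j], dim=1),
            torch.cat([j,  k,  r, -i], dim=1),
            torch.cat([k, -j,  i,  r], dim=1),
        ], dim=0)
        return F.conv2d(x, weight, padding=self.padding)

if __name__ == "__main__":
    # One RGB image padded with a zero channel -> one quaternion per pixel.
    rgb = torch.randn(1, 3, 32, 32)
    x = torch.cat([torch.zeros(1, 1, 32, 32), rgb], dim=1)  # (r, i, j, k) = (0, R, G, B)
    layer = QuaternionConv2d(in_channels=1, out_channels=8)
    print(layer(x).shape)  # torch.Size([1, 32, 32, 32])

Mapping an RGB image to the three imaginary components with a zero real part, as in the usage example, is one common way to feed 3-channel data into such a layer.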