2020
DOI: 10.1007/978-3-030-67070-2_32
SA-AE for Any-to-Any Relighting

Cited by 8 publications (12 citation statements) · References 24 publications
“…Any-to-any Relighting: We modify our AMIDR-Net by changing the number of input channels and removing the skip connections corresponding to the depth maps, and train it on the VIDIT 2020 dataset. We compare our modified AMIDR-Net with the state of the art in Table 6, where SA-AE [19] is the winner of the AIM 2020 any-to-any relighting track and [9] is an encoder-decoder network proposed by another participant of the same challenge. We also compare our method with an adapted version of [53], which was originally proposed for portrait relighting.…”
Section: Comparison With State-of-the-art Methods (mentioning)
confidence: 99%
“…2) Direct Relighting (DR): In addition to the intrinsic decomposition of the images, we also follow the end-to-end learning approach of the state of the art [19,7,38] to learn a mapping function between the two lighting settings: f(I) = I_direct-relit, where f denotes the mapping function learned by a neural network model.…”
Section: Fusion Strategy (mentioning)
confidence: 99%
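The statement above describes direct relighting as learning a mapping f(I) = I_direct-relit between two lighting settings. The cited methods learn f with deep CNNs trained end to end; as a minimal stand-in sketch, the snippet below fits a per-channel affine color transform by least squares on a synthetic image pair, just to make the idea of "a learned mapping from input image to relit image" concrete. All names and the affine model are illustrative assumptions, not the cited architectures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "scene": source image I under lighting A, target under lighting B.
src = rng.random((32, 32, 3))               # input image I
true_gain = np.array([1.2, 0.9, 0.7])       # per-channel gain of lighting B
true_bias = np.array([0.05, 0.0, 0.1])      # per-channel bias of lighting B
tgt = src * true_gain + true_bias           # "relit" image I_direct-relit

# Fit f(I) = I * gain + bias per channel via closed-form least squares.
X = src.reshape(-1, 3)
Y = tgt.reshape(-1, 3)
gain = np.empty(3)
bias = np.empty(3)
for c in range(3):
    A = np.stack([X[:, c], np.ones(len(X))], axis=1)
    gain[c], bias[c] = np.linalg.lstsq(A, Y[:, c], rcond=None)[0]

relit = src * gain + bias                   # apply the learned mapping
print(np.abs(relit - tgt).max())            # near-zero reconstruction error
```

A trained CNN plays the same role as this affine transform, but can express spatially varying, non-linear lighting changes rather than a single global color shift.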
“…First, the illuminant direction and the temperature are pre-defined [2,6], which is known as one-to-one relighting. Second, the ambient condition is based on a guide image [3], which is known as any-to-any relighting. Both are very similar to other low-level vision tasks such as image dehazing [14], image deraining [13], image smoke removal [19], image desnowing [20], reflection removal [21], and underwater image enhancement [12].…”
Section: Deep Learning Based Image Relighting (mentioning)
confidence: 99%
“…On the other hand, style transfer focuses on texture rendering. To address this issue, many deep learning-based image relighting algorithms [2,3,4,5,6] have recently been proposed, as deep convolutional neural networks (CNNs) have achieved considerable success in many computer vision tasks. These methods train neural networks in an end-to-end manner to directly generate relit images without assuming any physical priors.…”
Section: Introduction (mentioning)
confidence: 99%
“…As shown in Fig. 10, the team presents the novel Self-Attention AutoEncoder (SA-AE) [21] model for generating a relit image from a source image to match the illumination settings of a guide image. In order to reduce the learning difficulty, the team adopts an implicit scene representation [59] learned by the encoder to render the relit images using the decoder.…”
Section: Other Submitted Solutions (mentioning)
confidence: 99%
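The statement above summarizes SA-AE: an autoencoder whose encoder learns an implicit scene representation and whose self-attention layers and decoder render the relit output. As a toy sketch of the self-attention building block only (not the trained SA-AE model; the projection weights here are random assumptions for illustration), the snippet below computes scaled dot-product self-attention over a flattened feature map.

```python
import numpy as np

def self_attention(feat, Wq, Wk, Wv):
    """feat: (N, d) flattened spatial features; Wq/Wk/Wv: (d, d) projections."""
    q, k, v = feat @ Wq, feat @ Wk, feat @ Wv
    scores = q @ k.T / np.sqrt(feat.shape[1])    # (N, N) pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)  # shift for numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over spatial positions
    return attn @ v                              # features re-weighted by attention

rng = np.random.default_rng(0)
d = 8
feat = rng.standard_normal((16, d))              # a 4x4 feature map, flattened
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(feat, Wq, Wk, Wv)
print(out.shape)  # (16, 8)
```

In a relighting autoencoder, letting every spatial position attend to every other helps the decoder propagate lighting cues (e.g. shadow direction) across the whole scene representation rather than acting only locally.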