2023
DOI: 10.1016/j.engappai.2022.105814

RFA-Net: Residual feature attention network for fine-grained image inpainting

Cited by 13 publications (6 citation statements)
References 17 publications
“…Vo et al proposed a divide-and-conquer algorithm to address the sub-problems of degraded face reconstruction and classification by effectively applying multiple deep convolutional neural networks [31]. Another advanced texture-aware network named RFA-Net [32] employed a non-pooling residual CNN with three novel modules for finer image inpainting under the supervision of hybrid loss optimization, focusing on the semantic and texture details of the inpainting. Later, a generative adversarial network (GAN) was utilized to reconstruct faces by cooperating with a pre-trained convolutional neural network (CNN) while sustaining identity-variance features [33].…”
Section: Deep-learning-based Methods
Mentioning confidence: 99%
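The statement above attributes to RFA-Net a non-pooling residual CNN trained under hybrid loss supervision. As a rough illustration of those two general ideas only, a minimal PyTorch sketch follows; the block structure, dilation rates, and loss weights are assumptions for illustration, not the paper's actual three modules or loss definition.

```python
# Hedged sketch: a residual block that preserves spatial resolution (no pooling)
# and an illustrative hybrid reconstruction loss that weights the error inside
# the hole more heavily than in the valid region. All hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NonPoolingResidualBlock(nn.Module):
    """Residual block without pooling; the skip connection eases gradient flow."""

    def __init__(self, channels: int, dilation: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.relu(x + self.body(x))


def hybrid_inpainting_loss(pred: torch.Tensor,
                           target: torch.Tensor,
                           mask: torch.Tensor,
                           w_hole: float = 6.0,
                           w_valid: float = 1.0) -> torch.Tensor:
    """Illustrative hybrid loss: L1 error inside the hole (mask == 1) is weighted
    more heavily than outside, so training focuses on the missing texture."""
    hole = (mask * (pred - target)).abs().mean()
    valid = ((1.0 - mask) * (pred - target)).abs().mean()
    return w_hole * hole + w_valid * valid
```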
“…This helps the model better understand the details and structures in the image, resulting in improved restoration quality and accuracy. In addition, the use of efficient channel attention [38] and pixel attention [39] enables the model to selectively focus on important channels and pixels, thus reducing unnecessary computation and the parameter count. The residual structure and skip connections help to avoid gradient explosion and network convergence difficulties.…”
Section: Multi-scale Fusion Attention Module
Mentioning confidence: 99%
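For context on the two mechanisms named in this statement, the sketch below shows an ECA-style channel attention block (global average pooling followed by a 1-D convolution across channels) and a pixel attention block (a 1x1 convolution producing a per-pixel gate). Kernel size, channel width, and class names are illustrative assumptions, not the cited papers' exact configurations.

```python
# Minimal sketch (assumed, not taken from the cited papers' code) of
# efficient channel attention and pixel attention in PyTorch.
import torch
import torch.nn as nn


class EfficientChannelAttention(nn.Module):
    """ECA-style channel attention: GAP -> 1-D conv across channels -> sigmoid gate."""

    def __init__(self, k_size: int = 3):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> per-channel descriptor (B, C, 1, 1)
        y = self.avg_pool(x)
        # Treat channels as a 1-D sequence so the conv captures local cross-channel interaction.
        y = self.conv(y.squeeze(-1).transpose(-1, -2)).transpose(-1, -2).unsqueeze(-1)
        return x * self.sigmoid(y)


class PixelAttention(nn.Module):
    """Pixel attention: 1x1 conv -> sigmoid -> per-pixel gate on the feature map."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.sigmoid(self.conv(x))


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)
    out = PixelAttention(64)(EfficientChannelAttention()(feat))
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

Both blocks reweight existing features rather than adding new spatial operators, which is why they add little computation or parameter overhead.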
“…GANs have attracted increasing attention in image inpainting due to their rapid development and good generative performance [11, 12, 21-24]. The method proposed by Yan et al. [25] can yield outstanding results when filling rectangular or small holes.…”
Section: Related Work
Mentioning confidence: 99%
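As a rough sketch of the GAN-based inpainting setup this statement refers to, the snippet below pairs a toy generator, which completes the masked image, with a patch-level discriminator and runs one adversarial training step. The `Generator` and `Discriminator` definitions and the loss weighting are hypothetical placeholders, not the architecture of any cited work.

```python
# Hedged sketch of adversarial inpainting training: the generator fills the
# masked region, the discriminator scores real vs. completed images.
import torch
import torch.nn as nn


class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: masked RGB image concatenated with the binary mask (4 channels).
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, masked_img, mask):
        return self.net(torch.cat([masked_img, mask], dim=1))


class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, img):
        return self.net(img)  # patch-level real/fake scores


def train_step(gen, disc, opt_g, opt_d, img, mask, bce=nn.BCEWithLogitsLoss()):
    masked = img * (1 - mask)
    fake = gen(masked, mask)
    completed = masked + fake * mask  # paste the prediction into the hole

    # Discriminator update: real images vs. completed (detached) images.
    d_real = disc(img)
    d_fake = disc(completed.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: fool the discriminator while staying close to the ground truth.
    d_fake = disc(completed)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + (completed - img).abs().mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```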