2020
DOI: 10.48550/arxiv.2008.11713
Preprint
NAS-DIP: Learning Deep Image Prior with Neural Architecture Search

Cited by 3 publications (4 citation statements)
References 59 publications
“…Finally, we would like to emphasize that the default architecture of DIP is employed just for justifying the function of robust statistics. Surely, it is possible to further improve the proposed algorithm by using the neural architecture search reported in [50].…”
Section: Discussion
confidence: 99%
“…These optimized network architectures have been shown to enhance the performance of classic UNNP methods. Instead of using hand-crafted neural networks for UNNP, Chen et al [69] proposed the use of deep reinforcement learning (DRL) to search for the best possible neural network architecture for a specific problem. Their work is inspired by the NAS algorithms [77], [78], [79], which involve the search of optimal neural networks that give the top performance on large datasets.…”
Section: Neural Architecture Search for UNNP
confidence: 99%
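The statement above describes searching over network architectures rather than hand-crafting them. NAS-DIP itself uses deep reinforcement learning for the search; the sketch below substitutes plain random search over a toy architecture space, with a made-up scoring function standing in for "fit the untrained prior with this architecture and measure reconstruction error". All names and the search space are illustrative assumptions, not the paper's method.

```python
import random

random.seed(0)

# Hypothetical, tiny architecture search space (illustrative only).
SEARCH_SPACE = {
    "depth": [3, 4, 5, 6],
    "width": [32, 64, 128],
    "skip": [True, False],
}

def evaluate(arch):
    # Stand-in for "train a DIP with this architecture and return its
    # reconstruction error" -- a smooth made-up score so the loop runs.
    return ((arch["depth"] - 5) ** 2
            + (arch["width"] - 64) ** 2 / 1000
            + (0.0 if arch["skip"] else 0.5))

# Random search: sample candidate architectures, keep the best-scoring one.
best_arch, best_score = None, float("inf")
for _ in range(50):
    arch = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
    score = evaluate(arch)
    if score < best_score:
        best_arch, best_score = arch, score

print(best_arch, best_score)
```

Random search is the simplest NAS baseline; RL-based controllers such as the one in NAS-DIP instead learn which regions of the space to sample.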
“…4: Different UNNP architectures proposed in the literature. Relevant papers: (a) [6] ; (b) [12] ; (c) [7], [25], [49] ; (d) [37], [68], [69] ; (e) [28], [70] ; (f) [67] ; (g) [39]. Fig.…”
Section: Untrained Neural Network Priors: An Introduction
confidence: 99%
“…The output at the first iteration contains randomization due to random code vector z and the optimizer iteratively optimizes neural network's parameters θ for the given input image using backpropagation. It has been shown in the literature [19], [36] that the choice of neural network architectures has a direct impact on the performance, e.g., we can design/handcraft a particular neural network architecture for modeling a specific image [21]. This serves as a solution space when modeling images using untrained neural networks priors.…”
Section: Untrained and Pretrained CDIPs
confidence: 99%
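The fitting loop described in the statement above (a fixed random code vector z, with network parameters θ optimized by backpropagation to reproduce a given image) can be sketched in miniature. This toy uses a two-layer network with manual gradients in place of the CNN and autograd a real DIP would use; sizes, learning rate, and the target vector are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_z, n_h, n_x = 8, 16, 4
z = rng.normal(size=n_z)             # fixed random code vector z
x = rng.normal(size=n_x)             # target signal (stand-in for an image)

# Parameters theta of a tiny two-layer network f_theta(z) = W2 relu(W1 z).
W1 = rng.normal(size=(n_h, n_z)) * 0.1
W2 = rng.normal(size=(n_x, n_h)) * 0.1

lr = 0.05
for step in range(2000):
    h_pre = W1 @ z
    h = np.maximum(h_pre, 0.0)       # ReLU activation
    out = W2 @ h                     # current reconstruction f_theta(z)
    err = out - x                    # residual driving the update
    # Manual backpropagation of the squared-error loss through both layers.
    gW2 = np.outer(err, h)
    gh = W2.T @ err
    gW1 = np.outer(gh * (h_pre > 0), z)
    W2 -= lr * gW2
    W1 -= lr * gW1

loss = float(np.mean((W2 @ np.maximum(W1 @ z, 0.0) - x) ** 2))
print(f"final MSE: {loss:.6f}")
```

At the first step the output is essentially random (it depends only on the random z and the random initial θ); repeated gradient updates then pull f_θ(z) toward the target, mirroring the behavior the quoted passage describes.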