2017
DOI: 10.1109/tci.2016.2644865

Loss Functions for Image Restoration With Neural Networks

Abstract: Neural networks are becoming central in several areas of computer vision and image processing, and different architectures have been proposed to solve specific problems. The impact of the loss layer of neural networks, however, has not received much attention in the context of image processing: the default and virtually only choice is ℓ2. In this paper, we bring attention to alternative choices for image restoration. In particular, we show the importance of perceptually-motivated losses when the resulting image …
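The abstract's central point, that the loss layer can be swapped independently of the architecture, is easy to illustrate. Below is a minimal, hypothetical PyTorch sketch (the one-layer network and the random tensors are placeholders of our own, not from the paper) showing the default ℓ2 (MSE) objective next to the ℓ1 alternative:

```python
import torch
import torch.nn.functional as F

# Placeholder restoration network and a toy training batch; any
# image-to-image architecture could stand in here.
net = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
noisy = torch.rand(4, 1, 32, 32)   # degraded inputs
clean = torch.rand(4, 1, 32, 32)   # reference images

restored = net(noisy)

# The "default and virtually only choice": pixel-wise l2 (MSE).
loss_l2 = F.mse_loss(restored, clean)

# A drop-in alternative: pixel-wise l1, which the paper finds
# perceptually preferable for several restoration tasks.
loss_l1 = F.l1_loss(restored, clean)

loss_l1.backward()  # only the loss layer changed; the network did not
```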

Cited by 2,009 publications (1,080 citation statements)
References 31 publications
“…For image prediction, common cost functions include the root-mean-square error between the predicted and reference images and measures of similarity, such as the structural similarity index metric [6,7]. One promising approach is replacing the cost function itself with a network whose goal is to make it optimally difficult to distinguish reference images from predicted images, an approach known as the "generative adversarial network" approach [8]. Generative adversarial networks strive to eliminate systematic differences between the predicted and reference images, which is highly desirable in the radiology setting.…”
Section: Training Simple Neural Network Deep Learning Models
confidence: 99%
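As a sketch of the adversarial idea described in that statement (all module and variable names below are hypothetical; real networks would be far deeper), the fixed cost function is replaced by a discriminator trained to tell reference images from predicted ones, while the predictor is trained to fool it:

```python
import torch
import torch.nn.functional as F

# Hypothetical predictor (generator) and discriminator.
G = torch.nn.Conv2d(1, 1, 3, padding=1)
D = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 * 32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

inputs = torch.rand(4, 1, 32, 32)
reference = torch.rand(4, 1, 32, 32)
real, fake = torch.ones(4, 1), torch.zeros(4, 1)

# Discriminator step: learn to separate reference from predicted images.
pred = G(inputs).detach()
loss_d = (F.binary_cross_entropy_with_logits(D(reference), real)
          + F.binary_cross_entropy_with_logits(D(pred), fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: make predictions the discriminator cannot distinguish,
# which pushes systematic differences toward zero.
loss_g = F.binary_cross_entropy_with_logits(D(G(inputs)), real)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```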
“…The best solution to overfitting is collecting more training examples, though other solutions such as regularization and drop-out can also be used [7,9]. Another potential solution is data augmentation. Data augmentation is a method of increasing the amount of training data.…”
Section: Overfitting and Data Augmentation
confidence: 99%
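To make the augmentation idea concrete, here is a hedged sketch assuming the torchvision library (the specific transforms and the dataset path are our illustrative choices, not the citing paper's): each epoch sees a randomly perturbed copy of every image, enlarging the effective training set without new labels.

```python
import torchvision.transforms as T

# Random flips and crops: cheap, label-preserving perturbations that
# multiply the variety of training examples.
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomCrop(size=28, padding=2),
    T.ToTensor(),
])

# Typical usage with an image-folder dataset (path is hypothetical):
# dataset = torchvision.datasets.ImageFolder("data/train", transform=augment)
```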
“…Resolution enhancement and deblurring are ill-posed problems, meaning there exists no unique solution. When CNNs are trained to minimize the L2 loss between the fully sampled images and the down-sampled images, the estimated pixel values approximate the average of all possible solutions, resulting in perceptually blurred images. The problem of the pixel-wise loss function has been addressed in several recent deep learning studies.…”
Section: Introduction
confidence: 99%
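The "average of all possible solutions" effect is easy to verify numerically: over a set of equally plausible values for a pixel, the ℓ2-optimal estimate is their mean, while the ℓ1-optimal estimate is their median. A minimal NumPy check (the toy candidate values are our own):

```python
import numpy as np

# Three equally plausible reconstructions of one pixel, e.g. an edge
# that could sit on either side: two dark candidates and one bright.
candidates = np.array([0.1, 0.2, 0.9])
grid = np.linspace(0, 1, 1001)  # candidate estimates to score

l2_cost = ((grid[:, None] - candidates) ** 2).sum(axis=1)
l1_cost = np.abs(grid[:, None] - candidates).sum(axis=1)

print(grid[l2_cost.argmin()])  # 0.4 = mean   -> washed-out value (blur)
print(grid[l1_cost.argmin()])  # 0.2 = median -> a value actually observed
```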
“…investigated various loss functions and demonstrated that the loss function using multiscale (MS)-structural similarity (SSIM) offered better image quality than L2 loss, presumably due to the fact that MS-SSIM is sensitive to local changes and thus correlates well with human perception. Also, CNNs with L1 loss showed perceptually better image quality than those with L2 loss. Instead of pixel-wise loss, the concept of perceptual loss has been introduced to measure the differences between the high-level features of the generated HR images and the ground truths.…”
Section: Introduction
confidence: 99%
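The MS-SSIM result quoted above refers to the cited paper's mixed loss, which combines MS-SSIM with an ℓ1 term. Below is a hedged sketch using the third-party pytorch-msssim package (its availability is an assumption, and this simplified version omits the Gaussian weighting the paper applies to the ℓ1 term; the mixing weight of 0.84 follows the paper's reported setting):

```python
import torch
from pytorch_msssim import ms_ssim  # third-party package, assumed installed

def mixed_loss(restored, reference, alpha=0.84):
    """Simplified MS-SSIM + l1 mix, in the spirit of the paper's proposal."""
    msssim_term = 1.0 - ms_ssim(restored, reference, data_range=1.0)
    l1_term = torch.mean(torch.abs(restored - reference))
    return alpha * msssim_term + (1 - alpha) * l1_term

# Toy usage: MS-SSIM needs reasonably large images (roughly >= 160 px per
# side for the default 5-scale setting).
a = torch.rand(2, 1, 256, 256)
b = torch.rand(2, 1, 256, 256)
print(mixed_loss(a, b))
```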
“…Most of the previous restoration tasks employed the ℓ2 norm as the loss function [19,20,23,35]; however, this produces images that suffer from a blurring effect. Several studies have suggested that the ℓ1 loss is a better choice when performing image restoration tasks [46,55]; thus, in this work, we also prefer ℓ1 as the loss function to train our network. Given a set of training samples {(X…”
Section: Loss Function
confidence: 99%
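The quotation breaks off just as the objective is introduced; the standard form it presumably leads to (our notation, not necessarily the citing paper's exact equation) is the per-sample ℓ1 objective:

```latex
% Standard l1 training objective over N sample pairs
% (X_i: degraded input, Y_i: reference image, F(.;theta): the network).
\mathcal{L}_{\ell_1}(\theta)
  = \frac{1}{N} \sum_{i=1}^{N} \bigl\lVert F(X_i;\theta) - Y_i \bigr\rVert_1
```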