2018
DOI: 10.1109/tip.2017.2774045

End-to-End Blind Image Quality Assessment Using Deep Neural Networks

Abstract: We propose a multi-task end-to-end optimized deep neural network (MEON) for blind image quality assessment (BIQA). MEON consists of two sub-networks, a distortion identification network and a quality prediction network, sharing the early layers. Unlike traditional methods used for training multi-task networks, our training process is performed in two steps. In the first step, we train a distortion type identification sub-network, for which large-scale training samples are readily available. In the second step, s…
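As a hedged illustration of the multi-task layout the abstract describes, the sketch below shows a shared convolutional trunk feeding a distortion-identification head and a quality-prediction head, assuming a PyTorch implementation. The layer widths, depths, and class count are hypothetical placeholders, not the published MEON architecture.

```python
import torch
import torch.nn as nn

class MEONSketch(nn.Module):
    """Illustrative multi-task BIQA network: two heads share the early layers.
    Layer sizes here are assumptions, not the authors' exact design."""
    def __init__(self, num_distortions=4):
        super().__init__()
        # Shared early layers (feature-extraction trunk)
        self.shared = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Sub-network 1: distortion type identification (trained in step one)
        self.distortion_head = nn.Linear(16, num_distortions)
        # Sub-network 2: quality prediction (trained in step two)
        self.quality_head = nn.Linear(16, 1)

    def forward(self, x):
        feats = self.shared(x)
        return self.distortion_head(feats), self.quality_head(feats).squeeze(-1)

# Step 1: pre-train the trunk and distortion head with cross-entropy on
# synthetically distorted images, for which labels are cheap to generate.
# Step 2: fine-tune the whole network for quality prediction on rated images.
model = MEONSketch()
distortion_logits, quality_score = model(torch.randn(2, 3, 224, 224))
```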

Cited by 451 publications (279 citation statements)
References: 35 publications
“…To evaluate the performance of the proposed B-FEN model, we conduct experiments on our IVIPC-DQA database. Due to the absence of a specific quality metric for de-rained images, we compare the proposed B-FEN model with our previous B-GFN [16] and some representative general-purpose image quality assessment models, which include 10 opinion-aware (OA) metrics (i.e., BIQI [18], BLIINDS-II [41], BRISQUE [22], DIIVINE [21], M3 [42], NFERM [43], TCLT [19], MEON [31], DB-CNN [32], and WaDIQaM [33]) and 4 opinion-unaware (OU) metrics (i.e., NIQE [44], ILNIQE [45], QAC [46], and LPSI [47]). Meanwhile, two popular unidirectional feature embedding networks, i.e., DenseNet-161 [37] and ResNet-152 [48], are also involved in our comparison and are categorized as OA metrics in the following section.…”
Section: Methods
confidence: 99%
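Comparisons like the one in the excerpt above are typically scored with rank and linear correlations between each model's predictions and the subjective ratings. The sketch below is a generic illustration (not taken from the cited paper) using placeholder data:

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

# Placeholder data: subjective ratings (MOS) and one model's predicted scores.
mos = np.array([3.1, 4.5, 2.2, 3.8, 1.9])
predicted = np.array([3.0, 4.2, 2.5, 3.6, 2.1])

srocc, _ = spearmanr(mos, predicted)   # rank correlation (prediction monotonicity)
plcc, _ = pearsonr(mos, predicted)     # linear correlation (prediction accuracy)
print(f"SROCC={srocc:.3f}  PLCC={plcc:.3f}")
```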
“…After building the IVIPC-DQA database, we further develop an efficient objective model to predict human perception of de-rained images. Recently, many deep-learning-based NR-IQA models [31]-[33] have explored various efficient network structures for evaluating uniform distortions, achieving state-of-the-art quality prediction accuracy via a common unidirectional feature embedding (UFE) architecture as shown in Fig. 9 (a).…”
Section: Objective Model of DQA
confidence: 99%
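The "unidirectional feature embedding" mentioned in that excerpt amounts, roughly, to a single forward pass from image to scalar score. A minimal sketch under that reading, with hypothetical layer sizes, could look like:

```python
import torch
import torch.nn as nn

# Assumed UFE-style regressor: image -> conv features -> pooled embedding -> score.
ufe = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 1),  # single scalar quality score
)
score = ufe(torch.randn(1, 3, 224, 224))
```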
“…Blind IQA (BIQA) is a new trend in NR-IQA research, and it does not need to know the distortion types. Most existing BIQA methods are opinion-aware [10][11][12][13], which train a regression model from numerous training images (source and distorted images) with the corresponding human subjective scores. Specifically, features are first obtained from the training images, and then both the feature vectors and the corresponding human subjective scores are employed to train a regression model.…”
Section: Introduction
confidence: 99%
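The opinion-aware pipeline described in that excerpt can be sketched as feature extraction followed by a learned regressor. A minimal, hedged illustration is shown below, assuming handcrafted features have already been extracted and using scikit-learn's SVR as the regressor; the feature dimensionality and data are placeholders.

```python
import numpy as np
from sklearn.svm import SVR

# Placeholder data: each row is a feature vector extracted from one training
# image, and y_train holds the corresponding human subjective scores (e.g., MOS).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 36))   # e.g., 36-dim NSS-style features (assumed)
y_train = rng.uniform(1.0, 5.0, size=100)

# Train the regression model mapping features -> subjective score.
regressor = SVR(kernel="rbf", C=1.0, epsilon=0.1)
regressor.fit(X_train, y_train)

# Blind prediction for a new image's feature vector.
x_test = rng.normal(size=(1, 36))
predicted_score = regressor.predict(x_test)[0]
print(f"Predicted quality score: {predicted_score:.2f}")
```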