Proceedings of the 30th ACM International Conference on Multimedia 2022
DOI: 10.1145/3503161.3548337

Adaptive Hypergraph Convolutional Network for No-Reference 360-degree Image Quality Assessment

Cited by 19 publications (6 citation statements)
References 23 publications
“…HGNN extended the spectral method of graph convolutional neural networks [32] to hypergraph convolution. As hypergraph theory has advanced, further hypergraph neural networks have been proposed, such as hypergraph convolutional neural networks [33], hypergraph recurrent neural networks [34], and hypergraph generative adversarial networks [35], which have demonstrated effectiveness in various fields, including recommendation engines [36,37], computer vision [38,39], and bioinformatics [40,41]. HyperGCN-WSS [42] was developed to overcome the obstacle of requiring densely annotated images for semantic segmentation.…”
Section: Hypergraph Learning
confidence: 99%
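The statement above refers to HGNN's spectral hypergraph convolution, which generalizes graph convolution by propagating node features through hyperedges. Below is a minimal NumPy sketch of that propagation rule, X' = σ(Dv^(-1/2) H W De^(-1) Hᵀ Dv^(-1/2) X Θ); the unit hyperedge weights and ReLU nonlinearity are illustrative defaults, not the configuration of any cited model, and in the cited works Θ is a trainable layer inside a larger network.

```python
import numpy as np

def hypergraph_conv(X, H, Theta, edge_w=None):
    """One HGNN-style spectral hypergraph convolution step (illustrative sketch).

    X      : (n_nodes, in_dim)  node feature matrix
    H      : (n_nodes, n_edges) binary incidence matrix, H[i, e] = 1 if node i is in hyperedge e
    Theta  : (in_dim, out_dim)  projection matrix (learned in a real model)
    edge_w : (n_edges,)         hyperedge weights; unit weights assumed if omitted
    """
    n_nodes, n_edges = H.shape
    w = np.ones(n_edges) if edge_w is None else np.asarray(edge_w, dtype=float)

    Dv = H @ w                                   # vertex degrees (weighted)
    De = H.sum(axis=0)                           # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(Dv, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(De, 1e-12))
    W = np.diag(w)

    # Propagation rule: X' = sigma(Dv^-1/2 H W De^-1 H^T Dv^-1/2 X Theta)
    A = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(A @ X @ Theta, 0.0)        # ReLU chosen for illustration
```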
“…This makes the model significantly complex. Building on the VGCN architecture, Fu et al. [29] proposed a similar solution that models interactions among viewports using a hyper-GNN [30].…”
Section: Multichannel Models
confidence: 99%
“…This makes the model significantly complex. Inspired by the VGCN architecture, Fu et al. [35] proposed a similar architecture in which the interaction among viewports is modeled using a hyper-GNN [36]. Miaomiao et al. [26] integrated saliency prediction into the design of a CNN model, combining SP-NET [37] for saliency feature extraction and ResNet-50 [30] for visual feature extraction.…”
Section: Related Work
confidence: 99%
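Both statements describe modeling viewport interactions with a hypergraph neural network. The sketch below shows one plausible way to obtain the required incidence matrix, by grouping each viewport with its k nearest neighbours in feature space; the k-NN rule and the helper name knn_hyperedges are assumptions for illustration, and the cited models may construct hyperedges differently (for example, from viewport positions on the sphere).

```python
import numpy as np

def knn_hyperedges(feats, k=3):
    """Build a binary incidence matrix by grouping each viewport with its k
    nearest neighbours in feature space (hypothetical construction for
    illustration; the cited models may define hyperedges differently).

    feats : (n_viewports, dim) one feature vector per extracted viewport
    Returns H of shape (n_viewports, n_viewports), where H[i, e] = 1 if
    viewport i belongs to the hyperedge centred on viewport e.
    """
    n = feats.shape[0]
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)  # squared pairwise distances
    H = np.zeros((n, n))
    for e in range(n):
        members = np.argsort(d2[e])[: k + 1]     # centre viewport plus its k neighbours
        H[members, e] = 1.0
    return H
```

The resulting H can be passed directly to a hypergraph convolution such as the sketch shown earlier.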
“…The selected models include PSNR, SSIM, MS-SSIM [62], FSIM [63], BRISQUE [64], BMPRI [65], DB-CNN [34], and DipIQ [66], representing 2D-IQA models. S-PSNR [6], WS-PSNR [6], SSP-BOIQA [11], Yun et al. [12], MC360IQA [20], Zhou et al. [23], VGCN [25], AHGCN [35], and S³DAVS [16] represent the 360-IQA models. MC360IQA, Zhou et al., VGCN, and AHGCN are all deep learning-based solutions using the multichannel paradigm with a varying number of channels, from six to twenty, making them highly complex models.…”
Section: Performance Comparison With SOTA Models
confidence: 99%
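Of the listed baselines, WS-PSNR is one of the simplest 360-specific metrics: it reuses plain PSNR but weights each pixel by the cosine of its latitude, so that the over-sampled polar rows of the equirectangular projection contribute less to the error. A minimal single-channel sketch, assuming 8-bit grayscale inputs of equal size, is given below.

```python
import numpy as np

def ws_psnr(ref, dist, max_val=255.0):
    """Weighted-to-spherically-uniform PSNR for single-channel equirectangular
    images (minimal sketch; assumes 8-bit grayscale inputs of equal size).

    Each row is weighted by the cosine of its latitude, so the heavily
    over-sampled polar regions of the projection contribute less to the error.
    """
    h, w = ref.shape
    rows = np.arange(h)
    weights = np.cos((rows + 0.5 - h / 2.0) * np.pi / h)   # per-row latitude weight
    W = np.repeat(weights[:, None], w, axis=1)
    err = ref.astype(np.float64) - dist.astype(np.float64)
    wmse = (W * err ** 2).sum() / W.sum()
    return 10.0 * np.log10(max_val ** 2 / (wmse + 1e-12))  # eps guards identical inputs
```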