2021
DOI: 10.48550/arxiv.2111.08567
Preprint

Joint Learning of Visual-Audio Saliency Prediction and Sound Source Localization on Multi-face Videos

Abstract: Visual and audio events occur simultaneously, and both attract attention. However, most existing saliency prediction works ignore the influence of audio and consider only the visual modality. In this paper, we propose a multitask learning method for visual-audio saliency prediction and sound source localization on multi-face videos by leveraging visual, audio, and face information. Specifically, we first introduce a large-scale database of multi-face videos in the visual-audio condition (MVVA), containing eye-tracking data…
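
The abstract outlines a multitask setup in which saliency prediction and sound source localization are learned jointly from shared audio-visual features. As a rough, non-authoritative sketch of that general idea only (not the paper's actual architecture or loss), the snippet below shows a shared encoder with two prediction heads and a weighted joint objective; the module names, the binary cross-entropy losses, and the weighting factor lambda_ssl are all assumptions for illustration.

# Hedged sketch only: a generic two-head multitask model in the spirit of
# joint saliency prediction and sound source localization. All names,
# losses, and the weight `lambda_ssl` are illustrative assumptions.
import torch
import torch.nn as nn

class JointSaliencyLocalizationNet(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Assumed shared encoder over video frames (placeholder conv stack).
        self.shared_encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Task head 1: per-pixel saliency map (logits).
        self.saliency_head = nn.Conv2d(feat_dim, 1, kernel_size=1)
        # Task head 2: per-pixel sound-source localization map (logits).
        self.localization_head = nn.Conv2d(feat_dim, 1, kernel_size=1)

    def forward(self, frames):
        feats = self.shared_encoder(frames)
        return self.saliency_head(feats), self.localization_head(feats)

def multitask_loss(sal_pred, sal_gt, loc_pred, loc_gt, lambda_ssl=0.5):
    # Joint objective: saliency loss plus a weighted localization term.
    sal_loss = nn.functional.binary_cross_entropy_with_logits(sal_pred, sal_gt)
    loc_loss = nn.functional.binary_cross_entropy_with_logits(loc_pred, loc_gt)
    return sal_loss + lambda_ssl * loc_loss

# Usage with dummy tensors (shapes chosen arbitrarily for illustration):
model = JointSaliencyLocalizationNet()
frames = torch.randn(2, 3, 64, 64)      # dummy RGB frames
sal_gt = torch.rand(2, 1, 64, 64)       # dummy fixation map
loc_gt = torch.rand(2, 1, 64, 64)       # dummy sound-source map
sal_pred, loc_pred = model(frames)
loss = multitask_loss(sal_pred, sal_gt, loc_pred, loc_gt)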

Cited by 0 publications · References: 58 publications
