2010
DOI: 10.1109/tmm.2009.2036285

Synchronization of Multiple Camera Videos Using Audio-Visual Features

Cited by 40 publications (45 citation statements)
References: 14 publications
“…Authors of [19] proposed a multilayered audiovisual streaming scheme to deliver layered audiovisual data synchronously, which is called ML-AVSS. Authors of [20] proposed an automated synchronization approach based on detecting and matching audio and video features extracted from the recorded content. These works have focused on multimedia stream synchronization within traditional and homogeneous networks.…”
Section: Audio/Visual Synchronization (mentioning)
confidence: 99%
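As a rough illustration of the general idea behind such feature-based alignment (not the cited authors' actual algorithm), the sketch below estimates the time offset between two recordings by cross-correlating their short-time audio energy envelopes. The function names, hop size, and the choice of energy envelope as the audio feature are assumptions made for this example; it assumes mono audio already decoded to NumPy arrays at a common sample rate.

```python
import numpy as np
from scipy.signal import correlate

def energy_envelope(audio, sr, hop=512):
    """Short-time energy envelope of a mono signal; returns (envelope, envelope_rate_hz)."""
    n_frames = len(audio) // hop
    frames = audio[:n_frames * hop].reshape(n_frames, hop)
    return np.sqrt((frames ** 2).mean(axis=1)), sr / hop

def estimate_offset_seconds(audio_a, audio_b, sr):
    """Coarse (frame-level) offset between two recordings of the same scene.
    A positive value means events occur later in audio_a than in audio_b."""
    env_a, env_rate = energy_envelope(audio_a, sr)
    env_b, _ = energy_envelope(audio_b, sr)
    # Normalize so loudness differences between cameras matter less.
    env_a = (env_a - env_a.mean()) / (env_a.std() + 1e-12)
    env_b = (env_b - env_b.mean()) / (env_b.std() + 1e-12)
    corr = correlate(env_a, env_b, mode="full")
    lag_frames = int(np.argmax(corr)) - (len(env_b) - 1)
    return lag_frames / env_rate
```

A real system would refine such a coarse, frame-level estimate, but the sketch shows how matched audio features can place multiple cameras on a common timeline.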
“…For the special case of videos capturing a scene with active sound sources in sufficient quality, audio-based approaches can also be considered for synchronization, as for instance proposed in [8].…”
Section: Related Work (mentioning)
confidence: 99%
“…We address this problem by exploiting a novel projective-invariant descriptor based on the cross ratio to obtain the matched trajectory points between the two input videos. So far, numerous video synchronization methods have been presented in previous works; they are mainly classified into two categories: intensity-based ones [9][10][11][12][13] and feature-based ones [14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29]. The intensity-based methods usually rely on colors, intensities, or intensity gradients to achieve temporal synchronization of overlapping videos.…”
Section: Introduction (mentioning)
confidence: 99%
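The cross ratio referenced in this statement is the standard projective invariant of four collinear points A, B, C, D: (A,B;C,D) = (AC * BD) / (BC * AD), which is unchanged by any homography. The sketch below (not taken from the citing paper) computes it from 2-D points and checks the invariance with an arbitrary, purely illustrative homography H:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio (A,B;C,D) of four collinear 2-D points."""
    pts = np.asarray([a, b, c, d], dtype=float)
    direction = pts[3] - pts[0]
    direction /= np.linalg.norm(direction)
    # Scalar coordinate of each point along the common line.
    ta, tb, tc, td = (pts - pts[0]) @ direction
    return ((tc - ta) * (td - tb)) / ((tc - tb) * (td - ta))

# Arbitrary homography used only to demonstrate invariance.
H = np.array([[1.2, 0.3, 5.0],
              [-0.1, 0.9, 2.0],
              [0.001, 0.002, 1.0]])

def apply_h(p):
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return np.array([x / w, y / w])

pts = [np.array([0.0, 0.0]), np.array([1.0, 1.0]),
       np.array([2.0, 2.0]), np.array([3.0, 3.0])]
print(cross_ratio(*pts))                        # 1.333...
print(cross_ratio(*[apply_h(p) for p in pts]))  # same value, up to rounding
```

Because the value survives the projective distortion between camera views, matching trajectory points by cross ratio does not require knowing the cameras' relative geometry.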