Distributed Video Sensor Networks 2011
DOI: 10.1007/978-0-85729-127-1_25
Collaborative Face Recognition Using a Network of Embedded Cameras

Cited by 8 publications (15 citation statements)
References 7 publications
“…Therefore, without loss of generality, in the following we extensively and systematically study the impact of the estimated CSA on the observed MVC efficiency, and we assume the encoder settings to be fixed as in Table I. (Footnote 5: For an original image represented with -bit luminance depth, and the corresponding encoded and decoded image , the PSNR is defined as… Footnote 6: These values have been obtained for an absolute coding cost of view-0 AVC coding of 48 Kb/s, using a minimum QP equal to 32.) On the Akko&Kayo sequence, we also calculated the low-complexity cross-correlation based CSA estimator between the -th frames of reference view 0 and of the -th sequence view, namely .…”
Section: B. Experimental Results
confidence: 99%
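The PSNR definition referenced in the quoted footnote was lost in extraction, along with its bit-depth symbol. A minimal sketch, assuming the conventional definition for images with a given luminance bit depth, PSNR = 10·log10((2^b − 1)² / MSE); the function name and default bit depth are illustrative assumptions:

```python
import numpy as np

def psnr(original: np.ndarray, decoded: np.ndarray, bit_depth: int = 8) -> float:
    """Peak signal-to-noise ratio between an original and a decoded image,
    using the standard definition for a b-bit luminance depth."""
    diff = original.astype(np.float64) - decoded.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images: distortion-free
    peak_sq = (2 ** bit_depth - 1) ** 2  # squared peak value, e.g. 255^2 for 8-bit
    return 10.0 * np.log10(peak_sq / mse)
```

For an 8-bit image pair differing by exactly one gray level everywhere, the MSE is 1 and the PSNR is 10·log10(255²) ≈ 48.13 dB.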
“…The availability of different views of the same scene enables multi-view oriented processing techniques, such as video scene summarization [3], moving object detection [4], face recognition [5], depth estimation [6], among others. Enhanced application-layer services that rely on these techniques can be envisaged, including multi-person tracking, biometric identification, ambience intelligence, and free-view point video monitoring.…”
confidence: 99%
“…An example of such a system is the combination of a fixed camera and PTZ camera that is used for closeup tracking of humans and subsequent identification. In our approach instead of continuously tracking an individual at close quarters to eventually get a good view that is suitable for recognition, we rely on redundancy offered by multiple camera views to opportunistically acquire a suitable face image for identification [25].…”
Section: Related Work
confidence: 99%
“…When different sensor cameras acquire different views of the same scene, multi-view oriented processing techniques enable tasks such as video scene summarization [3], moving object detection [4], face recognition [5], depth estimation for 3D video rendering [6], surveillance, and social robotics, to name a few.…”
Section: Introduction
confidence: 99%