Conference Record of the Thirty-Eighth Asilomar Conference on Signals, Systems and Computers, 2004.
DOI: 10.1109/acssc.2004.1399380

Distributed camera network localization

Abstract: Localization, estimating the positions and orientations of a set of cameras, is a critical first step in camera-based sensor network applications such as geometric estimation, scene reconstruction, and motion tracking. We propose a new distributed localization algorithm for networks of cameras with sparse overlapping view structure that is energy efficient and copes well with networking dynamics. The distributed nature of the localization computations can result in order-of-magnitude savings in communication energy…

Cited by 40 publications (26 citation statements); references 10 publications.

“…Mantzel et al extract feature points by analyzing tracked motion, and correlate the features across views using time synchronization [1]. Mantzel et al compensate for inaccurate correlations by determining a subset that produces the essential matrix estimate with the least error according to the epipolar constraint.…”
Section: Related Work (mentioning)
confidence: 99%
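
As a concrete reading of the subset-selection step described in this citation, the sketch below repeatedly fits an essential matrix to random eight-point subsets of the candidate correspondences and keeps the estimate with the least total epipolar error. This is an assumed, simplified illustration: the function names, the algebraic error measure, the trial count, and the use of normalized (calibrated) coordinates are assumptions, not the authors' implementation.

```python
import numpy as np

def eight_point_essential(x1, x2):
    """Linear 8-point estimate of the essential matrix from N >= 8
    correspondences in normalized (calibrated) coordinates, rows = (u, v)."""
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0],            x1[:, 1],            np.ones(len(x1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Enforce the essential-matrix structure: singular values (s, s, 0).
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

def epipolar_error(E, x1, x2):
    """Algebraic epipolar residual |x2^T E x1| for each correspondence."""
    h1 = np.column_stack([x1, np.ones(len(x1))])
    h2 = np.column_stack([x2, np.ones(len(x2))])
    return np.abs(np.einsum('ij,jk,ik->i', h2, E, h1))

def best_subset_essential(x1, x2, trials=200, seed=0):
    """Fit E on random 8-point subsets; keep the estimate whose total
    epipolar error over all candidate correspondences is smallest."""
    rng = np.random.default_rng(seed)
    best_E, best_err = None, np.inf
    for _ in range(trials):
        idx = rng.choice(len(x1), size=8, replace=False)
        E = eight_point_essential(x1[idx], x2[idx])
        err = epipolar_error(E, x1, x2).sum()
        if err < best_err:
            best_E, best_err = E, err
    return best_E, best_err
```
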
“…The most recent solutions opportunistically search for robustly identifiable world features and correlate them between pairs of cameras with view overlaps [1], [2], [3]. Correlated features are used to estimate either the essential or the fundamental matrix for a camera pair with overlapping views; decomposing this matrix provides the pair's relative position and orientation, which is the data needed for network localization [4], [5].…”
Section: Introduction (mentioning)
confidence: 99%
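
For reference, the decomposition step this citation alludes to, recovering a camera pair's relative rotation and scale-free translation from an estimated essential matrix, can be sketched as follows. This is the standard textbook construction with assumed names, not code from the cited works; the cheirality test that picks the physically valid candidate is only noted in a comment.

```python
import numpy as np

def decompose_essential(E):
    """Return the four candidate (R, t) relative poses encoded by E.
    The valid pose is the candidate that places triangulated points in
    front of both cameras (cheirality test), omitted here for brevity."""
    U, _, Vt = np.linalg.svd(E)
    # Keep proper rotations (determinant +1); E is only defined up to sign.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]                      # translation direction, scale unknown
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```
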
“…We are not aware of any previous distributed version of GPCA [10]. Triangulation of 3-D points in a camera network has been studied in the context of camera localization [4,8]. In those works, however, the triangulation is performed independently at each camera, while our approach uses all the images simultaneously.…”
Section: Introduction (mentioning)
confidence: 99%
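
To make the contrast concrete, a simultaneous linear (DLT) triangulation that uses all views of a point at once can be sketched as below, assuming known 3x4 projection matrices. This is a generic construction, not the specific scheme of [4], [8], or the citing paper.

```python
import numpy as np

def triangulate_dlt(projections, pixels):
    """projections: list of 3x4 camera matrices P_i; pixels: matching (u, v)
    observations.  Stacks u_i*P_i[2] - P_i[0] and v_i*P_i[2] - P_i[1] for
    every view and solves for the homogeneous point in least squares."""
    rows = []
    for P, (u, v) in zip(projections, pixels):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    X = Vt[-1]
    return X[:3] / X[3]              # dehomogenize to a 3-D point
```
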
“…Cameras with overlapping views can observe a set of feature points and deduce their relative positions. However, the obtained coordinates are known only up to a scaling factor [7] that needs to be determined via other means. In simultaneous localization and tracking, a moving object is observed and tracked instead of a set of feature points [8].…”
Section: Ranging (mentioning)
confidence: 99%
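
The scale ambiguity mentioned here can be checked numerically: scaling the 3-D points and the inter-camera translation by the same factor leaves every image projection unchanged, so image data alone cannot fix the global scale. The toy example below assumes identity intrinsics and is purely illustrative.

```python
import numpy as np

def project(R, t, X):
    """Pinhole projection of world point X into a camera with pose (R, t),
    identity intrinsics assumed for simplicity."""
    x = R @ X + t
    return x[:2] / x[2]

R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])          # unit baseline
X = np.array([0.5, 0.2, 4.0])          # a scene point

for k in (1.0, 3.0):                    # k = unknown global scale
    print(k, project(R, k * t, k * X))  # identical image coordinates
```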