2005
DOI: 10.1007/11008941_35
Applying Active Vision and SLAM to Wearables

Cited by 16 publications (22 citation statements)
References 20 publications
“…It was demonstrated operating within the Extended Kalman Filter (EKF) monoSLAM algorithm of Davison et al [3,4], a method which has already proved a useful vehicle for AR using wearable vision [16,2].…”
Section: Introduction
Mentioning confidence: 99%
“…When an active wearable camera supplies the imagery, the system has some autonomy to explore the world itself. As mentioned in the introduction, in [18] 3D point positions in the map were hand-labelled to allow a remote operator to command an active wearable to fixate on objects of interest while continuing to map. This method can now be automated to command the system to locate and fixate upon particular objects, without intervention of the wearer.…”
Section: Discussion
Mentioning confidence: 99%
“…Key to providing a wearable camera system with a greater degree of autonomy are the abilities both to locate the camera in the environment and to determine what is where around it [18]. In [18] it was demonstrated that once a partial 3D map of the camera's environment is established, locations can be selected for the camera to fixate upon, counteracting movements of the wearer, while continuing both to accumulate scene structure and to determine the camera pose.…”
Section: Introduction
Mentioning confidence: 99%
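The excerpt above describes commanding an active wearable camera to fixate on a selected 3D map point while compensating for the wearer's motion. The geometry behind such a fixation command can be sketched as follows: express the vector from the camera centre to the target in the camera frame, then read off the pan and tilt angles needed to bring the target onto the optical axis. This is a minimal illustrative sketch, not the cited system's implementation; the function name, the +z-forward camera convention, and the pan-then-tilt ordering are assumptions for the example.

```python
import math

def fixation_angles(cam_pos, R, target):
    """Illustrative sketch: pan/tilt (radians) that would point the
    optical axis (assumed +z in the camera frame) at a 3D map point.

    cam_pos : (x, y, z) camera centre in world coordinates
    R       : 3x3 world-to-camera rotation, as nested lists
    target  : (x, y, z) map point in world coordinates
    """
    # Vector from camera to target, still in world coordinates.
    v = [target[i] - cam_pos[i] for i in range(3)]
    # Rotate it into the camera frame: d = R @ v.
    d = [sum(R[r][c] * v[c] for c in range(3)) for r in range(3)]
    pan = math.atan2(d[0], d[2])                      # about the camera y-axis
    tilt = math.atan2(-d[1], math.hypot(d[0], d[2]))  # then about the x-axis
    return pan, tilt

# Camera at the origin, axes aligned with the world, target up-right-ahead:
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
pan, tilt = fixation_angles((0, 0, 0), I3, (1, 0, 1))
print(round(math.degrees(pan)), round(math.degrees(tilt)))  # -> 45 0
```

Re-evaluating these angles each time the SLAM filter updates the camera pose estimate is what lets the platform hold fixation on a mapped point while the wearer moves.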
“…Although the fusion of cameras and inertial sensors has previously been considered for motion tracking [29][23], augmented reality [12], and SLAM (simultaneous localization and mapping) [20], the work presented in this paper bears more similarity to the multiple-target tracking literature. The reason is that the main problems that we tackle are data association and motion correspondence, rather than exact position estimation.…”
Section: Related Work
Mentioning confidence: 99%