2004
DOI: 10.1002/rob.10124

A Flexible Software Architecture for Hybrid Tracking

Abstract: Fusion of vision-based and inertial pose estimation has many high-potential applications in navigation, robotics, and augmented reality. Our research aims at the development of a fully mobile, completely self-contained tracking system that is able to estimate sensor motion from known 3D scene structure. This requires a highly modular and scalable software architecture for algorithm design and testing. As the main contribution of this paper, we discuss the design of our hybrid tracker and emphasize important f…
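The abstract emphasizes a modular, scalable architecture in which vision-based and inertial pose estimation are combined for algorithm design and testing. The sketch below is not the paper's architecture; it only illustrates, with assumed names (SensorModule, VisionModule, InertialModule, HybridTracker), one common way such a modular hybrid tracker can be structured so that sensor modules stay interchangeable and the fusion stage can be swapped independently.

```python
# Illustrative sketch only; class and method names are assumptions, not the paper's API.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import List


@dataclass
class Measurement:
    timestamp: float            # seconds
    source: str                 # which module produced the reading
    data: list = field(default_factory=list)  # raw reading (image features, IMU rates, ...)


class SensorModule(ABC):
    """Common interface so vision and inertial modules are interchangeable."""

    @abstractmethod
    def poll(self) -> Measurement:
        ...


class VisionModule(SensorModule):
    def poll(self) -> Measurement:
        # A real module would detect known 3D scene structure in the current frame.
        return Measurement(timestamp=0.0, source="camera")


class InertialModule(SensorModule):
    def poll(self) -> Measurement:
        # A real module would read angular rates and accelerations from the IMU.
        return Measurement(timestamp=0.0, source="imu")


class HybridTracker:
    """Collects measurements from all registered modules and hands them to a fusion stage."""

    def __init__(self, modules: List[SensorModule]):
        self.modules = modules

    def step(self) -> list:
        measurements = [m.poll() for m in self.modules]
        return self.fuse(measurements)

    def fuse(self, measurements: list) -> list:
        # Placeholder for the actual sensor-fusion filter (e.g. an EKF).
        return measurements


tracker = HybridTracker([VisionModule(), InertialModule()])
print(tracker.step())
```

Because every module implements the same poll() interface, new sensors or replacement algorithms can be tested by registering a different module, which is the kind of flexibility the abstract argues a hybrid tracker needs.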

Cited by 14 publications (8 citation statements)
References 18 publications
“…Also in Eino et al [15], where inertial data from an IMU is fused with the velocity estimation from a vision algorithm, no details about the scale problem are reported. Ribo et al [16] proposed an EKF to fuse vision and inertial data to estimate the 6DoF attitude. There and in [10, 17-19] the authors use a priori knowledge to overcome the scale problem.…”
Section: Related Work
confidence: 99%
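The EKF fusion cited here estimates the full 6DoF attitude from vision and inertial data. The snippet below is only a much-reduced illustration of the underlying predict/correct idea on a single attitude angle, with made-up noise values and variable names; it is not Ribo et al.'s filter.

```python
# Reduced illustration of vision/inertial fusion with a Kalman filter on one angle.
# Sample period, noise values, and measurements below are assumptions for the example.

dt = 0.01          # IMU sample period [s]
q_gyro = 1e-4      # process-noise variance added when integrating the gyro
r_vision = 1e-2    # variance of the vision-derived angle measurement

theta, p = 0.0, 1.0  # angle estimate [rad] and its variance


def predict(theta, p, gyro_rate):
    """Prediction step: propagate the angle with the measured gyro rate."""
    return theta + gyro_rate * dt, p + q_gyro


def update(theta, p, theta_vision):
    """Correction step: blend in the vision-based angle measurement."""
    k = p / (p + r_vision)                      # Kalman gain
    return theta + k * (theta_vision - theta), (1.0 - k) * p


# One gyro-driven prediction followed by one vision-based correction.
theta, p = predict(theta, p, gyro_rate=0.5)
theta, p = update(theta, p, theta_vision=0.004)
print(theta, p)
```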
“…Ribo et al [8,9] present a wearable AR system that is mounted on a helmet. It consists of a real-time 3D visualization subsystem (composed of a stereo see-through HMD) and a real-time tracking subsystem (composed of a camera and an IMU).…”
Section: Previous Work On Hybrid Tracking
confidence: 99%
“…The 3D pose is computed from artificial landmarks [7] as depicted in figure 3(a). To avoid deficits in visual tracking of the landmark, an inertial tracker located at the top of the AR gear aids the tracking process [23]. By means of this hybrid tracking approach, the precise position and orientation of the user can be computed, yielding only a very small relative mean distance error of 0.5%.…”
Section: 3D Vision Sub-system
confidence: 99%
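Computing pose from known artificial landmarks, as described in this statement, is in its general form a PnP problem: known 3D landmark coordinates and their detected 2D image locations determine the camera pose. The sketch below shows that generic step with OpenCV's solvePnP on made-up landmark coordinates, detections, and calibration; the cited system's actual landmark design and inertial aiding are not reproduced here.

```python
import numpy as np
import cv2  # OpenCV

# All coordinates and calibration values below are made up for illustration.
object_points = np.array([      # known 3D landmark positions [m]
    [0.0, 0.0, 0.0],
    [0.1, 0.0, 0.0],
    [0.1, 0.1, 0.0],
    [0.0, 0.1, 0.0],
], dtype=np.float64)

image_points = np.array([       # where the landmarks were detected in the image [px]
    [320.0, 240.0],
    [400.0, 240.0],
    [400.0, 320.0],
    [320.0, 320.0],
], dtype=np.float64)

camera_matrix = np.array([      # intrinsic calibration (fx, fy, cx, cy)
    [800.0,   0.0, 320.0],
    [  0.0, 800.0, 240.0],
    [  0.0,   0.0,   1.0],
])
dist_coeffs = np.zeros(5)       # assume an undistorted image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
if ok:
    print("rotation (Rodrigues vector):", rvec.ravel())
    print("translation [m]:", tvec.ravel())
```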