IEEE/ION Position, Location and Navigation Symposium 2010
DOI: 10.1109/plans.2010.5507322

Integration of GPS and vision measurements for navigation in GPS challenged environments

Cited by 37 publications (36 citation statements)
References 2 publications

“…While cameras are low-priced, the precision of relative positioning from their images is high compared with other navigation sensors. Furthermore, since image sequences are easy to obtain while driving, hybrid approaches that use images for positioning also appear promising (Soloviev and Venable, 2010; Kim et al., 2011; Yoo et al., 2005; Kim et al., 2004; Goldshtein et al., 2007). Many studies use images from a single camera or multiple cameras together with GPS data to enhance positioning accuracy.…”
Section: Introduction (mentioning)
confidence: 99%

“…They fused the IMU and camera in a tightly coupled manner with an error-state extended Kalman filter. Soloviev and Venable (2010) investigated the feasibility of combining GPS with a single video camera for navigation in GPS-challenged environments, such as tunnels or areas surrounded by skyscrapers, where GPS signal blockage occurs. They demonstrated the performance of the method using simulated data only.…”
Section: Introduction (mentioning)
confidence: 99%

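The error-state formulation mentioned in this statement keeps a small vector of deviations from a nominal trajectory and corrects it with each sensor residual. Below is a minimal sketch of the predict/update cycle in Python; the state layout (position and velocity errors), the motion model, and all noise values are assumptions for illustration, not the cited authors' implementation.

```python
import numpy as np

# Minimal error-state Kalman predict/update sketch (illustrative only;
# the 6-state layout and all noise values below are assumptions).
# Error state dx = [position error (3), velocity error (3)].

def predict(dx, P, F, Q):
    """Propagate the error state and its covariance one step."""
    return F @ dx, F @ P @ F.T + Q

def update(dx, P, z, H, R):
    """Fuse one measurement residual z (e.g., a camera or GPS residual)."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    dx = dx + K @ (z - H @ dx)          # corrected error state
    P = (np.eye(len(dx)) - K @ H) @ P   # covariance update
    return dx, P

# Constant-velocity error propagation with dt = 0.1 s (assumed).
dt = 0.1
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)
Q = 1e-4 * np.eye(6)
dx, P = np.zeros(6), np.eye(6)
dx, P = predict(dx, P, F, Q)

# A hypothetical position-only residual (metres) from a vision/GPS fix.
H = np.hstack([np.eye(3), np.zeros((3, 3))])
z = np.array([0.05, -0.02, 0.01])
dx, P = update(dx, P, z, H, 0.01 * np.eye(3))
print(dx[:3])  # estimated position correction
```

In a tightly coupled design, residuals from both sensors pass through the same update step, so each measurement refines the shared error state rather than a separate per-sensor solution.
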
“…The fusion is performed through two equations defining the relationship between the device's range to features and its pose change, and one equation expressing the relationship between the carrier phase and position changes. In [69], these equations are resolved using the least mean square estimate. The experimental results carried out in [69] show that the GPS … The fusion of GNSS and monocular SLAM within a bundle adjustment (BA) framework is addressed in [70], through a BA with an inequality constraint (IBA). The reprojection error is computed based on both the camera and GPS poses.…”
Section: GNSS and Camera Fusion (mentioning)
confidence: 99%

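As a rough illustration of the least-squares step this statement describes, the sketch below stacks linear equations that relate a receiver's position change to delta carrier-phase measurements along satellite line-of-sight vectors and to changes in range to tracked image features, then solves the stacked system in the least-squares sense. All geometry, measurements, and noise levels are hypothetical, not values from [69].

```python
import numpy as np

# Hypothetical unit line-of-sight vectors to three satellites and unit
# directions to two tracked image features (invented for illustration).
los = np.array([[0.3, 0.4, 0.866],
                [-0.5, 0.5, 0.707],
                [0.8, -0.2, 0.566]])
feat_dirs = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])

dp_true = np.array([0.12, -0.05, 0.02])  # true position change, metres

# Simulated measurements: projections of the position change onto each
# direction, with small additive noise.
rng = np.random.default_rng(0)
y_phase = los @ dp_true + 0.002 * rng.standard_normal(3)
y_range = feat_dirs @ dp_true + 0.01 * rng.standard_normal(2)

# Stack both measurement types and solve min ||A dp - y||^2.
A = np.vstack([los, feat_dirs])
y = np.concatenate([y_phase, y_range])
dp_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(dp_hat)  # least-squares estimate of the position change
```

Stacking both measurement types into one system is what makes the fusion tight: every carrier-phase and feature-range equation constrains the same position change.
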
“…Much prior work in visual SLAM has focused either on eliminating the scale ambiguity, through the inclusion of inertial measurements [1], [2] or GPS carrier-phase measurements [3], or on employing previously mapped, visually recognizable markers, referred to as fiduciary markers [4]. In contrast, there has been little prior work on anchoring the local navigation solution produced by visual SLAM to a global reference frame without the use of an a priori map of the environment, even though the no-prior-map technique is preferred or required for many applications.…”
Section: Introduction (mentioning)
confidence: 99%

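To make the scale-ambiguity point concrete: once a metric trajectory is available from GPS, a single unknown scale factor can be fit between up-to-scale visual-odometry displacements and GPS-derived displacements. The closed-form least-squares sketch below assumes the two frames are already rotationally aligned and uses synthetic data; it is a generic device, not the method of any one cited paper.

```python
import numpy as np

# Synthetic demo: recover the monocular scale factor that maps
# up-to-scale visual-odometry displacements onto GPS displacements.
rng = np.random.default_rng(1)
d_vo = rng.standard_normal((20, 3))              # up-to-scale VO steps
true_scale = 3.7                                 # assumed ground truth
d_gps = true_scale * d_vo + 0.05 * rng.standard_normal((20, 3))

# Closed-form least-squares scale: s = <d_gps, d_vo> / <d_vo, d_vo>.
s_hat = np.sum(d_gps * d_vo) / np.sum(d_vo * d_vo)
print(s_hat)  # close to 3.7
```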