2011 IEEE Workshop on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv.2011.5711479

Multisensory embedded pose estimation

Abstract: We present a multisensory method for estimating the transformation of a mobile phone between two images taken from its camera. Pose estimation is a necessary step for applications such as 3D reconstruction and panorama construction, but detecting and matching robust features can be computationally expensive. In this paper we propose a method for combining the inertial sensors (accelerometers and gyroscopes) of a mobile phone with its camera to provide a fast and accurate pose estimation. We use the inertial bas…
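Since the abstract is cut off above, the following is only a generic sketch of the idea the excerpt describes: integrating gyroscope readings into a rotation estimate and using it as a prior for the image-to-image warp. The function names and the small-angle integration scheme are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch: derive a rotation prior from gyroscope samples and
# turn it into a predicted image warp. Not the paper's method; all names
# here are assumptions for illustration.
import numpy as np

def rotation_from_gyro(omega_samples, dt):
    """Integrate gyroscope angular-rate samples (rad/s), each held for dt
    seconds, into one rotation matrix via small-angle composition."""
    R = np.eye(3)
    for omega in omega_samples:
        theta = np.asarray(omega, dtype=float) * dt
        angle = np.linalg.norm(theta)
        if angle < 1e-12:
            continue
        k = theta / angle
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        # Rodrigues' formula for the incremental rotation.
        dR = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
        R = R @ dR
    return R

def predicted_homography(K_cam, R_gyro):
    """For rotation-only camera motion, the frame-to-frame warp is the
    infinite homography H = K R K^-1, usable as a matching prior."""
    return K_cam @ R_gyro @ np.linalg.inv(K_cam)
```

For rotation-dominant motion, as in panorama capture, such a prior lets feature matching search only small windows around predicted locations, which is plausibly where the speed advantage over exhaustive matching comes from.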

Cited by 10 publications (8 citation statements)
References 19 publications (11 reference statements)
“…The methods described in [13,28,57,58] first projectively rectify the whole image and then detect invariant features on the normalized result, while the DARP method does the opposite. In addition, [57] is designed for offline 3D reconstruction, [13,28,58] target only planar scenes and [13,28] require an inertial sensor.…”
Section: Textured Object Detection
confidence: 99%
“…working independently at separate time slots, or prioritizing computer vision and relying on INS when the cameras fail to deliver information) are described in [19] and references therein. In this sense, recent examples include navigation systems integrating cameras, gyroscopes and accelerometers, combining the data with an extended Kalman filter [20], an unscented Kalman filter [21], or the employment of Bayesian segmentation to detect a moving person (with "Mixtures of Gaussians" for background modeling), and a particle filter to track the person in the scene [22].…”
Section: B. Integration of Computer Vision with Accelerometry
confidence: 99%
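The quoted passage names extended and unscented Kalman filters as the fusion machinery for camera/IMU navigation. As a point of reference only, here is a minimal extended-Kalman-filter skeleton in the predict/update form such systems use; the state, models, and noise terms are abstract placeholders assumed for this sketch, not details taken from [20] or [21].

```python
# Minimal EKF skeleton for sensor fusion in predict/update form.
# State, models, and noise terms are assumptions of this sketch.
import numpy as np

class EKF:
    def __init__(self, x0, P0):
        self.x = np.asarray(x0, dtype=float)  # state estimate
        self.P = np.asarray(P0, dtype=float)  # state covariance

    def predict(self, f, F, Q):
        """Propagate through the motion model f (Jacobian F, process
        noise Q), e.g. driven by gyroscope/accelerometer readings."""
        self.x = f(self.x)
        self.P = F @ self.P @ F.T + Q

    def update(self, z, h, H, R):
        """Correct with a vision measurement z, measurement model h
        (Jacobian H), and measurement noise R."""
        y = np.asarray(z, dtype=float) - h(self.x)  # innovation
        S = H @ self.P @ H.T + R                    # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(self.P.shape[0]) - K @ H) @ self.P
```

In a camera/IMU system, predict() would be driven by inertial readings through the motion model, and update() by image measurements such as tracked feature positions.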
“…The methods described in (Eyjolfsdottir and Turk, 2011), (Kurz and Benhimane, 2011), (Wu et al, 2008) and (Yang et al, 2010) first projectively rectify the whole image and then detect invariant features on the normalized result, while the DARP method does the opposite. In addition, (Wu et al, 2008) is designed for offline 3D reconstruction, (Eyjolfsdottir and Turk, 2011), (Kurz and Benhimane, 2011) and (Yang et al, 2010) target only planar scenes and (Eyjolfsdottir and Turk, 2011) and (Kurz and Benhimane, 2011) require an inertial sensor. Concurrent with this research (Marcon et al, 2012) used an RGB-D sensor to perform patch rectification using PCA, followed by 2D Fourier-Mellin Transform for description.…”
Section: Introduction
confidence: 99%
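For context on the "rectify the whole image, then detect invariant features" pipeline these passages attribute to the cited methods, the sketch below shows that order of operations given a homography. Where the homography comes from (e.g., an inertial attitude estimate over a planar scene) is outside this snippet and assumed; the code itself is a generic OpenCV illustration, not any cited paper's implementation.

```python
# Sketch of rectify-then-detect: warp the full image with a given
# homography H_rect, then run a feature detector on the normalized result.
import cv2

def rectify_then_detect(image, H_rect):
    """Warp the image toward a fronto-parallel view with H_rect (assumed
    given), then detect and describe features on the rectified image."""
    h, w = image.shape[:2]
    rectified = cv2.warpPerspective(image, H_rect, (w, h))
    orb = cv2.ORB_create()  # any detector/descriptor could stand in here
    keypoints, descriptors = orb.detectAndCompute(rectified, None)
    return rectified, keypoints, descriptors
```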