2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
DOI: 10.1109/cvprw.2012.6239189

Wearable omnidirectional vision system for personal localization and guidance

Abstract: Autonomous navigation and recognition of the environment are fundamental abilities for people, extensively studied in the computer vision and robotics fields. The expansion of low-cost wearable sensing provides interesting opportunities for assistance systems that augment people's navigation and recognition capabilities. This work presents our wearable omnidirectional vision system and a novel two-phase localization approach running on it. It runs state-of-the-art real-time visual odometry adapted to catadioptric images au…
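
The abstract sketches a pipeline of coarse localization followed by metric tracking. As a rough illustration of such a two-phase scheme (not the paper's actual implementation: the thumbnail descriptor, the map structure, and all names below are our assumptions), phase one can be pictured as nearest-neighbor retrieval over global image descriptors, with metric visual odometry refining from the retrieved place:

```python
# Hedged sketch of a two-phase localizer: phase 1 retrieves the closest
# reference place from a topological map via global image descriptors;
# phase 2 would refine metrically with visual odometry. The descriptor
# (a tiny downsampled grayscale thumbnail) and all names are illustrative
# assumptions, not the paper's implementation.
import numpy as np

def global_descriptor(image: np.ndarray, size: int = 8) -> np.ndarray:
    """Downsample to a size x size thumbnail and L2-normalize."""
    h, w = image.shape
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    thumb = image[np.ix_(ys, xs)].astype(np.float64).ravel()
    norm = np.linalg.norm(thumb)
    return thumb / norm if norm > 0 else thumb

def topological_localize(query_desc, map_descs):
    """Phase 1: nearest reference place by descriptor distance."""
    dists = [np.linalg.norm(query_desc - d) for d in map_descs]
    best = int(np.argmin(dists))
    return best, dists[best]

# Toy usage with random images standing in for omnidirectional frames.
rng = np.random.default_rng(0)
reference_images = [rng.integers(0, 256, (64, 64)) for _ in range(5)]
map_descs = [global_descriptor(img) for img in reference_images]
query = reference_images[3] + rng.integers(0, 10, (64, 64))  # noisy revisit
place, dist = topological_localize(global_descriptor(query), map_descs)
print(f"phase 1: place {place} (distance {dist:.3f})")
# Phase 2 (not shown) would track the camera metrically from this place
# with visual odometry adapted to the catadioptric geometry.
```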

Cited by 24 publications (11 citation statements, 2012–2024)
References 29 publications (33 reference statements)
“…Whereas, in [1], the common scene observed by the wearer and a surveillance camera has been used to identify the wearer. Other works compute the location of the wearer directly [7,15] or indirectly (using gaze, social interactions, etc.) [16,17], which is then used to identify the wearer.…”
Section: Related Work (mentioning, confidence: 99%)
“…Their method is based on linearisation of a catadioptric camera model and they perform tracking with omnidirectional patch descriptors that are rotation and scale invariant. Murillo et al [27] apply the same approach to data that is similar to that in our stabilisation application, namely first person video captured by a wearable omnidirectional camera. Torii et al [36] build 3D city models from Google Street View imagery.…”
Section: Related Work (mentioning, confidence: 99%)
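
The rotation invariance mentioned in this quote can be approximated on catadioptric images by orienting each keypoint along the radial direction from the mirror center before computing a standard descriptor, so that an in-plane camera rotation turns the patch and its reference direction together. The OpenCV-based sketch below is our own hedged approximation of that idea, not the cited method:

```python
# Hedged approximation of rotation-invariant omnidirectional descriptors:
# orient each keypoint along the radial direction from the catadioptric
# image center before describing it, so an in-plane camera rotation moves
# the patch and its reference orientation together. This mimics the idea
# in the quote; it is not the cited method's actual implementation.
import math
import cv2
import numpy as np

def radial_keypoints(gray: np.ndarray, center: tuple) -> list:
    """Detect keypoints, then replace each angle with its radial bearing."""
    detector = cv2.ORB_create(nfeatures=500)
    kps = list(detector.detect(gray, None))
    cx, cy = center
    for kp in kps:
        dx, dy = kp.pt[0] - cx, kp.pt[1] - cy
        kp.angle = math.degrees(math.atan2(dy, dx)) % 360.0
    return kps

def describe(gray: np.ndarray, center: tuple):
    kps = radial_keypoints(gray, center)
    # ORB uses the keypoint angle to rotate its sampling pattern, so the
    # descriptor is computed in the radially-aligned frame.
    orb = cv2.ORB_create(nfeatures=500)
    return orb.compute(gray, kps)

# Usage on a synthetic frame; a real system would pass catadioptric images
# and the calibrated mirror center instead of the image midpoint.
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
kps, descs = describe(frame, (320.0, 240.0))
print(f"{len(kps)} keypoints described")
```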
“…6) corrects the scale drift of the raw visual odometry. In the last experiment we test our approach in an indoor environment, with normal gait not set by a metronome [16]. Fig.…”
Section: Frame (mentioning, confidence: 99%)
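
The scale-drift correction referred to here exploits walking gait: monocular visual odometry recovers translation only up to scale, and a known step cadence pins that scale down. Below is a minimal sketch of the idea, with the step detection and the 0.7 m nominal step length as illustrative assumptions rather than the paper's values:

```python
# Hedged sketch of gait-based scale correction for monocular visual
# odometry: rescale each inter-step trajectory segment so its length
# equals a nominal human step length. Step indices and the 0.7 m step
# length are assumptions for illustration.
import numpy as np

def correct_scale(positions: np.ndarray, step_frames: list,
                  step_length: float = 0.7) -> np.ndarray:
    """Rescale VO positions segment-by-segment between detected steps."""
    corrected = positions.astype(np.float64).copy()
    for a, b in zip(step_frames[:-1], step_frames[1:]):
        seg = positions[a:b + 1] - positions[a]
        travelled = np.linalg.norm(positions[b] - positions[a])
        if travelled > 1e-9:
            scale = step_length / travelled
            corrected[a:b + 1] = corrected[a] + seg * scale
            # Shift the remaining trajectory so segments stay connected.
            corrected[b + 1:] = positions[b + 1:] - positions[b] + corrected[b]
    return corrected

# Toy usage: a straight-line walk whose VO scale drifts over time.
t = np.linspace(0.0, 10.0, 101)
drift = 1.0 + 0.05 * t                      # growing scale error
positions = np.stack([t * drift, np.zeros_like(t)], axis=1)
steps = list(range(0, 101, 10))             # a detected step every 10 frames
fixed = correct_scale(positions, steps)
print("per-step distances:",
      np.linalg.norm(np.diff(fixed[steps], axis=0), axis=1))  # all ~0.7 m
```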