2019 International Conference on Robotics and Automation (ICRA) 2019
DOI: 10.1109/icra.2019.8794255
Event-based, Direct Camera Tracking from a Photometric 3D Map using Nonlinear Optimization

Abstract: Event cameras are novel bio-inspired vision sensors that output pixel-level intensity changes, called "events", instead of traditional video images. These asynchronous sensors naturally respond to motion in the scene with very low latency (in the order of microseconds) and have a very high dynamic range. These features, along with a very low power consumption, make event cameras an ideal sensor for fast robot localization and wearable applications, such as AR/VR and gaming. Considering these applications, we p…

Cited by 68 publications (51 citation statements)
References 24 publications
“…Note that DVS events are triggered by a change in brightness magnitude (2), not by the brightness derivative (3) exceeding a threshold. The above interpretation may be taken into account to design physically-grounded event-based algorithms, such as [7], [23], [24], [28], [62], [63], [64], [65], as opposed to algorithms that simply process events as a collection of points with vague photometric meaning. Events are Caused by Moving Edges: Assuming constant illumination, linearizing (2) and using the brightness constancy assumption, one can show that events are caused by moving edges.…”
Section: ΔL(x_k, t_k)
confidence: 99%
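The excerpt above contrasts the event-generation condition (its Eq. (2)) with a simple derivative threshold (its Eq. (3)). As a sketch of the reasoning, using the standard event-camera model rather than the excerpt's exact notation (symbols ΔL, C, p_k follow the usual convention), the condition and its linearization read:

```latex
% Event generation (the excerpt's Eq. (2)): an event fires at pixel x_k at time t_k
% when the log-brightness change since the last event there reaches the contrast
% threshold C, with polarity p_k \in \{+1, -1\}:
\Delta L(\mathbf{x}_k, t_k) \doteq L(\mathbf{x}_k, t_k) - L(\mathbf{x}_k, t_k - \Delta t_k) = p_k C.
% Linearizing over a short \Delta t_k and substituting the brightness-constancy
% assumption \partial_t L + \nabla L \cdot \mathbf{v} = 0:
\Delta L(\mathbf{x}_k, t_k) \approx \frac{\partial L}{\partial t}(\mathbf{x}_k, t_k)\,\Delta t_k
  = -\nabla L(\mathbf{x}_k, t_k) \cdot \mathbf{v}(\mathbf{x}_k)\,\Delta t_k.
% Hence an event requires a nonzero spatial gradient (an edge) with a velocity
% component across it: events are caused by moving edges.
```

This makes the excerpt's distinction concrete: the trigger is an accumulated brightness change of magnitude C, and the moving-edge interpretation follows only after linearization.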
“…Many event-based vision datasets have been published since the introduction of the DVS [2]. Most of these datasets were recorded using a DAVIS [51] event camera or similar and have a particular use-case in mind, such as image reconstruction [24], recognition [37,43,52], optical flow [21,42,53], driving/SLAM [26,29,41]. The dataset perhaps most similar to ours is the Event-Camera Dataset and Simulator [28].…”
Section: Related Work
confidence: 99%
“…Since their introduction, event cameras have spawned a flurry of research. They have been used in feature detection and tracking [3][4][5][6], depth estimation [7][8][9][10], stereo [11][12][13][14], optical flow [15][16][17][18], image reconstruction [19][20][21][22][23][24][25], localization [26][27][28][29], SLAM [30][31][32], visual-inertial odometry [33][34][35][36], pattern recognition [37][38][39][40], and more. In response to the growing needs of the community, several important event-based vision datasets have been released, directed at popular topics such as SLAM [28], optical flow [41,42] and recognition [37,43].…”
Section: Introduction
confidence: 99%
“…From the perspective of the type of motion, constrained motions, such as pure rotation [35], [36], [8], [37] or planar motion [38], [39] have been studied before investigating the most general case of arbitrary 6-DoF motion. Regarding the type of scenes, solutions for artificial patterns, such as high-contrast textures and/or structures (line-based or planar maps) [38], [40], [6], have been proposed before solving more difficult cases: natural scenes with arbitrary 3D structure and photometric variations [36], [7], [9].…”
Section: B. Event-based Camera Pose Estimation
confidence: 99%
“…From the methodology point of view, probabilistic filters [38], [36], [7] provide event-by-event tracking updates, thus achieving minimal latency (µs), whereas frame-based techniques (often non-linear optimization) trade off latency for more stable and accurate results [8], [9].
C. Event-based VO and SLAM a) Monocular: Two methods stand out as solving the problem of monocular event-based VO for 6-DoF motions in natural 3D scenes.…”
Section: B. Event-based Camera Pose Estimation
confidence: 99%