2023
DOI: 10.1002/aisy.202300027
T‐ESVO: Improved Event‐Based Stereo Visual Odometry via Adaptive Time‐Surface and Truncated Signed Distance Function

Abstract: The emerging event cameras have the potential to be an excellent complement for standard cameras within various visual tasks, especially in illumination‐changing environments or situations requiring high temporal resolution. Herein, an event‐based stereo visual odometry (VO) system via adaptive time‐surface (TS) and truncated signed distance function (TSDF), namely T‐ESVO, is proposed. The system consists of three carefully designed components, including the event processing unit, the mapping unit, and the t…
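The abstract names an adaptive time‐surface (TS) as the event representation, but the adaptive scheme itself is not detailed in this excerpt. The sketch below builds a conventional exponential‐decay time surface from an event stream to illustrate the general idea only; the decay constant `tau_ms`, the per‐event array layout (`xs`, `ys`, `ts` sorted by time), and the resolution are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def time_surface(xs, ys, ts, t_ref, height, width, tau_ms=30.0):
    """Render a (non-adaptive) exponential-decay time surface at time t_ref.

    xs, ys, ts are per-event pixel coordinates and timestamps (ms), sorted by
    time. Each pixel's value decays exponentially with the time elapsed since
    the most recent event observed at that pixel.
    """
    last_ts = np.full((height, width), -np.inf)   # last event time per pixel
    last_ts[ys, xs] = ts                          # later events overwrite earlier ones
    surface = np.exp(-(t_ref - last_ts) / tau_ms) # recent events -> values near 1
    surface[np.isneginf(last_ts)] = 0.0           # pixels that never fired stay dark
    return surface

# Tiny synthetic usage: three events on a 16x16 sensor, rendered at t = 60 ms.
xs = np.array([5, 5, 9])
ys = np.array([3, 3, 7])
ts = np.array([10.0, 40.0, 55.0])
surf = time_surface(xs, ys, ts, t_ref=60.0, height=16, width=16)
```

An adaptive variant, as the title suggests, would tune the decay behavior to the local event rate instead of using a fixed `tau_ms`; that design choice is specific to the paper and not reproduced here.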

Citations: Cited by 2 publications (1 citation statement)
References: 50 publications (123 reference statements)
“…Camera attitude estimation: Colonnier et al. (2021) equipped a UAV with a DVS, accumulated the pulse flow into images according to the event frequency, and tracked linear features to estimate the camera attitude. Liu et al. (2023b) used a monocular DAVIS camera to jointly estimate the 3D scene structure, 6-DOF camera pose, and scene light intensity for the first time. Murai et al. (2023) used APS images from the DAVIS240 to detect corners and DVS pulse flow for feature tracking to estimate the camera pose.…”
Section: Position Estimation and Visual Odometry
Confidence: 99%
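The statement above describes accumulating DVS "pulse flow" into images before running a frame‐based feature tracker (Colonnier et al., 2021). As a rough illustration only, the sketch below bins events from a time window into a per‐pixel count image; the window bounds and sensor resolution are placeholder assumptions, not the cited authors' settings.

```python
import numpy as np

def accumulate_events(xs, ys, ts, t_start, t_end, height, width):
    """Count events per pixel whose timestamps fall in [t_start, t_end)."""
    mask = (ts >= t_start) & (ts < t_end)
    frame = np.zeros((height, width), dtype=np.int32)
    np.add.at(frame, (ys[mask], xs[mask]), 1)  # unbuffered add handles repeated pixel hits
    return frame
```

A count image like this (or an edge map derived from it) is what a conventional line or corner tracker would then consume in place of an intensity frame.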