2022
DOI: 10.3390/s22030773

Accuracy and Speed Improvement of Event Camera Motion Estimation Using a Bird’s-Eye View Transformation

Abstract: Event cameras are bio-inspired sensors that have a high dynamic range and temporal resolution. This property enables motion estimation from textures with repeating patterns, which is difficult to achieve with RGB cameras. Therefore, event-camera motion estimation is expected to be applied to vehicle position estimation. An existing method, called contrast maximization, is one of the methods that can be used for event-camera motion estimation by capturing road surfaces. However, contrast maximization tends…

Cited by 11 publications (9 citation statements)
References 32 publications
“…At first look, it may appear that event collapse occurs when the number of DOFs in the warp becomes large enough, i.e., for complex motions. Event collapse has been reported in homographic motions (8 DOFs) [27,31] and in dense optical flow estimation [16], where an artificial neural network (ANN) predicts a flow field whose number of DOFs scales with the number of pixels, whereas it does not occur in feature flow (2 DOFs) or rotational motion flow (3 DOFs). However, a more careful analysis reveals that this is not the entire story, because event collapse may occur even in the case of 1 DOF, as we show.…”
Section: Related Work
confidence: 99%
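The contrast-maximization framework that this statement discusses can be sketched in a few lines: warp the events according to a candidate motion parameter, accumulate them into an image, and score the candidate by the image variance (contrast). The minimal sketch below uses a hypothetical 1-DOF horizontal-velocity model and synthetic events; all names and the event model are illustrative assumptions, not the cited papers' implementation.

```python
import numpy as np

def contrast(theta, events, shape=(64, 64)):
    """Warp events by a 1-DOF horizontal-velocity model x' = x - theta*t,
    accumulate them into an image, and return the image variance
    (the contrast objective to be maximized)."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    xw = np.round(x - theta * t).astype(int)
    yw = np.round(y).astype(int)
    # keep only events that warp inside the sensor plane
    ok = (xw >= 0) & (xw < shape[1]) & (yw >= 0) & (yw < shape[0])
    img = np.zeros(shape)
    np.add.at(img, (yw[ok], xw[ok]), 1.0)  # unbuffered accumulation
    return img.var()

# synthetic events from an edge moving at 5 px/s: columns are (x, y, t)
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, 500)
y = rng.integers(0, 64, 500)
events = np.column_stack([10 + 5 * t, y, t])

# a grid search over theta: the maximizer lands near the true velocity 5
best = max(np.linspace(0, 10, 101), key=lambda th: contrast(th, events))
```

With a well-posed 1-DOF warp like this one, the variance peaks at the true motion; the cited works show that even such low-DOF settings can still exhibit collapse for other warp models (e.g., zoom-like warps), which is the point of the statement above.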
“…How did previous works tackle event collapse? Previous works have tackled the issue in several ways, such as: (i) initializing the parameters sufficiently close to the desired solution (in the basin of attraction of the local optimum) [12]; (ii) reformulating the problem, changing the parameter space to reduce the number of DOFs and increase the well-posedness of the problem [14,31]; (iii) providing additional data, such as depth [27], thus changing the problem from motion estimation given only events to motion estimation given events and additional sensor data; (iv) whitening the warped events before computing the objective [27]; and (v) redesigning the objective function and possibly adding a strong classical regularizer (e.g., Charbonnier loss) [10,16]. Many of the above mitigation strategies are task-specific, because it may not always be possible to consider additional data or reparametrize the estimation problem.…”
Section: Related Work
confidence: 99%
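Mitigation (v) above mentions the Charbonnier loss as a classical regularizer. As a hedged sketch of how such a penalty is typically attached to a flow-estimation objective (the function names and the gradient-penalty formulation are illustrative assumptions, not the cited papers' exact losses):

```python
import numpy as np

def charbonnier(x, eps=1e-3):
    """Charbonnier penalty: a smooth, robust approximation of |x|."""
    return np.sqrt(x * x + eps * eps)

def regularized_loss(flow, data_term, lam=0.1):
    """Hypothetical total loss: a data term plus Charbonnier-penalized
    spatial gradients of a flow field of shape (H, W, 2)."""
    dx = np.diff(flow, axis=1)   # horizontal flow differences
    dy = np.diff(flow, axis=0)   # vertical flow differences
    smooth = charbonnier(dx).sum() + charbonnier(dy).sum()
    return data_term + lam * smooth

# usage: a piecewise-constant flow field incurs a small smoothness penalty
flow = np.zeros((8, 8, 2))
flow[:, 4:, 0] = 1.0
loss = regularized_loss(flow, data_term=0.0)
```

The penalty behaves like an L1 norm for large residuals (preserving flow discontinuities) and like an L2 norm near zero (keeping the objective differentiable), which is why it is a common choice when a collapsing data term needs a stabilizing prior.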
“…Due to its unique operating principle, the event camera has advantages that traditional cameras lack, such as low latency, high dynamic range (HDR), high temporal resolution, and low power consumption. Therefore, the event camera is suitable for extreme situations such as high-speed motion and large changes in lighting conditions, making it a research hotspot in robotics and computer vision [4,5,6,7,8].…”
Section: Introduction
confidence: 99%
“…This can be extremely useful in video applications such as frame-rate up-conversion, global motion estimation, and video compression. A general optical flow technique depends on the intensity of material points remaining constant across an image series (Ozawa et al., 2022; Sun et al., 2018) — the intensity-invariance constraint — but speckle noise and tissue deformation often complicate motion estimation (Jo et al., 2018). Another approach is the block-matching (BM) technique, which estimates the motion vector (MV) by evaluating a similarity criterion — such as the mean absolute error (MAE), mean squared error (MSE), or sum of absolute differences (SAD) — between corresponding blocks of two sequential images (Tian, 2021; Wang et al., 2021; Wu et al., 2021).…”
Section: Introduction
confidence: 99%
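The block-matching technique referenced in this statement is straightforward to sketch: for a block in the current frame, exhaustively search a window in the previous frame and keep the displacement with the lowest SAD. The function below is a minimal illustrative implementation (the block size, search range, and synthetic frames are assumptions for demonstration, not taken from the cited works):

```python
import numpy as np

def block_match(prev, curr, top, left, bsize=8, search=4):
    """Exhaustive block matching: return the motion vector (dy, dx) that
    minimizes the sum of absolute differences (SAD) between the block at
    (top, left) in `curr` and candidate blocks in `prev`."""
    block = curr[top:top + bsize, left:left + bsize]
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r, c = top + dy, left + dx
            # skip candidates that fall outside the previous frame
            if r < 0 or c < 0 or r + bsize > prev.shape[0] or c + bsize > prev.shape[1]:
                continue
            sad = np.abs(block - prev[r:r + bsize, c:c + bsize]).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

# synthetic frame pair: a bright square shifted by (+2, +3) pixels
prev = np.zeros((32, 32)); prev[10:18, 10:18] = 1.0
curr = np.zeros((32, 32)); curr[12:20, 13:21] = 1.0

mv = block_match(prev, curr, top=12, left=13)  # -> (-2, -3)
```

The returned vector points from the block's position in the current frame back to its origin in the previous frame; swapping MAE or MSE for SAD only changes the `sad = ...` line, which is why the three criteria are usually discussed together.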