The simultaneous localization and mapping (SLAM) problem for autonomous robots can be greatly improved by using event-based cameras. Compared to conventional frame-based cameras, event-based cameras consume very little power while providing high temporal resolution and a high dynamic range. In this study, we propose a convolutional neural SLAM framework based solely on event data. Event-based cameras generate events only for pixels whose brightness changes; the event stream is therefore rich in motion and edge information. The goal of the proposed framework is to make all estimations using the information encoded in the event data. The proposed solution is a keyframe-based visual SLAM system consisting of three neural networks that estimate the relative camera pose, the log-depth, and features for loop-closure detection. Network architectures and learning curves for the trained networks are presented, and it is shown that the networks learn these tasks successfully. The proposed method has been developed and tested on a new dataset generated with the CARLA simulator. It is shown that the proposed method constitutes a complete SLAM solution and can keep global drift under control through loop-closure estimations. Evaluation metrics for the estimations, an evaluation of the global model, and an analysis of run-time performance are also presented.
INDEX TERMS
Event-based cameras, convolutional neural networks, neural SLAM, visual SLAM.