Object detection is a critical problem for safe interaction between autonomous vehicles and road users. Deep-learning methodologies have enabled object detection approaches with better performance. However, obtaining more characteristics from detected objects in real time remains a challenge: richer information about the environment's objects can improve an autonomous vehicle's capacity to handle different urban situations. This paper proposes a new approach to detect static and dynamic objects in front of an autonomous vehicle. Our approach also extracts further characteristics from the detected objects, such as their position, velocity, and heading. We develop our proposal by fusing the environment interpretations produced by YOLOv3 with a Bayesian filter. To demonstrate its performance, we assess it on a benchmark dataset and on real-world data obtained from an autonomous platform, and compare the results with those of another approach.
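The fusion of per-frame detections with a Bayesian filter can be illustrated by a minimal sketch (hypothetical, not the paper's implementation): a constant-velocity Kalman filter that takes noisy 2-D object positions (e.g. detector box centres projected to the ground plane) and recovers the position, velocity, and heading that a single-frame detector cannot provide. All matrix values below are illustrative assumptions.

```python
import numpy as np

class ConstantVelocityKF:
    """Tracks one object with state [px, py, vx, vy]."""

    def __init__(self, x0, y0, dt=0.1):
        self.x = np.array([x0, y0, 0.0, 0.0])        # initial state
        self.P = np.diag([1.0, 1.0, 10.0, 10.0])     # initial uncertainty
        self.F = np.array([[1, 0, dt, 0],            # constant-velocity model
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],             # we only measure position
                           [0, 1, 0, 0]], dtype=float)
        self.Q = 0.01 * np.eye(4)                    # process noise (assumed)
        self.R = 0.25 * np.eye(2)                    # measurement noise (assumed)

    def step(self, z):
        # Predict with the motion model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the detector measurement z = [px, py].
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

    def heading(self):
        # Heading derived from the estimated velocity vector.
        return np.arctan2(self.x[3], self.x[2])

# Feed detections of an object moving at ~1 m/s along x; the estimated
# vx approaches the true speed and the heading approaches 0 rad.
kf = ConstantVelocityKF(0.0, 0.0, dt=0.1)
for k in range(1, 20):
    state = kf.step([0.1 * k, 0.0])
```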
Localization in an unknown environment is one of the major issues faced by autonomous vehicles. The solution to this problem is delivered by Simultaneous Localization and Mapping techniques, commonly known as SLAM. SLAM is the category of algorithms that allow a robot to map its surroundings while keeping an estimate of its position. Several SLAM methods are widely used today; however, many issues arise when SLAM is applied in a complex and unstructured environment. This article details an implementation of SLAM using an improved Extended Kalman Filter (EKF). The aim is to provide a simple but reliable SLAM technique. The work has been carried out on a Seekur Jr robot, with the mapping realized using a laser scanner. The applied EKF model with its modifications is presented, and the techniques used to observe the environment and to identify the landmarks are outlined. The robustness and consistency of the introduced modifications were justified by experiments.
Testing and validating advanced automotive software is of paramount importance to guarantee safety and quality. Since real-world testing is highly demanding and simulation testing is not fully reliable, we propose a new augmented reality framework that takes advantage of both environments. This new testing methodology is intended as a bridge between Vehicle-in-the-Loop and real-world testing. It enables the whole vehicle and all its software, from perception to control, to be placed easily and safely in realistic test conditions. The framework provides a flexible way to introduce any virtual element into the outputs of the sensors of the vehicle under test. For each sensing modality, the framework requires a real-time augmentation function that preserves real sensor data and enhances them with virtual data. The LiDAR data augmentation function is presented together with its implementation details. Relying on both qualitative and quantitative analysis of experimental results, the representativeness of the test scenes generated by the augmented reality framework is finally demonstrated.
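A hypothetical sketch of such a LiDAR augmentation step (the paper's actual function and interfaces are not reproduced here): for each azimuth bin of a 2-D scan, keep the closest return among real and virtual points, so a virtual obstacle occludes the real background behind it while all other real sensor data are preserved.

```python
import numpy as np

def augment_scan(real_ranges, virtual_points, fov=2 * np.pi):
    """real_ranges: (N,) ranges, one per evenly spaced azimuth bin.
    virtual_points: (M, 2) x,y points sampled on the virtual object,
    expressed in the sensor frame."""
    n = len(real_ranges)
    out = real_ranges.copy()
    az = np.arctan2(virtual_points[:, 1], virtual_points[:, 0]) % fov
    rng = np.hypot(virtual_points[:, 0], virtual_points[:, 1])
    bins = (az / fov * n).astype(int) % n
    for b, r in zip(bins, rng):
        if r < out[b]:            # virtual point occludes the real return
            out[b] = r
    return out

# Example: a 360-bin scan of a wall at 10 m everywhere, augmented with a
# virtual object about 3 m directly ahead of the sensor. Only the bins
# covered by the object change; the rest of the real scan is untouched.
real = np.full(360, 10.0)
virt = np.array([[3.0, 0.0], [3.0, 0.05], [3.0, -0.05]])
aug = augment_scan(real, virt)
```

The nearest-return rule is what preserves physical plausibility: a virtual object farther away than a real surface in the same direction would itself be occluded and leave the scan unchanged.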