Please refer to the published version for the most recent bibliographic citation information. If a published version is known, the repository item page linked to above will contain details on accessing it.
Video compression in automated vehicles and advanced driver assistance systems is of utmost importance for dealing with the challenge of transmitting and processing the vast amount of video data generated every second by the sensor suite needed to support robust situational awareness.
The objective of this paper is to demonstrate that video compression can be optimised for the perception system that will utilise the data. We consider the deployment of deep neural networks to perform object (i.e. vehicle) detection on compressed video camera data extracted from the KITTI MoSeg dataset. Preliminary results indicate that re-training the neural network with M-JPEG compressed videos can improve detection performance on both compressed and uncompressed transmitted data, improving recall and precision by up to 4% with respect to re-training with uncompressed data.
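The recall and precision figures reported above are standard object-detection metrics. As a point of reference only (the paper does not publish its evaluation code, and the IoU threshold and matching scheme here are illustrative assumptions), such metrics are typically computed by matching predicted vehicle boxes to ground-truth boxes via intersection-over-union:

```python
# Illustrative sketch, NOT the paper's evaluation code: per-frame greedy
# IoU matching of predicted boxes to ground-truth boxes, then recall and
# precision over the whole sequence. Boxes are (x1, y1, x2, y2) tuples.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_recall(predictions, ground_truth, iou_thr=0.5):
    """predictions / ground_truth: one list of boxes per frame.
    Each prediction greedily claims the best still-unmatched truth box."""
    tp = fp = fn = 0
    for preds, gts in zip(predictions, ground_truth):
        unmatched = list(gts)
        for p in preds:
            best = max(unmatched, key=lambda g: iou(p, g), default=None)
            if best is not None and iou(p, best) >= iou_thr:
                tp += 1
                unmatched.remove(best)  # one-to-one matching
            else:
                fp += 1
        fn += len(unmatched)  # truth boxes nobody claimed are misses
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

For example, one frame with a well-aligned prediction plus a spurious one yields a true positive and a false positive: `precision_recall([[(0, 0, 10, 10), (50, 50, 60, 60)]], [[(1, 1, 10, 10)]])` gives precision 0.5 and recall 1.0.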
The global Connected and Autonomous Mobility industry is growing at a rapid pace. To ensure the successful adoption of connected automated mobility solutions, their safety, reliability and hence public acceptance are paramount. It is widely known that, in order to demonstrate that L3+ automated systems are safer than human drivers, upwards of several million miles need to be driven. The only way to achieve this number of tests in a timely manner is by using simulations and high-fidelity virtual environments. Two key components of testing an automated system in a synthetic environment are validated sensor models and noise models for each sensor technology. In fact, the sensors are the elements feeding information into the system, enabling it to safely plan its trajectory and navigate. In this paper, we propose an innovative real-time LiDAR sensor model based on beam propagation and a probabilistic rain model, taking into account raindrop distribution and size. The model can seamlessly run in real time, synchronised with the visual rendering, in immersive driving simulators such as the WMG 3xD simulator. The models are developed using Unreal Engine, thereby demonstrating that gaming technology can be merged with the Automated Vehicles (AVs) simulation toolchain for the creation and visualisation of high-fidelity scenarios and for accurate AV testing. This work can be extended to add more sensors and more noise factors, or cyberattacks, in real-time simulations.
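The core mechanism of a rain-aware LiDAR model can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it uses the classic Marshall-Palmer exponential drop-size distribution and an empirical power-law extinction coefficient (the `a`, `b` and noise values are placeholder assumptions) to attenuate the two-way return power and decide probabilistically whether a beam still produces a detection:

```python
import math
import random

# Minimal sketch under stated assumptions, NOT the paper's model.
# Rain attenuates a LiDAR return roughly as exp(-2 * alpha * range):
# the factor 2 accounts for the out-and-back path, and alpha is an
# extinction coefficient tied to the rain rate R (mm/h).

def marshall_palmer(d_mm, rain_rate_mm_h, n0=8000.0):
    """Marshall-Palmer drop-size distribution N(D) = N0 * exp(-lambda * D),
    in drops per m^3 per mm of diameter; lambda = 4.1 * R^-0.21 (1/mm)."""
    lam = 4.1 * rain_rate_mm_h ** -0.21
    return n0 * math.exp(-lam * d_mm)

def extinction_coefficient(rain_rate_mm_h, a=0.01, b=0.6):
    """Empirical power-law extinction alpha = a * R^b (1/m).
    The coefficients a and b here are illustrative placeholders."""
    return a * rain_rate_mm_h ** b

def attenuated_return(p0, range_m, rain_rate_mm_h):
    """Two-way attenuated return power for a target at range_m metres."""
    alpha = extinction_coefficient(rain_rate_mm_h)
    return p0 * math.exp(-2.0 * alpha * range_m)

def detect(p0, range_m, rain_rate_mm_h, threshold, rng=random):
    """Probabilistic detection: Gaussian jitter stands in for drop-induced
    noise, so heavy rain makes distant targets more likely to be missed."""
    power = attenuated_return(p0, range_m, rain_rate_mm_h)
    noise = rng.gauss(0.0, 0.02 * p0)
    return power + noise > threshold
```

In a real-time simulator, `detect` would be evaluated per ray cast against the rendered scene, which keeps the per-beam cost to a couple of transcendental function calls and makes synchronisation with the visual rendering straightforward.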