We demonstrate a working prototype of a cooperative perception system that maintains a real-time digital twin of the traffic environment, providing a more accurate and more reliable model than any of the participating subsystems (in this case, smart vehicles and infrastructure stations) could manage individually. The importance of such technology is that it can enable a spectrum of new derivative services, including cloud-assisted and cloud-controlled ADAS functions, dynamic map generation with analytics for traffic control and road infrastructure monitoring, and a digital framework for operating vehicle testing grounds, logistics facilities, and similar sites. In this paper, we confine our discussion to the viability of the core concept and implement a system that provides a single service: live visualization of the digital twin in a 3D simulation, which promptly and reliably matches the state of the real-world environment and showcases the advantages of real-time fusion of sensory data from various traffic participants. We envision this prototype as part of a larger network of local information-processing and integration nodes, i.e., the logically centralized digital twin is maintained in a physically distributed edge cloud.
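The claim that a fused model is more accurate than any single participant's can be illustrated with a minimal sketch. The abstract does not specify the fusion method, so the following assumes a standard inverse-variance weighting of independent position estimates; the function name and the example numbers are illustrative, not taken from the paper.

```python
def fuse_estimates(estimates):
    """Inverse-variance weighted fusion of independent 1-D position estimates.

    estimates: list of (position, variance) pairs reported by different
    observers, e.g. a smart vehicle and an infrastructure station.
    Returns the fused position and its (strictly smaller) variance.
    """
    weights = [1.0 / var for _, var in estimates]
    fused_pos = sum(w * p for (p, _), w in zip(estimates, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)  # always below the smallest input variance
    return fused_pos, fused_var

# A vehicle (noisy) and an infrastructure camera (more precise) observe
# the same object; the fused estimate is tighter than either alone.
pos, var = fuse_estimates([(10.0, 1.0), (10.4, 0.25)])
```

The fused variance is lower than that of the best individual observer, which is the statistical basis for the abstract's claim that the shared digital twin outperforms any participant subsystem on its own.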
Behavioral Cloning (BC) of lateral vehicle control through an End-to-End (E2E) learning system requires high-quality data. Most E2E learning systems gather this data all at once before training begins (i.e., training does not start until data collection is complete). The demand for high-quality data requires considerable human effort and substantial time and money spent waiting for collection to finish. It is therefore critical to find a viable way to reduce both the time and cost of data collection while maintaining the performance of the trained vehicle controller. This paper offers a novel behavioral cloning approach for lateral vehicle control that addresses these problems. The proposed technique begins by collecting a minimal amount of human driving data, which is used to train a convolutional neural network for lateral control. The trained network is then deployed to the vehicle's automated driving controller, replacing the human driver. At this point, the human is out of the loop, and the automated controller, trained on the initial human data, drives the vehicle to collect further training data. The newly collected driving data are fed into the network training module, and the retrained network is redeployed to the automated driving controller, which drives the vehicle further. Data collection alternates with network training in this way until the network learns to correctly associate an image input with a steering angle. The proposed incremental approach was extensively tested in simulated environments, and the results are promising: only 3.81% (1,061 out of 27,884) of the total data came from a human driver.
The incrementally trained neural networks, using data collected by the automated controllers, were able to drive the vehicle successfully on two different tracks. The AI chauffeur completed more than 70% of Track B even though it had never seen that track before.
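The alternating collect/train loop described above can be sketched in a few lines. This is a minimal illustration with stub functions (the paper trains a CNN on images; here the "model" merely predicts a mean steering angle), and all function names and sample counts are hypothetical placeholders rather than the authors' implementation.

```python
def collect_human_data(n):
    # Stub: pretend a human driver records n (image, steering-angle) pairs.
    return [("frame", 0.1)] * n

def collect_controller_data(model, n):
    # Stub: the trained controller drives itself and logs its own predictions.
    return [("frame", model("frame"))] * n

def train(dataset):
    # Stub trainer: a real system would fit a CNN; this "model" just
    # predicts the mean steering angle seen so far.
    mean = sum(angle for _, angle in dataset) / len(dataset)
    return lambda image: mean

def incremental_bc(human_samples, rounds, samples_per_round):
    data = collect_human_data(human_samples)   # minimal human seed data
    model = train(data)
    for _ in range(rounds):                    # alternate: self-drive, retrain
        data += collect_controller_data(model, samples_per_round)
        model = train(data)
    return model, data

model, data = incremental_bc(human_samples=100, rounds=5, samples_per_round=500)
human_fraction = 100 / len(data)  # only a small share is human-collected
```

The point of the loop is that the human share of the dataset shrinks with every round, mirroring the paper's reported 3.81% human contribution.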
Vision-based autonomous driving is growing rapidly. There are, however, no agreed-upon metrics for assessing how well deep neural network (DNN) models perform at driving. To compare novel approaches and architectures with existing ones, some researchers employ the mean error between labeled and predicted values on a test dataset, while others introduce new metrics designed to match their own requirements. This discrepancy in the use of performance metrics, and the lack of objective metrics for judging driving performance, were our primary motives for developing a feasible solution. In this study, we propose the online performance evaluation metrics index (OPEMI), an integrated metric that can evaluate the driving capabilities of autonomous driving models in various driving scenarios. To evaluate driving performance precisely and objectively, OPEMI incorporates several variables, including driving control stability, driving trajectory stability, journey duration, travel distance, success rate, and speed. To demonstrate the validity of OPEMI, we first confirmed that prediction accuracy correlates only weakly with driving performance. We then discussed the constraints of existing driving performance metrics in certain circumstances and their failure to assess driving models. Finally, we conducted experiments with four popular DNN models and two in-house models under three driving scenarios (generic, urban, and racing). The results show that the proposed evaluation metric, OPEMI, realistically reflects driving performance and demonstrates its validity across these scenarios.
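The abstract lists the variables OPEMI incorporates but not its aggregation formula, so the following is only a generic weighted-aggregate sketch under that assumption; the component names follow the abstract, while the weights, scores, and function name are illustrative.

```python
def composite_driving_score(metrics, weights):
    """Weighted aggregate of per-aspect driving scores, each normalized to [0, 1].

    This is a generic composite-index pattern, not the published OPEMI formula.
    """
    assert set(metrics) == set(weights), "every component needs a weight"
    total_w = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in metrics) / total_w

# Hypothetical per-component scores for one evaluated driving model.
example = {
    "control_stability": 0.9,
    "trajectory_stability": 0.8,
    "journey_duration": 0.7,
    "travel_distance": 0.95,
    "success_rate": 1.0,
    "speed": 0.6,
}
uniform = {k: 1.0 for k in example}          # equal weighting for the sketch
score = composite_driving_score(example, uniform)
```

Normalizing each component before aggregation is what lets such an index compare models across heterogeneous scenarios (generic, urban, racing) on a single scale.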
For autonomous driving research, a scaled vehicle platform is a viable alternative to a full-scale vehicle. However, embedded solutions such as small differential-drive robotic platforms or radio-controlled (RC) car-based platforms can be limiting, for example, in the sensor packages they accommodate or the computing power they provide. Furthermore, a given controller may demand specialized expertise and abilities. To address these problems, this paper proposes a feasible solution, the Ridon vehicle: a spacious ride-on automobile with high-power electric drive and a custom-designed drive-by-wire system powered by a full-scale machine-learning-ready computer. The major objective of this paper is to provide a thorough and appropriate method for constructing a cost-effective platform with a drive-by-wire system and sensor packages so that machine-learning-based algorithms can be tested and deployed on a scaled vehicle. The proposed platform employs a modular and hierarchical software architecture, with microcontroller programs handling the low-level motor controls and a graphics processing unit (GPU)-powered laptop computer processing the higher-level and more sophisticated algorithms. The Ridon vehicle platform is validated by employing it in a deep-learning-based behavioral cloning study. The suggested platform's affordability and adaptability would benefit the broader research and education community.
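In such a hierarchical architecture, the GPU laptop must send drive commands down to the motor-control microcontroller over some serial link. The abstract does not describe the wire protocol, so the sketch below assumes a simple hypothetical framing (header byte, two little-endian floats, additive checksum) purely to illustrate the low-level/high-level split.

```python
import struct

HEADER = 0xAA  # hypothetical frame-start marker, not from the paper

def encode_command(steering_deg, throttle_pct):
    """Pack a drive-by-wire command: header, steering + throttle floats, checksum."""
    body = struct.pack("<ff", steering_deg, throttle_pct)
    checksum = sum(body) & 0xFF
    return bytes([HEADER]) + body + bytes([checksum])

def decode_command(frame):
    """Validate and unpack a frame on the microcontroller side."""
    assert frame[0] == HEADER, "bad header"
    body = frame[1:9]
    assert (sum(body) & 0xFF) == frame[9], "checksum mismatch"
    return struct.unpack("<ff", body)
```

Keeping the frame this small lets a modest microcontroller parse commands at control-loop rate while all perception and learning stays on the laptop, which is the division of labor the abstract describes.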