“…In this paper, we have discussed different detection approaches, including drivable roads and obstacles, in off-road scenarios. The study of detection analysis is essential to ensure safety, smooth driving, and path planning [63] in an unknown environment. After reviewing the papers on different detection elements, some common criteria have been found.…”
Section: Discussion
“…In this model, the sensor has to be installed on AGV to consider vehicle movement and speed, as the detection algorithm is based on curvature. This model has been cross-validated on Mississippi State University Autonomous Vehicular Simulator (MAVS) [62, 63].…”
Section: Negative Obstacles Detection and Analysis
Detection is one of the essential abilities of autonomous ground vehicles (AGVs). To navigate safely through any known or unknown environment, an AGV must be able to detect the important elements on its path. Detection applies both on-road and off-road, but it differs considerably between the two environments. The key elements an AGV must identify in any environment are the drivable pathway and any obstacles around it. Many works have been published focusing on different detection components in various ways. This paper presents a survey of the most recent advancements in AGV detection methods intended specifically for the off-road environment. For this, we divided the literature into three major groups: drivable ground, positive obstacles, and negative obstacles. Each group has been further divided into multiple categories based on the technology used (for example, single-sensor-based or multiple-sensor-based) and on how the data are analyzed. Furthermore, the paper adds critical findings in detection technology, challenges associated with detection and the off-road environment, and possible future directions. The authors believe this work will help readers doing similar research find relevant literature.
“…It offers a Python API for crafting customized simulations. Figure 4 presents an example of the MAVS simulation environment, generated for training and testing data collection for lane centering [27][28][29][30]. Figure 4: The user view of the MAVS Simulator…”
Section: MAVS
“…This Level-2 automation technology serves as a crucial stepping stone towards more advanced autonomous driving functionalities. 9 Deep Learning (DL) [10][11][12] has emerged as a potential algorithmic framework, particularly adept at detection and feature extraction tasks, especially when confronted with vast datasets. In the realm of autonomous vehicles, where accurate interpretation of sensory data and rapid decision-making are crucial, DL stands out as a significant tool.…”
Lane centering is a significant feature in the automotive industry and an important capability for advanced or autonomous vehicles, assisting drivers in staying in their lane. The objective of this work is to use camera images, processed by a lane-centering algorithm, to make steering decisions. A convolutional neural network (CNN) model is trained and tested on simulated datasets. The goal is for the program to learn how to steer the vehicle autonomously and to achieve end-to-end learning for steering command generation: the CNN maps raw camera pixels directly to steering commands without any intermediate feature engineering. The model used is NVIDIA PilotNet, created by NVIDIA researchers. This network comprises five convolutional layers for feature extraction, followed by three fully connected layers that predict the steering commands. The model is trained on two different datasets to see how well it performs with different types of data. The first comes from Udacity's Self-Driving Car Nanodegree Program, which uses their open-source vehicle simulator. The second is a dataset from the Mississippi State University Autonomous Vehicular Simulator (MAVS). The training process involves reducing the error between predicted steering angles and the actual steering commands logged by the car. After training, the model is deployed to test the car's autonomous capabilities within the Self-Driving Car Nanodegree Program simulation. In this mode, the car demonstrates the ability to effectively track and navigate along the road lanes.
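The "five convolutional layers plus fully connected layers" structure of PilotNet can be sketched by walking the feature-map dimensions through the network. The sketch below assumes the published NVIDIA PilotNet configuration (a 66x200 pixel, 3-channel input and 'valid' convolutions); the helper names are illustrative, not taken from the paper.

```python
# Sketch of the NVIDIA PilotNet layer geometry: five convolutional layers
# for feature extraction, then fully connected layers ending in one steering value.
# Assumes the published configuration: 66x200x3 input, 'valid' convolutions.

def conv_out(size, kernel, stride):
    """Output length of a 'valid' convolution along one dimension."""
    return (size - kernel) // stride + 1

# (out_channels, kernel, stride) for the five convolutional layers
CONV_LAYERS = [(24, 5, 2), (36, 5, 2), (48, 5, 2), (64, 3, 1), (64, 3, 1)]
FC_LAYERS = [100, 50, 10, 1]  # three hidden FC layers, then the steering output

def pilotnet_shapes(h=66, w=200):
    """Spatial size (height, width) of the feature map after each conv layer."""
    shapes = []
    for _, k, s in CONV_LAYERS:
        h, w = conv_out(h, k, s), conv_out(w, k, s)
        shapes.append((h, w))
    return shapes

shapes = pilotnet_shapes()
flat = CONV_LAYERS[-1][0] * shapes[-1][0] * shapes[-1][1]
print(shapes)  # [(31, 98), (14, 47), (5, 22), (3, 20), (1, 18)]
print(flat)    # 1152 features flattened into the fully connected stack
```

Tracing the shapes this way makes the end-to-end mapping concrete: 39,600 input pixels are reduced to 1,152 features, which the fully connected layers compress to a single steering command.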
“…In the urban environment, road signs, traffic lights, and pedestrian detection are essential components of AV systems 4 to safely traverse through the unknown environment and path planning. 5 Object detection, feature extraction, and classification are all part of the detection process. 6,7 AV systems use object detection and feature extraction to modify vehicle speed, direction, and behavior.…”
For autonomous driving, pedestrian and road-sign detection are key elements. Much existing literature addresses this problem successfully. However, an autonomous system requires a large and diverse set of training samples labeled in real-world environments, and manual annotation of these samples is challenging and time-consuming. In this paper, our goal is to achieve better detection accuracy with minimal training data. For this, we employ an active learning algorithm. Active learning is a useful method that selects only the most informative portion of the dataset for training, reducing annotation costs: although it uses only a small amount of the training data, it provides high detection accuracy. In this work, we chose the deep active learning model for object detection based on the probabilistic model of Choi et al. and modified the depth scale of different layers in the backbone. Because real-world data may contain noise, motion, or other disruptions, we modified the original model to obtain improved detection results. For this experiment, we created a customized dataset containing pedestrians, road signs, traffic lights, and zebra (pedestrian) crossings to deploy the active learning algorithm. The experimental results show that the active learning model produces good detection outcomes, accurately detecting and classifying pedestrians, road signs, traffic lights, and zebra (pedestrian) crossings.
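The selection step at the heart of active learning — training on only the most informative samples to cut annotation cost — can be illustrated with a minimal uncertainty-sampling sketch. This is not the probabilistic model of Choi et al.; it is a generic illustration, and the function names, the toy pool, and the entropy criterion are assumptions made for the example.

```python
# Minimal sketch of uncertainty-based active learning sample selection:
# rank unlabeled samples by the entropy of the model's predicted class
# distribution and send only the least certain ones for human annotation.
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution (higher = less certain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_labeling(unlabeled_predictions, budget):
    """Pick the `budget` samples the detector is least certain about.

    unlabeled_predictions: dict mapping sample id -> class probability list.
    Returns the ids worth sending to a human annotator.
    """
    ranked = sorted(unlabeled_predictions,
                    key=lambda sid: entropy(unlabeled_predictions[sid]),
                    reverse=True)
    return ranked[:budget]

# Toy pool of detector outputs over three classes.
pool = {
    "img_001": [0.98, 0.01, 0.01],   # model is confident -> low value to label
    "img_002": [0.40, 0.35, 0.25],   # model is uncertain -> label this first
    "img_003": [0.70, 0.20, 0.10],
}
print(select_for_labeling(pool, budget=1))  # ['img_002']
```

Iterating this loop — train, predict on the unlabeled pool, label the selected samples, retrain — is what lets an active learning detector reach high accuracy with only a small labeled subset.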