Abstract: Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, s…
“…Many of these approaches use monocular vision for this task. An example is the work in [9], where lines painted on the road are detected by a single monocular camera, and an automatic steering control, speed assistance for the driver, and localization of the vehicle are presented. In [10], the authors go one step further, trying to predict pedestrian behavior based on Gaussian process dynamical models and probabilistic hierarchical trajectory matching.…”
The stixel world is a simplification of the world in which obstacles are represented as vertical instances, called stixels, standing on a surface assumed to be planar. In this paper, previous approaches for stixel tracking are extended using a two-level scheme. In the first level, stixels are tracked by matching them between frames using a bipartite graph in which edges represent a matching cost function. Then, stixels are clustered into sets representing objects in the environment. These objects are matched based on the number of stixels paired inside them. Furthermore, a faster, but less accurate approach is proposed in which only the second level is used. Several configurations of our method are compared to an existing state-of-the-art approach to show how our methodology outperforms it in several areas, including an improvement in the quality of the depth reconstruction.
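The first-level matching described above can be sketched as a bipartite assignment problem. The snippet below is a minimal illustration only: the stixel representation (column, depth, height), the cost weights, and the greedy solver are all assumptions for demonstration, not the paper's exact cost function or matching algorithm (an optimal solver such as the Hungarian algorithm could replace the greedy loop).

```python
# Minimal sketch of first-level stixel matching as bipartite assignment.
# Stixels are (column, depth, height) tuples; the cost function and its
# weights are illustrative assumptions, not the paper's formulation.

def match_cost(a, b, w_col=1.0, w_depth=2.0, w_height=0.5):
    """Edge weight between a stixel in frame t-1 and one in frame t."""
    return (w_col * abs(a[0] - b[0])
            + w_depth * abs(a[1] - b[1])
            + w_height * abs(a[2] - b[2]))

def match_stixels(prev, curr, max_cost=10.0):
    """Greedy minimum-cost bipartite matching with a rejection threshold."""
    edges = sorted(
        (match_cost(p, c), i, j)
        for i, p in enumerate(prev)
        for j, c in enumerate(curr)
    )
    used_p, used_c, pairs = set(), set(), []
    for cost, i, j in edges:
        if cost > max_cost:
            break  # remaining edges are even more expensive
        if i not in used_p and j not in used_c:
            used_p.add(i)
            used_c.add(j)
            pairs.append((i, j))
    return pairs

prev = [(10, 5.0, 1.8), (40, 12.0, 1.5)]
curr = [(12, 5.1, 1.8), (60, 30.0, 1.0)]
print(match_stixels(prev, curr))  # → [(0, 0)]: only the nearby pair matches
```

The threshold `max_cost` plays the role of rejecting implausible pairings, so a stixel whose object left the scene simply remains unmatched.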
“…The primary weakness of GNSS stems from the system’s vulnerability to radio frequency interference [20,21,22,23,24,25,26,27,28] and ionospheric effects [29,30,31,32]. The performance of the vision sensor [33,34] can be impeded by environmental factors such as light and weather conditions [35,36,37]. Because of these factors, detecting a driving lane is not a simple task for autonomous vehicles.…”
Curb detection and localization systems constitute an important aspect of environmental recognition systems of autonomous driving vehicles. This is because detecting curbs can provide information about the boundary of a road, which can be used as a safety system to prevent unexpected intrusions into pedestrian walkways. Moreover, curb detection and localization systems enable the autonomous vehicle to recognize the surrounding environment and the lane in which the vehicle is driving. Most existing curb detection and localization systems use multichannel light detection and ranging (lidar) as a primary sensor. However, although lidar demonstrates high performance, it is too expensive to be used for commercial vehicles. In this paper, we use ultrasonic sensors to implement a practical, low-cost curb detection and localization system. To compensate for the relatively lower performance of ultrasonic sensors as compared to other higher-cost sensors, we use multiple ultrasonic sensors and apply a series of novel processing algorithms that overcome the limitations of a single ultrasonic sensor and conventional algorithms. The proposed algorithms consist of a ground reflection elimination filter, a measurement reliability calculation, and distance estimation algorithms corresponding to the reliability of the obtained measurements. The performance of the proposed processing algorithms was demonstrated by a field test under four representative curb scenarios. The availability of reliable distance estimates from the proposed methods with three ultrasonic sensors was significantly higher than that from the other methods, e.g., 92.08% vs. 66.34%, when the test vehicle passed a trapezoidal-shaped road shoulder. When four ultrasonic sensors were used, 96.04% availability and 13.50 cm accuracy (root mean square error) were achieved.
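The idea of combining a reliability calculation with redundant sensors can be sketched as follows. This is a hypothetical illustration, not the paper's algorithms: the reliability model (agreement with the median of the other sensors' readings) and the deviation threshold `max_dev` are invented here to show how an outlier, such as a ground-reflection echo, can be down-weighted before the distances are fused.

```python
# Hypothetical sketch of fusing redundant ultrasonic curb-distance readings
# with per-measurement reliability weights. The reliability model and the
# max_dev threshold are illustrative assumptions, not the paper's filters.
from statistics import median

def reliability(meas, neighbors, max_dev=0.30):
    """Weight in [0, 1]: readings far from the other sensors' median lose trust."""
    if not neighbors:
        return 1.0
    dev = abs(meas - median(neighbors))
    return max(0.0, 1.0 - dev / max_dev)

def fuse_distance(readings):
    """Reliability-weighted mean of curb distances (meters) from several sensors."""
    weights = [
        reliability(r, readings[:i] + readings[i + 1:])
        for i, r in enumerate(readings)
    ]
    total = sum(weights)
    if total == 0:
        return None  # no reliable estimate available this cycle
    return sum(w * r for w, r in zip(weights, readings)) / total

# Three sensors agree; a fourth reading is corrupted (e.g., ground reflection).
print(round(fuse_distance([0.52, 0.50, 0.51, 1.40]), 2))  # → 0.51
```

The corrupted 1.40 m reading receives weight zero, so the fused estimate stays close to the three consistent sensors, which mirrors the paper's motivation for using multiple ultrasonic sensors instead of one.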
“…Most of the above control strategies do not take into account the time delays induced by sensors, which have a large impact on the quality and stability of lateral control. Vision-based sensors, such as monocular cameras, are widely used in lane detection and vehicle localization due to their low cost, and vehicle-lane information can be obtained reliably through visual algorithms [17,18,19,20,21,22]. However, the computational cost of the visual algorithms is relatively large.…”
Vision-based sensors are widely used in the lateral control of autonomous vehicles, but the large computational cost of the visual algorithms often induces uneven time delays. In this paper, a hierarchical vision-based lateral control scheme is proposed, in which the upper controller is designed using a robust H∞-based linear quadratic regulator (LQR) algorithm to compensate for sensor-induced delays, and the lower controller is based on a logic-threshold method in order to achieve strong convergence of the steering angle. Firstly, the vehicle lateral model is built, and the nonlinear uncertainties induced by time delays are linearized with a Taylor expansion. Secondly, the state space of the system is augmented to describe such uncertainties with polytopic inclusions, which is controlled by an H∞-based LQR controller with a low cost of online computation. Then, a lower controller is designed for the control of the steering motor. According to the results of the vehicle experiment as well as the hardware-in-the-loop (HIL) experiment, the proposed control scheme shows good performance in the vehicle's lateral control task and exhibits better robustness compared with a conventional LQR controller. The proposed control scheme provides a feasible solution for the lateral control of autonomous driving.
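The plain discrete-time LQR that the above scheme builds on can be illustrated on a scalar lateral-error model. This is a minimal sketch under assumed dynamics: the model e_{k+1} = a·e_k + b·u_k and the weights (q, r) are invented for demonstration, and the paper's state augmentation, delay compensation, and H∞ robustness layer are not reproduced.

```python
# Scalar discrete-time LQR sketch for a lateral-error model
# e_{k+1} = a*e_k + b*u_k. The dynamics (a, b) and cost weights (q, r)
# are illustrative assumptions; the paper augments the state and adds
# H-infinity robustness on top of the LQR design.

def dlqr_gain(a, b, q, r, iters=200):
    """Solve the scalar discrete Riccati equation by fixed-point iteration,
    then return the optimal state-feedback gain K (control law u = -K*e)."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

k = dlqr_gain(a=1.0, b=0.1, q=1.0, r=0.01)

# Closed-loop simulation from an initial lateral error of 0.5 m:
# e_{k+1} = (a - b*K) * e_k, which is stable when |a - b*K| < 1.
e = 0.5
for _ in range(50):
    e = (1.0 - 0.1 * k) * e
print(abs(e) < 1e-3)  # → True: the lateral error converges toward zero
```

Increasing r penalizes steering effort and slows convergence, while increasing q does the opposite; the paper's contribution is to keep this trade-off well-behaved even under the uneven vision-induced delays.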