The objective of this work is the instantaneous computation of Time-to-Collision (TTC) for potential collisions using only the motion information captured by a vehicle-borne camera. The contribution is the detection of dangerous events and their severity directly from motion divergence in the driving video, which is also a cue used by human drivers. Horizontal and vertical motion divergence are analyzed simultaneously in several collision-sensitive zones. The video data are condensed into horizontal and vertical motion profiles of the lower half of each frame, which show motion trajectories directly as edge traces. Stable motion traces of linear feature components are obtained through filtering in the motion profiles. As a result, this avoids prior object recognition and sophisticated depth sensing. The fine velocity computation yields reasonable TTC accuracy, so that the video camera can achieve collision avoidance from the size changes of visual patterns alone. We have tested the algorithm on various roads, environments, and traffic conditions, and visualize the results in the motion profiles for overall evaluation.
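The "size changes of visual patterns" mentioned above point to the standard looming relation TTC ≈ s / (ds/dt), where s is the apparent size of an approaching pattern. The following is a minimal sketch of that relation, not the paper's algorithm; the function name, the synthetic approach model, and the frame interval are illustrative assumptions.

```python
import numpy as np

def ttc_from_scale(sizes, dt):
    """Estimate time-to-collision from the apparent size of a looming pattern.

    For an object approaching at constant speed, TTC ~ s / (ds/dt),
    where s is the apparent size (pixels) measured per frame.
    sizes: apparent sizes over consecutive frames.
    dt: frame interval in seconds.
    Returns per-frame TTC estimates; np.inf where the pattern is not expanding.
    """
    s = np.asarray(sizes, dtype=float)
    ds = np.gradient(s, dt)                      # rate of expansion (pixels/s)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(ds > 0, s / ds, np.inf)  # no expansion -> no collision

# Synthetic demo: a target at Z0 = 20 m, closed at v = 10 m/s, so the
# true TTC at t = 0 is 2.0 s; apparent size grows as s(t) = s0 * Z0 / (Z0 - v*t).
dt = 0.1
t = np.arange(0.0, 1.0, dt)
s = 100.0 * 20.0 / (20.0 - 10.0 * t)
ttc = ttc_from_scale(s, dt)
print(ttc[0])   # close to the true 2.0 s (finite differencing adds small error)
```

The boundary estimate uses a one-sided difference, so the first and last TTC values are slightly biased; interior frames use central differences and are more accurate.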
In vision-based autonomous driving, understanding the spatial layout of the road and traffic is required at every moment. This involves detecting the road, vehicles, pedestrians, etc. in images. In driving video, the spatial positions of these patterns are further tracked to obtain their motion. This spatial-to-temporal approach inherently demands large computational resources. In this work, however, we take a temporal-to-spatial approach to cope with fast-moving vehicles in autonomous navigation. We sample a one-pixel line from each frame of the driving video, and the temporal congregation of lines from consecutive frames forms a road profile image. The temporal connection of lines also provides layout information about the road and the surrounding environment. This method reduces the data to be processed to a fraction of the video in order to keep up with the vehicle's speed. The key issue is then to distinguish the regions in the road profile: the road profile is divided in real time into road, roadside, lane marks, vehicles, etc., as well as motion events such as the stopping and turning of the ego-vehicle. We show in this paper that the road profile can be learned through Semantic Segmentation. We apply Semantic Segmentation to RGB-F images of the road profile to grasp both individual regions and their spatial relations on the road effectively. We have tested our method on naturalistic driving video, and the results are promising.
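The per-frame line sampling described above can be sketched as follows. This is a minimal illustration of stacking one pixel row per frame into a time-indexed profile image; the function name, the fixed sampling row, and the synthetic video are assumptions for the demo, not the paper's implementation.

```python
import numpy as np

def build_road_profile(frames, row):
    """Stack one pixel row per video frame into a temporal road profile image.

    frames: iterable of H x W x 3 uint8 frames, oldest first.
    row: the image row to sample (e.g., a row crossing the road surface).
    Returns a T x W x 3 image whose vertical axis is time (newest at bottom).
    """
    lines = [f[row] for f in frames]   # one W x 3 scanline per frame
    return np.stack(lines, axis=0)

# Synthetic demo: 30 frames of a 10 x 8 video with one bright column
# sweeping sideways, mimicking a feature moving across the camera view.
frames = []
for t in range(30):
    img = np.zeros((10, 8, 3), dtype=np.uint8)
    img[:, t % 8] = 255                # moving feature
    frames.append(img)

profile = build_road_profile(frames, row=7)
print(profile.shape)   # (30, 8, 3): each frame contributes exactly one line
```

In the resulting profile, the moving feature appears as a slanted edge trace, which is exactly the kind of temporal structure the road profile exposes for segmentation.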