Segmentation of the scene is a fundamental component of computer vision systems for finding regions of interest. Most systems that aspire to run in real time use a fast segmentation stage that considers the whole image, followed by a more costly classification stage. In this paper we present a novel approach to segmenting moving objects in images taken from a moving camera. The segmentation algorithm is based on a special representation of the optical flow, to which u-disparity is applied. The u-disparity is used to indirectly find and mask out the background flow in the image by approximating it with a quadratic function. Robustness in the optical flow calculation is achieved by contrast content filtering. The algorithm successfully segments moving pedestrians from a moving vehicle with few false positive segments. Most false positives are due to poles and organic structures such as trees; these are, however, easily rejected in a classification stage. The presented segmentation algorithm is intended to be used as a component in a detection/classification framework.
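The u-disparity background-masking idea above can be sketched as follows. This is a minimal illustration under assumed details: the quantization into `n_bins` levels, the fixed deviation threshold, and the name `segment_moving` are hypothetical choices, not taken from the paper.

```python
import numpy as np

def segment_moving(flow_mag, n_bins=64):
    """Sketch of u-disparity-based background masking on optical flow.

    flow_mag: (H, W) array of optical-flow magnitudes.
    Returns a boolean mask of pixels that deviate from the fitted
    background flow model (candidate moving objects).
    """
    H, W = flow_mag.shape
    # Quantize flow magnitudes into discrete levels.
    levels = np.clip(
        (flow_mag / (flow_mag.max() + 1e-9) * (n_bins - 1)).astype(int),
        0, n_bins - 1)

    # u-disparity: for each image column, a histogram of flow levels.
    u_disp = np.zeros((n_bins, W), dtype=int)
    for u in range(W):
        u_disp[:, u] = np.bincount(levels[:, u], minlength=n_bins)

    # Dominant (background) flow level in each column.
    dominant = np.argmax(u_disp, axis=0)

    # Approximate the background flow across columns with a quadratic.
    coeffs = np.polyfit(np.arange(W), dominant, deg=2)
    background = np.polyval(coeffs, np.arange(W))

    # Pixels whose flow deviates from the background fit are foreground.
    return np.abs(levels - background[None, :]) > 3
```

On a synthetic flow field with a uniform background and one fast-moving blob, the quadratic fit follows the background level and only the blob survives the masking.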
In this paper we introduce the Smart Cars Project at the Australian National University/National ICT Australia, together with a discussion and an example of a driver assistance system. We present a framework for interactive driver assistance systems that includes techniques for fast speed-sign detection and classification, obstacle detection and tracking applied to pedestrian detection, and lane departure warning. In addition, the driver's actions are monitored. The integrated system uses information extracted from the road scene (speed signs, position within the lane, relative position to other cars, etc.) together with information about the driver's state, such as eye gaze and head pose, to issue appropriate warnings. A touch-screen monitor presents relevant information and allows the driver to interact with the system. The research focuses on robust algorithms that can run on-line. Results of on-line speed-sign detection and pedestrian detection are presented in the context of a driver assistance system.
Fusion of information from different complementary sources may be necessary to achieve a robust sensing system that degrades gracefully under varying conditions. Many approaches use a specific, tailor-made combination of algorithms that does not easily allow the inclusion of more, or other, types of algorithms. In this paper, we explore a variant of a generic algorithm for fusing visual cues, applied to the task of object segmentation in a video stream. The fusion algorithm combines the output of several segmentation algorithms in a straightforward way, using a Bayesian approach and a particle filter to track several hypotheses. Segmentation algorithms can be added or removed without changing the overall structure of the system. Of particular interest was whether the method is suitable when realistic, noisy real-world scenes are analysed. The system has been tested on image sequences taken from a moving vehicle, in which stationary and moving objects are successfully segmented from the background. In conclusion, the fusion algorithm explored is well suited to this problem domain and is easily adapted. The context of this work is on-line pedestrian detection to be deployed in cars.
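The cue-fusion scheme can be sketched as a simple particle filter over object-position hypotheses. All names, the per-pixel cue-map representation, and the Gaussian diffusion motion model are assumptions made for illustration, not the paper's actual formulation; the point is only that cues multiply into one likelihood, so algorithms can be added or removed without restructuring the filter.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_cues(cue_maps, n_particles=200, n_iters=5):
    """Sketch of Bayesian cue fusion with a particle filter.

    cue_maps: list of (H, W) arrays in [0, 1], each the per-pixel
    output of one segmentation algorithm. Particles are (x, y)
    object-centre hypotheses; the weight of a particle is the product
    of the cue responses at its position (independent-cue assumption).
    Returns the weighted mean particle position as (x, y).
    """
    H, W = cue_maps[0].shape
    parts = np.column_stack([rng.uniform(0, W, n_particles),
                             rng.uniform(0, H, n_particles)])

    def weights(parts):
        xs = np.clip(parts[:, 0].astype(int), 0, W - 1)
        ys = np.clip(parts[:, 1].astype(int), 0, H - 1)
        # Bayesian fusion: likelihood is the product over all cues.
        w = np.ones(len(parts))
        for cue in cue_maps:
            w *= cue[ys, xs] + 1e-6  # small floor avoids zero weights
        return w / w.sum()

    for _ in range(n_iters):
        w = weights(parts)
        # Resample proportionally to weight, then diffuse (motion model).
        idx = rng.choice(n_particles, n_particles, p=w)
        parts = parts[idx] + rng.normal(0.0, 2.0, parts.shape)

    return weights(parts) @ parts
```

With two cue maps that both respond to the same region, the particle cloud collapses onto that region after a few iterations; a cue that responds nowhere simply contributes a flat factor and does not break the filter.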