Driving Assistance Systems (DAS) help vehicle drivers navigate different road situations. Their task, however, is not only to protect one particular driver but also to increase safety for all road users. The problem domain is broad and can be divided into subtopics such as driver fatigue detection, pedestrian tracking, obstacle collision avoidance, lane departure warning, and traffic sign detection and recognition. Advanced computer vision techniques are widely used to develop efficient and robust driving assistance systems. In this paper we discuss video-based, Hough-transform-driven object detection algorithms and their application to lane departure warning and traffic sign detection. Furthermore, a high-speed hardware implementation of these algorithms on FPGA/ASIC is also presented.
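The abstract above builds on the classical Hough transform for line detection. As a minimal illustration of its voting scheme (a generic NumPy sketch on a synthetic edge image, not the paper's FPGA/ASIC implementation), each edge pixel votes for all (rho, theta) parameter pairs of lines that could pass through it, and peaks in the accumulator mark detected lines:

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Vote each edge pixel into a (rho, theta) accumulator.

    A line is parameterised as rho = x*cos(theta) + y*sin(theta);
    collinear edge pixels all vote for the same (rho, theta) cell.
    """
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))          # largest possible |rho|
    thetas = np.deg2rad(np.arange(0, 180, 180 / n_theta))
    rhos = np.arange(-diag, diag + 1)            # rho bins, 1 px resolution
    acc = np.zeros((len(rhos), len(thetas)), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round(r).astype(int) + diag     # shift rho into bin index
        acc[idx, np.arange(len(thetas))] += 1
    return acc, rhos, thetas

# synthetic edge image: a vertical lane mark at x = 10
edges = np.zeros((50, 50), dtype=np.uint8)
edges[:, 10] = 1
acc, rhos, thetas = hough_lines(edges)
r_i, t_i = np.unravel_index(acc.argmax(), acc.shape)
print(rhos[r_i], np.rad2deg(thetas[t_i]))  # strongest line: rho = 10, theta = 0
```

Because every pixel of the vertical mark satisfies rho = x = 10 at theta = 0, all 50 votes land in one accumulator cell, which is exactly the robustness to noise and gaps that makes the transform attractive for lane-mark detection.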
The Vision Assistant is designed as an intelligent tool to assist people with various disabilities. The goal of the project is to replace the mouse and keyboard with an adaptive eye-tracker system (a so-called mouseless cursor). Using a camera, the system processes streaming image sequences with a pattern recognition algorithm built around a Hough-transform core, a technique known for its performance in locating given shapes. In particular, it extracts the shapes related to the human eye and analyzes them in real time in order to determine the position of the eye in an incoming image and interpret it as the reference position of the mouse cursor on the user's monitor. The possibility of parallelizing the Hough transform and executing it on a Hubel-Wiesel neural network for ultra-fast eye tracking is also discussed in this paper. The results of several experiments show that the system performs well for different colours of the subjects' eyes as well as under different lighting conditions. In the conclusion, we address further improvements to the functional and algorithmic parts of the Vision Assistant.
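For eye localisation, the relevant variant is the circular Hough transform: the iris or pupil boundary is a near-circular edge contour, and each edge pixel votes for candidate circle centres. A minimal sketch with a known radius (a generic NumPy illustration on a synthetic contour, not the Vision Assistant's actual pipeline) might look like this:

```python
import numpy as np

def hough_circle(edges, radius, n_angles=90):
    """Accumulate votes for centres of circles with a known radius:
    every edge pixel back-projects a circle of candidate centres."""
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    angs = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        cx = np.round(x - radius * np.cos(angs)).astype(int)
        cy = np.round(y - radius * np.sin(angs)).astype(int)
        ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
        np.add.at(acc, (cy[ok], cx[ok]), 1)      # handles repeated cells
    return acc

# synthetic "iris" contour: circle of radius 8 centred at (x=30, y=25)
edges = np.zeros((50, 50), dtype=np.uint8)
t = np.linspace(0, 2 * np.pi, 200)
edges[np.round(25 + 8 * np.sin(t)).astype(int),
      np.round(30 + 8 * np.cos(t)).astype(int)] = 1
acc = hough_circle(edges, radius=8)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
print(cx, cy)  # strongest centre vote, near the true centre (30, 25)
```

The accumulator peak gives the eye position that the system would map to a cursor coordinate; the per-pixel votes are independent, which is what makes the transform amenable to the parallel, neural-network-style execution discussed in the paper.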
Even under significant fluctuations in light intensity, human beings can sharply perceive the surrounding world under conditions ranging from starlight to sunlight. This process starts in the retina, a tiny tissue a quarter of a millimetre thick. Based on retinal processing principles, a bio-inspired computational model for online contrast adaptation is presented. The proposed method is developed with the help of fuzzy theory and corresponds to the models of the retinal layers, their interconnections, and their intercommunications, as described by neurobiologists. In a subsequent stage, the retinal model is coupled with the Hough transform in order to create a robust lane-mark detection system. The performance of the system has been evaluated on a number of test sets and shows good results.
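The core idea of retinal light adaptation can be illustrated with a simple Naka-Rushton-style divisive normalisation, where each pixel's response is scaled by its local mean luminance. This is only a generic stand-in for the gain-control principle, not the fuzzy retinal-layer model proposed in the paper:

```python
import numpy as np

def local_mean(img, size=5):
    """Box-filter mean over a size x size neighbourhood (edge-padded)."""
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(size):
        for dx in range(size):
            out += p[dy:dy + h, dx:dx + w]
    return out / size ** 2

def retina_adapt(img, size=5, eps=1e-6):
    """Divisive gain control: normalise each pixel by its local mean,
    so equal relative contrasts yield equal responses regardless of
    the absolute illumination level."""
    return img / (img + local_mean(img, size) + eps)

# two regions lit 10x apart, each containing a spot at 2x its surround
img = np.full((20, 40), 10.0)
img[:, 20:] = 100.0            # brightly lit half
img[10, 5] = 20.0              # dim spot
img[10, 30] = 200.0            # bright spot, same relative contrast
adapted = retina_adapt(img)
print(adapted[10, 5], adapted[10, 30])  # near-identical adapted responses
```

Although the raw spot intensities differ by a factor of ten, their adapted responses are nearly equal, which is the property that lets a downstream Hough-based lane detector operate consistently from starlight to sunlight.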