Road accidents injure or kill hundreds of people every day. These accidents stem from several intrinsic and extrinsic factors, including how attentive the driver is to the road and its associated features: approaching vehicles, pedestrians, and static fixtures such as road lanes and traffic signs. If drivers are made aware of these features in a timely manner, a large share of these accidents can be avoided. This study proposes a computer vision-based solution for detecting and recognizing vehicle types and traffic signs, both to assist drivers and to pave the way for self-driving cars. A real-world roadside dataset was collected under varying lighting and road conditions, and individual frames were annotated. Two deep learning models, YOLOv7 and Faster R-CNN, were trained on this custom dataset to detect the aforementioned road features. The models produced mean Average Precision (mAP) scores of 87.20% and 75.64%, respectively, along with class accuracies above 98.80%, which are state-of-the-art results. The proposed models provide an excellent benchmark to build on for improving traffic safety and enabling future technologies such as Advanced Driver Assistance Systems (ADAS) and self-driving cars.
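The mAP figures above are built on intersection-over-union (IoU) overlaps between predicted and ground-truth bounding boxes. As a minimal illustration of that underlying overlap measure only (not the paper's evaluation code), a sketch in plain Python:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes.

    Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    Detection benchmarks typically count a prediction as a true
    positive when its IoU with a ground-truth box exceeds a
    threshold (commonly 0.5), and mAP averages precision over
    recall levels and classes on top of that matching.
    """
    # Intersection rectangle (may be empty).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    # Union = sum of areas minus the overlap counted twice.
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0
```

For example, two unit-offset 2x2 boxes, `iou((0, 0, 2, 2), (1, 1, 3, 3))`, overlap in a 1x1 square, giving 1/7.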
Indoor mobile robots have gone hand in hand with the automation of factories, households, and commercial spaces. They come in all shapes and sizes, from humanoids used as waiters and personal assistants to box-like warehouse item sorters. The operation of these robots depends on several key elements, such as localization, mapping, and detection of the surroundings. They also need to detect walls accurately and measure the distance to them in order to avoid collisions. Currently, this task is accomplished using LiDARs and other optical sensors, which are costly; a vision-based solution can address this cost. The main hurdle for vision-based systems is that walls lack distinguishable visual features to detect. This paper proposes a system for measuring the distance to a wall by detecting the wall-floor edge. BRISK key-points are extracted on the detected edge, and pixel counting is then used to calculate the distance. The calculated distance has an accuracy of 95.58%, which motivates further development and refinement of the proposed system. The system is currently implemented on single images and can be extended to live video streams in the future.
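The abstract does not spell out how pixel counting converts the row of the detected wall-floor edge into a metric distance. One common assumption for such a step is a flat-ground pinhole-camera model, where a ground point at distance d projects to image row y = cy + f*h/d; the sketch below illustrates that model only, and every name in it (`f_px`, `cam_height_m`, `cy`) is hypothetical rather than taken from the paper:

```python
def distance_to_edge_row(y_edge, f_px, cam_height_m, cy):
    """Estimate ground distance to the wall-floor edge from its image row.

    Flat-ground pinhole model (illustrative assumption): a camera at
    height h with focal length f (in pixels) and principal-point row cy
    sees a ground point at distance d on row y = cy + f * h / d, so
    d = f * h / (y - cy). Counting pixels from cy down to the detected
    edge row gives the (y - cy) term.
    """
    dy = y_edge - cy  # pixel count below the principal point
    if dy <= 0:
        raise ValueError("edge row must lie below the principal point")
    return f_px * cam_height_m / dy
```

With an 800-pixel focal length, a camera 0.5 m above the floor, principal point at row 400, and the edge detected at row 600, this yields 800 * 0.5 / 200 = 2.0 m.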