This paper presents a novel method for video-based traffic state detection on motorways, performed on smart cameras. Camera calibration parameters are obtained from the known length of lane markings. Mean traffic speed is estimated with the Kanade-Lucas-Tomasi (KLT) optical flow method combined with robust outlier detection. Traffic density is estimated using a robust statistical counting method. Our method has been implemented on an embedded smart camera and evaluated under different road and illumination conditions. It achieves a detection rate of more than 95% for stationary traffic.
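As a minimal sketch of the speed-estimation step, the following assumes per-feature pixel displacements as produced by a KLT tracker and a metres-per-pixel scale from the lane-marking calibration; the median-absolute-deviation outlier rule and all parameter names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def robust_mean_speed(displacements, m_per_px, fps, mad_k=3.0):
    """Estimate mean traffic speed (km/h) from KLT feature displacements.

    displacements: (N, 2) array of per-feature pixel displacements between
        consecutive frames, as tracked by KLT.
    m_per_px: metres per pixel, from the lane-marking calibration.
    fps: camera frame rate.
    mad_k: outlier cutoff in robust standard deviations (assumption).
    """
    mags = np.linalg.norm(displacements, axis=1)
    med = np.median(mags)
    # Median absolute deviation as a robust spread estimate; 1.4826 scales
    # it to the standard deviation of a normal distribution.
    mad = np.median(np.abs(mags - med)) + 1e-9
    inliers = mags[np.abs(mags - med) <= mad_k * 1.4826 * mad]
    # pixels/frame -> m/s -> km/h
    return float(np.mean(inliers) * m_per_px * fps * 3.6)
```

With, say, 0.05 m/px and 25 fps, a robust displacement of 10 px/frame corresponds to 45 km/h; a single spurious 100 px track is rejected by the MAD rule rather than inflating the mean.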
We demonstrate LOOK2, our novel video-based real-time traffic event notification and verification system for road network and traffic channel operators. It generates fast and reliable traffic information about relevant changes of the traffic state and road conditions on the observed roads. To this end, it utilizes installed road-side sensors providing low-level traffic and environmental measurement data, as well as video sensors that gain high-level traffic information from live video analysis. The live stream analysis is performed either in the compressed video domain, as added value to simple, already installed surveillance cameras, or in the uncompressed video domain on smart cameras. Spatio-temporal data fusion is applied to all available traffic and environmental data, and the resulting traffic information is published by a DATEX II compliant web service to a web-based traffic desk application, through which operators receive real-time notifications and can visually verify the notified situations. Figure 1 illustrates the applied methods for estimating the mean speed (Figure 1(a)) and the traffic density (Figure 1(b)) for individual lanes in the uncompressed video domain.

Figure 1. Traffic speed and density estimation: (a) motion vector extraction, (b) occupancy computation.

For the compressed video domain, a feature-based traffic state estimation method is applied. It performs statistical computations on motion vectors and applies supervised learning to estimate the prevailing traffic state.
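The compressed-domain path can be sketched as follows: statistical features over a frame's motion vectors feed a supervised classifier. The nearest-centroid learner, the feature set, and the traffic-state labels below are illustrative assumptions; the system's actual model is not specified here.

```python
import numpy as np

def mv_features(motion_vectors):
    """Statistical features over one frame's motion vectors, shape (N, 2)."""
    mags = np.linalg.norm(motion_vectors, axis=1)
    return np.array([mags.mean(), mags.std(), np.median(mags)])

class NearestCentroid:
    """Minimal supervised learner: one feature centroid per traffic state."""

    def fit(self, X, y):
        self.centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
        return self

    def predict(self, x):
        # Return the state whose centroid is closest in feature space.
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(x - self.centroids[c]))
```

Training amounts to averaging the features of labeled frames per state; prediction compares a new frame's features against those centroids.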
The gained high-level traffic states are spatio-temporally fused with all available low-level measurements of the sensors installed on the roads. The fusion results are then published by a DATEX II compliant service to a web-based traffic desk application. With this application, traffic operators and editors are notified in real-time about relevant traffic state and road condition changes on the monitored roads. A direct relation of published events to the available traffic cameras enables instant event verification (Figure 2).

Figure 2. Real-time traffic state verification with camera live streams.

The LOOK2 system has been developed together with ASFINAG, the Austrian operator of motorways and expressways, and has been tested by traffic editors in the production environment for several months.
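One plausible spatio-temporal fusion rule is a weighted average in which each measurement's influence decays with its age and its distance from the road segment of interest. The exponential weighting and the time/distance constants below are assumptions for illustration, not the system's actual fusion model.

```python
import math

def fuse(measurements, t_now, pos, tau_s=300.0, sigma_m=500.0):
    """Fuse scalar measurements (e.g. speeds) from several road-side sensors.

    measurements: list of (value, timestamp_s, position_m) tuples.
    t_now: current time in seconds; pos: segment position in metres.
    Weights decay exponentially with age (time constant tau_s) and with
    distance from the segment of interest (length constant sigma_m).
    """
    num = den = 0.0
    for value, t, x in measurements:
        w = math.exp(-(t_now - t) / tau_s) * math.exp(-abs(pos - x) / sigma_m)
        num += w * value
        den += w
    return num / den if den else None
```

Two fresh, co-located measurements are averaged equally, while a stale or distant measurement contributes almost nothing to the fused value.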
At the Industrial Surveillance Day, ASFINAG and the Alpen-Adria-Universität Klagenfurt (in particular the Institute of Information Technology and the Institute of Networked and Embedded Systems) demonstrate a showcase of their video-based level of service (LOS) classification for smart cameras. This LOS classification system has been developed in a joint Lakeside Labs project in Klagenfurt, Austria. It is part of a case study which aims at improving the quality of traffic messages for two particular traffic situations, LOS and weather-related road conditions (WRRC), on two dedicated test tracks on Austrian motorways. Using a live connection to a smart camera at one of these test tracks, we plan to show a live demonstration of visual speed estimation and LOS classification. This demo is coordinated with our partner SLR Engineering, which provided the smart cameras for the case study.