2012 IEEE Ninth International Conference on Advanced Video and Signal-Based Surveillance
DOI: 10.1109/avss.2012.63

Robust Traffic State Estimation on Smart Cameras

Abstract: This paper presents a novel method for video-based traffic state detection on motorways performed on smart cameras. Camera calibration parameters are obtained from the known length of lane markings. Mean traffic speed is estimated with the Kanade-Lucas-Tomasi (KLT) optical flow method using robust outlier detection. Traffic density is estimated using a robust statistical counting method. Our method has been implemented on an embedded smart camera and evaluated under different road and illumination conditions. It …
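The abstract names three components: calibration from the known lane-marking length, KLT-based speed estimation with robust outlier rejection, and statistical density counting. As an illustration of the speed-estimation step only, the following is a minimal sketch assuming OpenCV's pyramidal KLT tracker, a precomputed 3x3 image-to-ground homography H (which the calibration step would provide), and a median/MAD rule for outlier rejection; all function names, parameters, and the specific rejection rule are assumptions, not the authors' implementation.

import cv2
import numpy as np

def mean_speed_kph(prev_gray, cur_gray, H, dt, max_pts=200):
    """Estimate mean traffic speed between two consecutive frames.

    prev_gray, cur_gray : consecutive 8-bit grayscale frames (ROI over one lane)
    H  : 3x3 image-to-ground homography (pixels -> metres), assumed to come
         from the calibration against the known lane-marking length
    dt : frame interval in seconds
    """
    # Detect corners in the previous frame and track them with pyramidal KLT.
    pts = cv2.goodFeaturesToTrack(prev_gray, max_pts, qualityLevel=0.01, minDistance=5)
    if pts is None:
        return None
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    ok = status.ravel() == 1
    p0, p1 = pts[ok], nxt[ok]
    if len(p0) == 0:
        return None

    # Map both point sets to the ground plane and measure displacement in metres.
    g0 = cv2.perspectiveTransform(p0.reshape(-1, 1, 2), H).reshape(-1, 2)
    g1 = cv2.perspectiveTransform(p1.reshape(-1, 1, 2), H).reshape(-1, 2)
    speeds = np.linalg.norm(g1 - g0, axis=1) / dt        # m/s per tracked feature

    # Robust outlier rejection: keep features within 3 MADs of the median speed.
    med = np.median(speeds)
    mad = np.median(np.abs(speeds - med)) + 1e-6
    inliers = speeds[np.abs(speeds - med) < 3.0 * mad]
    return float(np.mean(inliers)) * 3.6                 # km/h

Averaging per-feature ground-plane speeds after a median-based rejection step is one common way to make such an estimate insensitive to mistracked features; the paper's actual outlier test may differ.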

Cited by 8 publications (5 citation statements); references 14 publications.

Citation statements (ordered by relevance):
“…Therefore, the workload reflects the arrival rate of the vehicles. We used real workloads for the learning algorithm by taking several test recordings on different highways, where we recorded the interarrival times of vehicles by a vehicle detection algorithm [Pletzer et al 2012]. Figure 5 shows the interarrival times distribution (spaced by an interval of 5 seconds) of four different test recordings on different locations.…”
Section: Workload Estimation
confidence: 99%
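As a small illustration of the workload characterisation described in the statement above, the following sketch bins vehicle interarrival times into 5-second intervals from a list of detection timestamps. The function name, the NumPy-based histogramming, and the maximum-gap cutoff are illustrative assumptions; the citing paper does not specify its implementation.

import numpy as np

def interarrival_histogram(detection_times_s, bin_width_s=5.0, max_gap_s=60.0):
    """Bin vehicle interarrival times, as used for workload characterisation.

    detection_times_s : sorted timestamps (seconds) at which vehicles were detected
    bin_width_s       : histogram bin width (5-second intervals, as in the statement)
    max_gap_s         : largest gap covered by the histogram (assumed cutoff)
    """
    times = np.asarray(detection_times_s, dtype=float)
    gaps = np.diff(times)                         # time between consecutive vehicles
    edges = np.arange(0.0, max_gap_s + bin_width_s, bin_width_s)
    counts, edges = np.histogram(gaps, bins=edges)
    return counts, edges

# Example with hypothetical detector timestamps (seconds):
counts, edges = interarrival_histogram([0.0, 3.2, 9.8, 11.0, 27.5, 31.1])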
“…The motivation of this research is based on our previous work on traffic monitoring [Pletzer et al 2012; Bischof et al 2010] and the development of a mobile, multi-camera traffic surveillance system [Khan et al 2011]. In contrast to most of the existing traffic surveillance systems which are mostly based on fixed installations and large sensors, our portable platform can be easily deployed and used for various monitoring tasks, including law enforcement and construction site monitoring.…”
Section: Introduction
confidence: 98%
“…The last factor requires an extended comment. Some approaches described in the literature [41,52] were designed and tested on sequences recorded by cameras mounted over a road, where the vehicle movement is usually smooth (e.g. highway).…”
Section: Vehicle Detection and Counting
confidence: 99%
“…It would require expensive hardware to track objects using state-of-the-art tracking methods, such as particle filters [40].

Smart-camera applications reported in related work:
[18] Edge detection
Wolf et al [3] Human gesture recognition, region extraction, contour detection, and template matching
Lin et al [23] Gesture recognition
Muehlmann et al [50] Real-time tracking
Heyrman et al [51] Motion detection
Bramberger et al [9] Traffic surveillance, multi-camera object tracking
Chen and Aghajan [19] Gesture recognition using a smart camera network
Quaritsch et al [21] Multi-camera tracking, CamShift
Rinner and Wolf [4] Scene abstraction
Aghajan et al [20] Human pose estimation
Sankaranarayanan et al [24] Object detection, recognition, and tracking
Tessens et al [30] Foreground detection, subsampling
Wang et al [22] Tracking, event detection, and foreground detection
Casares and Velipasalar [28] Foreground detection, tracking feedback
Sidla et al [52] Traffic monitoring
Pletzer et al [53] Traffic monitoring, vehicle speed, and vehicle count
Wang et al [29] Foreground detection, contour tracking
Cuevas and Garcia [54] Single-camera tracking, background modelling

In order to decide the best view, smart cameras need to share foreground information with each other. Figure 4 shows the fraction of image area that belongs to the foreground for real surveillance footage of 24 hours.…”
Section: Limitations Of Smart Cameras
confidence: 99%
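The last statement mentions exchanging per-frame foreground information between smart cameras to decide on the best view. As a rough sketch only, the following computes the fraction of foreground pixels per frame with OpenCV's MOG2 background subtractor; the choice of subtractor, its parameters, and the function name are assumptions for illustration, not the surveyed systems' actual methods.

import cv2
import numpy as np

# Running background subtractor; MOG2 is an assumed stand-in for whatever
# foreground segmentation the cited systems actually use.
bg_sub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def foreground_fraction(frame_bgr):
    """Return the fraction of pixels classified as foreground in this frame.

    A smart camera could exchange this single scalar with its neighbours
    to decide which view currently observes the most activity.
    """
    mask = bg_sub.apply(frame_bgr)               # 0 = background, 255 = foreground
    fg_pixels = np.count_nonzero(mask == 255)    # ignores the shadow label (127)
    return fg_pixels / mask.size

Exchanging only this scalar keeps the inter-camera traffic small, which matches the statement's point that full state-of-the-art tracking would be too expensive on such hardware.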