2015 IEEE Winter Applications and Computer Vision Workshops
DOI: 10.1109/wacvw.2015.6
Discovery of Sets of Mutually Orthogonal Vanishing Points in Videos

Abstract: While vanishing point (VP) estimation has received extensive attention, most approaches focus on static images or perform detection and tracking separately. In this paper, we focus on man-made environments and propose a novel method for detecting and tracking groups of mutually orthogonal vanishing points (MOVP), also known as Manhattan frames, jointly from monocular videos. The method is unique in that it is designed to enforce orthogonality in groups of VPs, temporal consistency of each individual MOVP, and …

Cited by 3 publications (5 citation statements) | References 35 publications
“…Because the road surface is not completely flat, the translation of the host vehicles is not completely pure, which causes a small fluctuation of the R-VPs. To compensate for the fluctuation, possible intersection points are firstly found and an intersection point with the highest vote is detected as R-VP by using voting methods such as MLE (maximal likelihood estimator) voting [ 9 ], probabilistic voting [ 10 ], line-soft-voting [ 12 ], cell-based voting [ 16 ], and RANSAC-based voting [ 18 , 21 , 22 , 24 ]. The next sub-sections summarize the three main processing steps of existing motion-based R-VP detection described in Figure 1 .…”
Section: Related Work
confidence: 99%
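The voting step described in the quoted passage can be illustrated with a minimal RANSAC-style sketch: hypothesize a vanishing point from two line segments, then score it by how many other segments pass near it. This is an illustrative reconstruction, not the method of any cited paper; the function name `ransac_vp` and its parameters are hypothetical.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def ransac_vp(segments, iters=500, thresh=2.0, seed=0):
    """RANSAC-style voting for a vanishing point.

    `segments` is a list of ((x1, y1), (x2, y2)) endpoint pairs.
    Each iteration intersects two random segment lines to hypothesize
    a VP, then counts how many segment lines pass within `thresh`
    pixels of it; the hypothesis with the most votes wins.
    """
    rng = np.random.default_rng(seed)
    lines = np.array([line_through(p, q) for p, q in segments])
    best_vp, best_votes = None, -1
    for _ in range(iters):
        i, j = rng.choice(len(lines), size=2, replace=False)
        vp = np.cross(lines[i], lines[j])   # intersection of two lines
        if abs(vp[2]) < 1e-9:
            continue                        # near-parallel pair, no finite VP
        vp = vp / vp[2]
        # point-to-line distance for every segment line
        norms = np.linalg.norm(lines[:, :2], axis=1)
        dists = np.abs(lines @ vp) / norms
        votes = int((dists < thresh).sum())
        if votes > best_votes:
            best_vp, best_votes = vp, votes
    return best_vp, best_votes
```

The same voting skeleton accommodates the soft/probabilistic variants mentioned in the quote by replacing the hard inlier count with a weighted score.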
“…Recently, the number of geometrical feature-based methods for R-VP detection increased significantly. These recent existing methods can be classified into four groups based on types of geometrical features used in the methods, such as line segment-based [ 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 ], edge-based [ 30 , 31 , 32 , 33 ], motion-based [ 34 , 35 , 36 , 37 ], and texture-based methods [ 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 ].…”
Section: Introduction
confidence: 99%
“…VPs have also been used for estimation of external camera parameters: orientation of camera to scene [4,15], 3D shape to camera [3], and as additional constraints for full camera poses [22]. Often, further scene-dependent VP constraints are included: mutual VP orthogonality (Manhattan World) [10,11,13,33], sets of mutually orthogonal VPs [28,16], with a shared vertical VP (Atlanta World) [2,27].…”
Section: Related Work
confidence: 99%
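The mutual-orthogonality (Manhattan World) constraint mentioned above can be checked by back-projecting image VPs to 3D directions with the camera intrinsics and testing pairwise dot products. The sketch below is illustrative only, assuming a known calibration matrix K; the function names and tolerance are not from the source.

```python
import numpy as np

def vp_directions(vps, K):
    """Back-project homogeneous image vanishing points to unit 3D
    direction vectors via d ∝ K^{-1} v."""
    Kinv = np.linalg.inv(K)
    D = (Kinv @ np.asarray(vps, float).T).T
    return D / np.linalg.norm(D, axis=1, keepdims=True)

def is_manhattan_triplet(vps, K, tol_deg=2.0):
    """True if three VPs correspond to mutually orthogonal directions
    within `tol_deg` degrees (|dot| handles the sign ambiguity of a
    VP direction)."""
    D = vp_directions(vps, K)
    tol = np.sin(np.radians(tol_deg))   # |cos(90° ± tol_deg)|
    for i in range(3):
        for j in range(i + 1, 3):
            if abs(D[i] @ D[j]) > tol:
                return False
    return True
```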
“…[13] extracts orthogonal VPs independently in multiple views, and integrates information across views by using SfM (Structure-from-Motion) camera pose estimates. [11,16] explicitly track orthogonal VPs in videos. [11] extracts VPs separately in each frame, and greedily links VPs across frames.…”
Section: Related Work
confidence: 99%
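The greedy cross-frame linking attributed to [11] can be sketched as a nearest-angle assignment over unit VP direction vectors: repeatedly link the closest unused (previous, current) pair until the angle exceeds a gate. This is an illustrative reconstruction under assumed inputs, not the procedure actually used in [11].

```python
import numpy as np

def link_vps(prev_vps, curr_vps, max_angle_deg=5.0):
    """Greedy frame-to-frame association of unit VP direction vectors.

    Links the globally closest (prev, curr) pairs by angle, each index
    used at most once, stopping once the best remaining pair exceeds
    `max_angle_deg`. Returns a list of (i, j) index links.
    """
    prev_vps = np.asarray(prev_vps, float)
    curr_vps = np.asarray(curr_vps, float)
    # pairwise angles; |dot| ignores the sign ambiguity of a VP direction
    ang = np.degrees(np.arccos(np.clip(
        np.abs(prev_vps @ curr_vps.T), 0.0, 1.0)))
    links, used_i, used_j = [], set(), set()
    for i, j in sorted(np.ndindex(*ang.shape), key=lambda ij: ang[ij]):
        if ang[i, j] > max_angle_deg:
            break
        if i not in used_i and j not in used_j:
            links.append((i, j))
            used_i.add(i)
            used_j.add(j)
    return links
```

Greedy linking is cheap but frame-local; the paper's stated contribution is precisely to replace such per-frame greedy association with joint, temporally consistent MOVP tracking.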