1990
DOI: 10.1007/bf00126501
The feasibility of motion and structure from noisy time-varying image velocity information

Abstract: This research addresses the problem of noise sensitivity inherent in motion and structure algorithms. The motion and structure paradigm is a two-step process. First, image velocities and, perhaps, their spatial and temporal derivatives are measured from time-varying image intensity data; second, these data are used to compute the motion of a moving monocular observer in a stationary environment, relative to a single 3-D planar surface, under perspective projection. The first contribution of this ar…
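The second step of the paradigm described in the abstract can be illustrated with a minimal sketch. It assumes the classical 8-parameter quadratic flow model for a planar surface under perspective projection (a standard formulation, not necessarily the paper's exact one) and recovers the parameters from noisy image velocities by linear least squares; the noise sensitivity the paper studies shows up directly as growth in the parameter error:

```python
import numpy as np

# Assumed 8-parameter quadratic flow model for a planar surface under
# perspective projection (illustrative, not the paper's exact notation):
#   u = a1 + a2*x + a3*y + a7*x**2 + a8*x*y
#   v = a4 + a5*x + a6*y + a7*x*y  + a8*y**2
def design_matrix(x, y):
    z = np.zeros_like(x)
    o = np.ones_like(x)
    rows_u = np.stack([o, x, y, z, z, z, x * x, x * y], axis=1)
    rows_v = np.stack([z, z, z, o, x, y, x * y, y * y], axis=1)
    return np.vstack([rows_u, rows_v])

rng = np.random.default_rng(0)
a_true = np.array([0.5, -0.1, 0.2, -0.3, 0.05, 0.1, 0.02, -0.01])

# Sample image positions and the exact flow they induce.
x = rng.uniform(-1.0, 1.0, 200)
y = rng.uniform(-1.0, 1.0, 200)
A = design_matrix(x, y)
flow = A @ a_true

# Re-estimate the parameters from increasingly noisy velocity measurements.
for sigma in (0.0, 0.01, 0.05):
    noisy = flow + rng.normal(0.0, sigma, flow.shape)
    a_est, *_ = np.linalg.lstsq(A, noisy, rcond=None)
    print(f"noise sigma={sigma}: parameter error "
          f"{np.linalg.norm(a_est - a_true):.4f}")
```

With zero noise the parameters are recovered essentially exactly; as the velocity noise grows, the estimated motion/structure parameters degrade, which is the sensitivity this paper analyzes.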

Cited by 35 publications (12 citation statements) | References 33 publications
“…An advantage of these methods is that the motion in each patch is computed independently, so the methods can deal with multiple moving objects. A problem with these methods is that they are sensitive to errors in the flow-field measurements, particularly for small patches (see Waxman and Wohn [1988] and Barron et al [1990]). …”
Section: Instantaneous-EME Algorithms (mentioning)
confidence: 99%
“…Fortunately, in most vision applications a crude degree of separation is sufficient in that the occurrence of more than two or three velocities in a small neighborhood is unlikely. This also means, however, that a subsequent stage of processing is required because the accuracy required for tasks such as the determination of ego-motion and surface parameters is greater than the tuning width of single filters (Barron 1988). Previous frequency-based approaches toward this end have been amplitude-based, and have sacrificed velocity resolution as a consequence of using the relative amplitudes of differently tuned filters (Adelson and Bergen 1986; Heeger 1987, 1988).…”
Section: Introduction (mentioning)
confidence: 99%
“…Since it is impossible to determine absolute values of the translation and depth using monocular schemes, 3D interpretation can only be achieved by applying an arbitrary scale factor to the relative translational motion and depth values [1]. Secondly, virtually none of the parameter reconstruction techniques presented in the literature provide reliable results when applied to the optical flow fields calculated from realistic scenes due to the difficulties involved in extracting accurate flow fields [6]. Thirdly, most parameter estimation algorithms designed to solve equations of motion are characterized by some form of nonlinearity.…”
Section: Introduction (mentioning)
confidence: 97%
“…However, a number of major problems exist in the 3D motion parameter estimation field. Firstly, while monocular observers designed to visualize relative motion within a scene have the benefit of an extremely simple hardware structure, they are unable to recover translational motion parameters or the coordinates of 3D structures with any degree of reliability due to their inherent depth-speed ambiguity [6]. Since it is impossible to determine absolute values of the translation and depth using monocular schemes, 3D interpretation can only be achieved by applying an arbitrary scale factor to the relative translational motion and depth values [1].…”
Section: Introduction (mentioning)
confidence: 99%
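The depth-speed ambiguity raised in the excerpts above can be checked numerically. A minimal sketch, assuming the standard instantaneous flow equations for a purely translating monocular observer under perspective projection (focal length f, assumed here as f = 1): scaling the translation and the depth by the same factor leaves the image velocity field unchanged, so no monocular scheme can recover their absolute values.

```python
import numpy as np

def translational_flow(x, y, T, Z, f=1.0):
    # Standard instantaneous image velocity of a point at depth Z for a
    # monocular observer translating with T = (Tx, Ty, Tz), under
    # perspective projection with focal length f.
    Tx, Ty, Tz = T
    u = (x * Tz - f * Tx) / Z
    v = (y * Tz - f * Ty) / Z
    return u, v

x, y = 0.3, -0.2
T = np.array([0.1, -0.05, 0.4])
Z = 2.0
k = 3.7  # arbitrary scale factor

u1, v1 = translational_flow(x, y, T, Z)
u2, v2 = translational_flow(x, y, k * T, k * Z)

# (T, Z) and (k*T, k*Z) produce identical image velocities.
print(np.isclose(u1, u2) and np.isclose(v1, v2))  # → True
```

Since the k cancels between numerator and denominator, only the ratio of translation to depth is observable, which is exactly why the cited works resort to an arbitrary scale factor for 3D interpretation.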