Vid2Param: Modeling of Dynamics Parameters From Video
2020
DOI: 10.1109/lra.2019.2959476

Abstract: Videos provide a rich source of information, but it is generally hard to extract dynamical parameters of interest. Inferring those parameters from a video stream would be beneficial for physical reasoning. Robots performing tasks in dynamic environments would benefit greatly from understanding the underlying environment motion, in order to make future predictions and to synthesize effective control policies that use this inductive bias. Online physical reasoning is therefore a fundamental requirement for robus…

Cited by 7 publications (13 citation statements)
References 33 publications
“…Some studies were conducted to predict the future movement of balls rolling in elliptical bowls [8,32], and others were conducted to predict future object states and future frames by inputting short video sequences [9,33] using the spatial transform network [34]. A study was also conducted to predict the future trajectories of a bouncing ball [5] by using a variational recurrent neural network [35].…”
Section: Learning Explicit Physics
confidence: 99%
“…Some of these models were re-trained using real data to apply to real images [10]. Some other models were trained with synthetic data and applied directly to real data by generating synthetic data according to domain randomization techniques [5,40].…”
Section: Learning For Real Scenes
confidence: 99%
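The domain randomization mentioned above — training on synthetic data whose physics parameters are deliberately varied so a model transfers directly to real data — can be sketched as below. This is a minimal illustrative example, not the paper's pipeline; the parameter names, ranges, and the simple 1-D bouncing-ball simulator are assumptions chosen for clarity.

```python
import random

def sample_randomized_params(rng: random.Random) -> dict:
    """Draw one set of physics parameters from wide uniform ranges.

    The ranges here are illustrative assumptions; in practice they are
    chosen wide enough that real-world dynamics fall inside them.
    """
    return {
        "gravity": rng.uniform(5.0, 15.0),     # m/s^2
        "restitution": rng.uniform(0.5, 1.0),  # energy kept per bounce
        "air_drag": rng.uniform(0.0, 0.2),     # linear drag coefficient
    }

def simulate_bounce(params: dict, h0: float = 2.0, v0: float = 0.0,
                    dt: float = 0.01, steps: int = 300) -> list:
    """Simple 1-D bouncing-ball rollout under the sampled parameters."""
    h, v = h0, v0
    traj = []
    for _ in range(steps):
        # Gravity pulls down; drag damps velocity toward zero.
        v -= (params["gravity"] + params["air_drag"] * v) * dt
        h += v * dt
        if h < 0.0:  # bounce: clamp to ground, flip and attenuate velocity
            h = 0.0
            v = -v * params["restitution"]
        traj.append(h)
    return traj

# Build a randomized synthetic dataset of (trajectory, parameters) pairs,
# which a model could then be trained on before being applied to real video.
rng = random.Random(0)
dataset = []
for _ in range(100):
    p = sample_randomized_params(rng)
    dataset.append((simulate_bounce(p), p))
```

Because each trajectory is generated under different dynamics, a model trained on such a dataset cannot overfit to one fixed simulator configuration, which is the core idea behind sim-to-real transfer via domain randomization.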