2007
DOI: 10.1016/j.cviu.2006.10.014

Vision-based motion estimation for interaction with mobile devices

Cited by 27 publications (18 citation statements)
References 9 publications
“…According to [33], the best architecture is a superscalar, superpipelined structure. This design strategy has been adopted and the resulting system allows our system to be used as an embedded coprocessor in the resolution of other problems, for inclusion in mobile devices such as the one described in [34] for instance. In Fig.…”
Section: From Coarse To Superpipeline Architecture
confidence: 99%
“…This movement is generally mapped to the pointer displacement as an alternative to a mouse or a touch screen. Since most mobile devices have a single camera, the movement is typically tracked using feature-based algorithms such as optical flow and marker tracking (Haro et al., 2005; Wang et al., 2006; Hannuksela et al., 2007), instead of homography induced from multiple views. However, feature-based tracking can suffer from a lack of robust features or error accumulation (Barron et al., 1994), leading to the need for more sensors for accurate tracking.…”
Section: Motion-sensing Interfaces For Mobile Devices
confidence: 99%
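The pointer-mapping idea in the statement above can be sketched in a few lines. The gain, screen size, and clamping policy below are illustrative assumptions for the sketch, not details taken from the cited papers:

```python
def update_pointer(pointer, motion, gain=4.0, screen=(320, 240)):
    """Map an estimated per-frame camera translation (dx, dy) to a pointer
    displacement, as a mouse/touch-screen alternative.

    `gain` (scale from image motion to pixels) and the clamp to the screen
    rectangle are hypothetical choices, not values from the cited work.
    """
    x = min(max(pointer[0] + gain * motion[0], 0), screen[0] - 1)
    y = min(max(pointer[1] + gain * motion[1], 0), screen[1] - 1)
    return (x, y)
```

For example, a small rightward camera motion of (2, -1) moves a pointer at (100, 100) to (108, 96), while motion that would push the pointer off-screen is clamped at the edge.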
“…5. For details, please see the paper by Hannuksela et al 8 The blocks in the top left present the selected image regions to be used, while the lines in the top right image illustrate the block displacement estimates, d, and ellipses show the related uncertainties. The bottom left image shows the trusted features that are used for parametric model fitting.…”
Section: Motion Based User Interface
confidence: 99%
“…8 Our approach utilises feature-based motion analysis, where a sparse set of blocks is first selected from one image and their displacements are then determined. To improve the accuracy of the motion information, the uncertainty of these features is also analysed.…”
Section: Motion Based User Interface
confidence: 99%
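The block-based scheme these statements describe — selecting textured blocks, estimating their displacements, weighting them by an uncertainty measure, and fitting a global motion model — can be sketched as follows. This is a minimal illustration under stated assumptions: block size, search radius, the variance-based block selector, and the SSD-curvature uncertainty proxy are all illustrative choices, not the authors' implementation.

```python
import numpy as np

def select_blocks(img, block=8, n_blocks=16):
    """Pick the blocks with the highest intensity variance (textured regions)."""
    h, w = img.shape
    scores = []
    for y in range(0, h - block, block):
        for x in range(0, w - block, block):
            patch = img[y:y + block, x:x + block]
            scores.append((patch.var(), y, x))
    scores.sort(reverse=True)
    return [(y, x) for _, y, x in scores[:n_blocks]]

def match_block(prev, curr, y, x, block=8, radius=4):
    """Exhaustive SSD search; returns the displacement and an uncertainty proxy."""
    ref = prev[y:y + block, x:x + block].astype(float)
    best, best_d, ssd = np.inf, (0, 0), {}
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > curr.shape[0] or xx + block > curr.shape[1]:
                continue
            s = ((ref - curr[yy:yy + block, xx:xx + block].astype(float)) ** 2).sum()
            ssd[(dy, dx)] = s
            if s < best:
                best, best_d = s, (dy, dx)
    # Uncertainty proxy: a sharper SSD minimum (higher curvature) means a
    # more reliable displacement estimate.
    neigh = [ssd.get((best_d[0] + a, best_d[1] + b), best)
             for a, b in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    curvature = max(np.mean(neigh) - best, 1e-6)
    return best_d, 1.0 / curvature

def global_motion(prev, curr):
    """Fit a global translation as the inverse-uncertainty-weighted mean
    of the per-block displacement estimates."""
    ds, ws = [], []
    for y, x in select_blocks(prev):
        d, unc = match_block(prev, curr, y, x)
        ds.append(d)
        ws.append(1.0 / (unc + 1e-9))
    ds, ws = np.array(ds, float), np.array(ws)
    return (ds * ws[:, None]).sum(0) / ws.sum()
```

On a synthetic pair where the second frame is the first shifted by (2, 3) pixels, `global_motion` recovers that translation; in a full system this translational fit would be replaced by a richer parametric model fitted only to the trusted, low-uncertainty features.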