Proceedings of IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, MFI2003.
DOI: 10.1109/mfi-2003.2003.1232666

Simple and robust tracking of hands and objects for video-based multimedia production

Cited by 8 publications (8 citation statements)
References 2 publications
“…As a result, subjects have difficulty feeling as if they are actually touching a real object. We solve this problem with a skin color matting technique (Itoh et al., 2003) that exploits the property that skin regions cluster in chroma space. We define a skin color model in chroma space in advance and segment the skin color region from captured images using the model.…”
Section: Elimination of Occlusion by Image Matting (mentioning)
confidence: 99%
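
The chroma-space skin segmentation described in this citation statement can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the implementation of Itoh et al. (2003): it uses OpenCV in YCrCb space with hand-picked threshold values standing in for a learned skin color model, and the file names are hypothetical.

# Minimal sketch: segment skin-colored pixels with a fixed box model in
# chroma (CrCb) space. Threshold values are illustrative assumptions, not
# the parameters used by Itoh et al. (2003).
import cv2
import numpy as np

def segment_skin(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of pixels inside a CrCb skin-color box."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Leave luma (Y) unconstrained; bound only the chroma channels,
    # where skin tones cluster largely independent of brightness.
    lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb minimums
    upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb maximums
    mask = cv2.inRange(ycrcb, lower, upper)
    # Remove small speckles before using the mask for matting.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

if __name__ == "__main__":
    frame = cv2.imread("frame.png")        # hypothetical captured frame
    cv2.imwrite("skin_mask.png", segment_skin(frame))

In practice the fixed box would be replaced by a model fitted to sample skin pixels, but the structure (convert to a luma/chroma space, threshold only the chroma plane, clean up the mask) is the same.
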
“…The majority of existing work on vision-based e-learning systems has focused on automatic camera control using single or multiple PTZ cameras to simulate human video shooting. Recent works are found in Onishi & Fukunaga (2004), Onishi et al. (2000a), Bianchi (2004), Wallick et al. (2004), Rui et al. (2003), Itoh et al. (2003), Ozeki et al. (2002), Ozeki et al. (2004), Kameda et al. (2000), Kameda et al. (2003) and Shimada et al. (2004).…”
Section: Automatic Camera Control (mentioning)
confidence: 97%
“…It is impossible to accurately register objects at different observation depths under this assumption, since the displacement and scaling of each object depend on the camera's varying perspective effects. This means that accurate registration is only possible when there is a single observed object in the scene [6] or when all observed objects are restricted to lie at approximately the same distance from the camera [7]. The global alignment algorithms proposed by Irani & Anandan [8] and Coiras et al. [9] do not account for, or experiment with, scenes containing objects at different depths or in different planes of the image.…”
Section: Related Research (mentioning)
confidence: 99%
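
A short sketch can make the single-plane limitation discussed in this statement concrete. Under the assumption that one global homography is fit to feature matches (using OpenCV here; this is not the algorithm of Irani & Anandan or of Coiras et al.), every pixel receives the same planar warp, so objects off the dominant plane are necessarily misaligned.

# Minimal sketch of single-homography global alignment: one planar transform
# is estimated and applied to all pixels, illustrating why objects at other
# depths end up mis-registered.
import cv2
import numpy as np

def align_global(src_gray: np.ndarray, dst_gray: np.ndarray) -> np.ndarray:
    """Warp src_gray onto dst_gray using one homography fit to ORB matches."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(src_gray, None)
    k2, d2 = orb.detectAndCompute(dst_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    pts_src = np.float32([k1[m.queryIdx].pt for m in matches])
    pts_dst = np.float32([k2[m.trainIdx].pt for m in matches])
    # RANSAC locks onto the dominant plane; features on objects at other
    # depths become outliers and remain misaligned after warping.
    H, _ = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(src_gray, H, (dst_gray.shape[1], dst_gray.shape[0]))
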