2001
DOI: 10.1109/5254.956081
Jijo-2: an office robot that communicates and learns

Cited by 57 publications (41 citation statements)
References 15 publications
“…Most solutions are vision-based and use algorithms for face recognition [40,43,44], or in some cases gait and full-body analysis [18,33]. However, only a few recognition systems are actually implemented on real mobile robots, as their perception capabilities are limited by sensor uncertainty, motion and changes in the environment [2,10,15].…”
Section: Related Work
confidence: 99%
“…Most existing robotic systems equipped with vision-based human recognition operate in two separate steps: first, a frame is selected in which the subject satisfies some criteria, such as pose, size or number of visible features; then a standard recognition algorithm is applied against a fixed database of known people [2,15,30]. Unfortunately, this approach ignores important clues such as the temporal and spatial evolution of the subject to be identified.…”
confidence: 99%
“…Similarly, MARVIN gives tours around the lab where it was developed (Koch, Jung, Wettach, Nemeth and Berns 2008). JIJO-2 is an office robot that can guide visitors, deliver messages and arrange meetings (Asoh et al 2001). ARMAR, another robot with a partially humanoid configuration, performs kitchen-related tasks (Stiefelhagen et al 2004).…”
Section: Robots With Language
confidence: 99%
“…The HRP-2 humanoid robot, for example, performs face detection and even portrait drawing (Ido, Matsumoto, Ogasawara and Nisimura 2006). Face detection and recognition is carried out by JIJO-2 (Asoh et al 2001). Pointing gestures of the human interlocutor, as well as gaze direction, are used to constrain interpretations by BIRON (Toptsis, Haasch, Hüwel, Fritsch and Fink 2005), ARMAR (Stiefelhagen et al 2004) and the robot of Hanafiah, Yamazaki, Nakamura and Kuno (2004).…”
Section: Robots With Language
confidence: 99%
“…This reflects the difficulty of simultaneously serving the demanding goals of task- and config-interaction. A few examples of such systems acting in a robotics or pure vision domain can be found in [2,3,4,5]. Some of our own earlier work along these lines is summarised in [6,7].…”
Section: A Dual Interaction Perspective On Artificial Systems
confidence: 99%