Abstract: Musical beat tracking is an enabling technology for human-robot interaction such as musical sessions. Since such interaction should take place naturally in a variety of environments, a robot's beat tracker must cope with noise sources such as environmental noise, its own motor noise, and its own voice, using only its own microphone. This paper addresses a musical beat tracking robot that can step, scat, and sing in time with musical beats by using its own microphone. To realize such a robot, we propose a robust beat tracking method based on two key techniques: spectro-temporal pattern matching and echo cancellation. The former realizes robust tempo estimation with a shorter window length and thus adapts quickly to tempo changes. The latter cancels self-generated noises such as stepping, scatting, and singing. We implemented the proposed beat tracking method on Honda ASIMO. Experimental results showed ten-times-faster adaptation to tempo changes and high robustness in beat tracking against stepping, scatting, and singing noises. We also demonstrated the robot stepping in time with musical beats while scatting or singing.
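The tempo-estimation idea in this abstract, matching the recent spectro-temporal pattern of the signal against time-shifted copies of itself, can be sketched as follows. This is a minimal illustration under assumed names (the function, its parameters, and the onset-strength input are not from the paper):

```python
import numpy as np

def estimate_tempo(onset, fps, min_bpm=90, max_bpm=180):
    """Estimate tempo (BPM) from an onset-strength envelope by scoring
    the envelope's similarity to itself at each candidate beat period.
    `onset` is a 1-D array of onset strengths, `fps` is frames per second.
    All names and defaults here are illustrative assumptions."""
    min_lag = int(60.0 * fps / max_bpm)   # shortest candidate beat period (frames)
    max_lag = int(60.0 * fps / min_bpm)   # longest candidate beat period (frames)
    best_lag, best_score = None, -np.inf
    for lag in range(min_lag, max_lag + 1):
        # Similarity between the envelope and itself shifted by `lag`,
        # normalized by the overlap length.
        score = float(np.dot(onset[lag:], onset[:-lag])) / (len(onset) - lag)
        if score > best_score:
            best_lag, best_score = lag, score
    return 60.0 * fps / best_lag          # convert best period to BPM
```

For example, an impulse train with one onset every 50 frames at 100 frames per second corresponds to 120 BPM, which this sketch recovers. Because the window only needs to cover a few beat periods, such pattern matching can follow tempo changes with a short window, the property the abstract emphasizes.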
This paper presents a model for the behavior and dialogue planning module of conversational service robots. Most previously built conversational robots lack the dialogue management necessary for accurately recognizing human intentions and providing information to humans. Our model integrates robot behavior planning with spoken dialogue management robust enough to engage in mixed-initiative dialogues in specific domains. It has two layers: the upper layer is responsible for global task planning using hierarchical planning, and the lower layer performs local planning using modules called experts, each specialized for certain kinds of tasks involving physical actions and dialogues. The model enables switching and canceling tasks based on recognized human intentions. A preliminary implementation of the model, integrated with Honda ASIMO, has shown its effectiveness.

Index Terms: conversational robot, service robot, behavior and dialogue planning, dialogue management
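The two-layer structure described above can be sketched minimally: an upper layer that hierarchically expands a global task into subtasks, and a lower layer that hands each subtask to a specialized expert. The domain, task names, and expert behaviors below are invented for illustration and are not from the paper:

```python
# Upper layer: a hypothetical task hierarchy mapping a global task
# to an ordered sequence of primitive subtasks.
TASK_TREE = {
    "serve_guest": ["greet", "take_order", "deliver"],
}

# Lower layer: one "expert" per subtask. Real experts would perform
# physical actions and dialogue; these stubs just return an utterance.
EXPERTS = {
    "greet": lambda: "Hello, welcome!",
    "take_order": lambda: "What would you like?",
    "deliver": lambda: "Here you are.",
}

def run_task(task):
    subtasks = TASK_TREE.get(task, [task])   # upper layer: expand the task
    return [EXPERTS[s]() for s in subtasks]  # lower layer: run each expert
```

Separating global expansion from local execution is what lets the model switch or cancel tasks mid-plan: the upper layer can replace the remaining subtask sequence without touching the experts themselves.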
This paper presents an intelligence model for conversational service robots. It employs modules called experts, each specialized to execute certain kinds of tasks, such as performing physical behaviors and engaging in dialogues. Some of the experts are responsible for understanding human utterances and deciding robot utterances or actions. The model enables switching and canceling tasks based on recognized human intentions, as well as parallel execution of several tasks. It specifies the interface that an expert must have, and any kind of expert can be employed as long as it conforms to that interface; this makes the model extensible.

Key words: conversational robot, conversational agent, robot intelligence, behavior and dialogue control, multi-expert model

⋆ This paper is a considerably extended version of [1]. We thank the Association for Computational Linguistics for permission to reuse the material.
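The fixed expert interface and intention-driven task switching described in this abstract can be illustrated with a small sketch. The method names, example experts, and dispatcher logic are assumptions for illustration, not the paper's actual API:

```python
from abc import ABC, abstractmethod

class Expert(ABC):
    """Illustrative expert interface: any module implementing these two
    methods can be plugged in, which is what makes the model extensible."""
    @abstractmethod
    def can_handle(self, utterance: str) -> bool: ...
    @abstractmethod
    def respond(self, utterance: str) -> str: ...

class WeatherExpert(Expert):
    def can_handle(self, u): return "weather" in u
    def respond(self, u): return "Checking the weather."

class GuideExpert(Expert):
    def can_handle(self, u): return "where" in u
    def respond(self, u): return "Let me guide you there."

class Dispatcher:
    """Switches the active expert whenever another expert claims the new
    utterance, modeling task switching driven by recognized intention."""
    def __init__(self, experts):
        self.experts, self.active = experts, None
    def handle(self, utterance):
        for e in self.experts:
            if e.can_handle(utterance):
                self.active = e   # switch (or cancel) the current task
                break
        return self.active.respond(utterance) if self.active else "Sorry?"
```

If no expert claims an utterance, the currently active expert keeps the floor, so an ongoing task continues until a recognized intention triggers a switch.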