In this paper, we present a complete platform for the semiautomatic and simultaneous generation of human-machine dialog applications in two separate modalities (voice and Web) and several languages, aimed at services that retrieve or modify information in a database (data-centered services). Since one of the main objectives of the platform is to unify the application design process regardless of modality or language, and only then complete it with the specific details of each, the design process begins with a general description of the application, the data model, the database access functions, and a generic finite state diagram describing the application flow. With this information, the actions to be carried out in each state of the dialog are defined. The specific characteristics of each modality and language (grammars, prompts, presentation aspects, user levels, etc.) are then specified in later assistants. Finally, the scripts that execute the application in the real-time system are generated automatically. We describe each assistant in detail, emphasizing the methodologies followed to ease the design process, especially its critical aspects. We also describe the strategies and features applied to give the platform portability, robustness, adaptability, and high performance, and we address important issues in dialog applications such as mixed initiative, over-answering, confirmation handling, and presenting long lists of information to the user. Finally, we report the results obtained in a subjective evaluation with different designers and in the creation of two full applications; these results confirm the usability, flexibility, and standardization of the platform and point to new research directions.
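The abstract does not publish the platform's specification format. As a purely illustrative sketch, the hypothetical Python structure below shows how a data-centered dialog flow of the kind described (states, per-state actions, database access functions, and per-language prompts) might be captured before the modality-specific scripts are generated; all state names, fields, and the db_function hook are assumptions for illustration, not the platform's actual representation.

```python
# Hypothetical sketch only: a minimal, modality-neutral description of a
# data-centered dialog flow, of the kind a design assistant could later refine
# into voice and Web scripts. Names and fields are illustrative assumptions.

dialog_flow = {
    "start_state": "ask_account",
    "states": {
        "ask_account": {
            "action": "collect",                 # gather a slot from the user
            "slot": "account_id",
            "prompts": {"en": "Which account?", "es": "¿Qué cuenta?"},
            "next": "show_balance",
        },
        "show_balance": {
            "action": "db_query",                # call a database access function
            "db_function": "get_balance",        # defined in the data model layer
            "args": ["account_id"],
            "prompts": {"en": "Your balance is {balance}.",
                        "es": "Su saldo es {balance}."},
            "next": "end",
        },
        "end": {"action": "terminate"},
    },
}

def next_state(flow: dict, current: str) -> str | None:
    """Return the name of the state that follows `current`, if any."""
    return flow["states"][current].get("next")
```

In the platform described, the later assistants would then attach the modality- and language-specific details (grammars, prompts, presentation aspects, user levels) to each state before the real-time scripts are generated.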
We present a new approach for egomotion computation and the detection of independent motion in the scene. In contrast to related work, we apply statistical methods based on the normal optical flow field. We extract features for supervised and unsupervised training from the normal optical flow field in order to train a Gaussian-distribution classifier (GDC) and a Kohonen feature map. Finally, in a test phase, egomotion is computed by classifying features extracted from the normal optical flow field into the unknown motion direction. For the detection of independent motion, the scene is divided into regions, and for each region a decision is made as to whether the normal flow in that region is caused by the camera motion or by an independently moving object. We present results of this approach showing a recognition rate of up to 97% for egomotion classification and a detection rate of up to 87% for moving objects.

MOTIVATION
Applications of state-of-the-art image analysis can be found in autonomous ground vehicles, robotics, industrial production, etc. Since the camera is itself a moving part in many of these systems, estimating the viewer's motion is as important as detecting independently moving objects in the scene [1, 2, 8]. Similar problems have to be solved in active vision systems where, in addition to various changes of the camera parameters, the camera is moved purposively in order to solve the vision tasks more efficiently. We have to know the egomotion in order to detect independently moving objects in the scene, and conversely, we also have to know about moving objects in order to estimate the egomotion. A solution to this chicken-and-egg problem is to compute global features from the motion field [4]. In this contribution we present a new approach, which extends global feature detection by statistical classification in order to detect independently moving objects and to estimate camera motion. Additionally, a Kohonen feature map [6] is trained, which allows for an unsupervised clustering of the features into different motion classes. (This work was funded partially by the German Research Foundation (DFG) under grant number SFB 182. Only the authors are responsible for the contents.)

THEORETICAL BACKGROUND
In [4] an approach for the calculation of egomotion has been proposed. It was shown that for every egomotion, described by the rotation (α, β, γ) and the translation (u, v, w), there exists a certain motion pattern (cf. Fig. 1, left) in the image plane calculated from the normal flow, i.e. the optical flow projected onto the image gradient between two images. The unknown rotation parameters (α, β, γ) and the focus of expansion (x0, y0) = (u/w, v/w) are estimated by a search in the motion parameter space (α, β, γ, u, v, w) to match the significant motion patterns with the observed motion field in the image. More details of this global feature extraction scheme can be found in [4, 5]. This method of motion analysis is based on a computati...
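The abstract does not include the authors' implementation. As a hedged sketch only, the following Python code illustrates the general idea of a Gaussian-distribution classifier applied to feature vectors extracted from a normal optical flow field, with one Gaussian per discrete motion class; the feature extraction (a magnitude-weighted orientation histogram of the normal flow) and the class labels are assumptions, not the features or classes used in the paper.

```python
import numpy as np

def normal_flow_features(nf_x: np.ndarray, nf_y: np.ndarray, bins: int = 8) -> np.ndarray:
    """Illustrative feature vector: a magnitude-weighted orientation histogram
    of the normal flow vectors over an image or image region (an assumption,
    not the paper's exact feature set)."""
    angles = np.arctan2(nf_y, nf_x).ravel()
    mags = np.hypot(nf_x, nf_y).ravel()
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi), weights=mags)
    return hist / (hist.sum() + 1e-12)

class GaussianDistributionClassifier:
    """One multivariate Gaussian per motion class; maximum-likelihood decision."""

    def fit(self, X: np.ndarray, y: np.ndarray) -> "GaussianDistributionClassifier":
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            mean = Xc.mean(axis=0)
            # Small diagonal term keeps the covariance invertible.
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            self.params_[c] = (mean, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        scores = []
        for c in self.classes_:
            mean, inv_cov, logdet = self.params_[c]
            d = X - mean
            # Log-likelihood up to an additive constant (Mahalanobis distance term).
            scores.append(-0.5 * (np.einsum("ij,jk,ik->i", d, inv_cov, d) + logdet))
        return self.classes_[np.argmax(np.stack(scores), axis=0)]
```

Trained on features from sequences with known camera motion, such a classifier assigns a test flow field to one of a set of discrete motion directions; the paper's per-region test for independent motion applies an analogous decision to each region of the scene.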
No abstract
The EC-funded research project GEMINI (Generic Environment for Multilingual Interactive Natural Interfaces) has two main objectives: on the one hand, the development and implementation of a platform able to produce user-friendly, interactive, multilingual, and multimodal dialogue interfaces to databases with a minimum of human effort; on the other hand, the demonstration of the platform's efficiency through the development of two different applications built with it. The platform consists of different assistants that help the user semi-automatically generate dialogue applications. Its open and modular architecture simplifies the adaptation of the generated applications to different use cases.