Model-driven software engineering (MDE) is a well-known approach for developing software. It reduces complexity, facilitates maintenance, and allows for the simulation, verification, validation and execution of software models. In this article, we show how MDE and model execution can be leveraged in the context of human-computer interaction (HCI). We claim that in this application domain it is beneficial to use heterogeneous models, combining different models of computation for different components of the system. We report on a case study that we have carried out to develop an executable model of a gesture-based application for manipulating 3D objects, using the Kinect sensor as the input device and the OGRE graphics engine as the output device for real-time rendering. The interaction part of this application is fully specified as an executable heterogeneous model with the ModHel'X modeling environment. We exploit the semantic adaptation between different models of computation to implement a layered application using the most appropriate model of computation for each layer.
To cite this version: Romuald Deshayes, Tom Mens, Philippe Palanque. PetriNect: A tool for executable modeling of gestural interaction. Symposium on Visual Languages and Human-Centric Computing (VL/HCC 2013), Sep 2013, San Jose, CA, United States. pp. 197-198. hal-01178577

Abstract: In this showpiece we demonstrate PetriNect, an instance of a generic layered framework that we have developed for the specification and use of executable models of gestural interaction with virtual objects. The framework is built on top of Petshop and uses ICO models, a variant of high-level Petri nets. PetriNect uses the Kinect as an input device, allowing the user to interact gesturally with virtual objects. We present two simple proof-of-concept prototype applications that have been developed for the purpose of this showpiece: a simple Pong game, and interaction with a virtual bookshelf.
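The Petri-net execution model underlying PetriNect can be illustrated in a few lines. This is only a minimal place/transition sketch with hypothetical place names (`hand_tracked`, `moving_right`, and so on), not the Petshop/ICO API; real ICO models are high-level Petri nets with typed tokens and considerably richer semantics:

```python
# Illustrative sketch: a minimal place/transition Petri net in the spirit of
# the ICO models PetriNect builds on. A transition fires when every input
# place holds a token, consuming those tokens and producing output tokens.

class PetriNet:
    def __init__(self):
        self.marking = {}  # place name -> token count

    def add_tokens(self, place, n=1):
        self.marking[place] = self.marking.get(place, 0) + n

    def enabled(self, inputs):
        # A transition is enabled when every input place holds a token.
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, inputs, outputs):
        # Consume one token per input place, produce one per output place.
        if not self.enabled(inputs):
            return False
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.add_tokens(p)
        return True

# A hypothetical "swipe right" gesture: a tracked hand moving right
# fires a transition that dispatches an interaction event.
net = PetriNet()
net.add_tokens("hand_tracked")
net.add_tokens("moving_right")
fired = net.fire(["hand_tracked", "moving_right"], ["swipe_right_event"])
```

In an interactive setting, a sensor layer would deposit tokens into input places as tracking data arrives, and fired transitions would drive the application layer, which mirrors the layered architecture the framework describes.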
This paper presents GISMO, an extensible domain-specific modelling language for prototyping executable models of gestural interaction. Relying on an underlying customisable framework, domain-specific models can specify, simulate and execute how users interact with a software application through different interaction controllers and gesture types (e.g., specific hand movements or other body gestures). Model transformation technology is used to define the domain-specific operational semantics of GISMO, as well as to verify domain-specific properties. ICO models are automatically generated from GISMO models and are executed by an underlying framework that communicates with the target software application. We illustrate the use of GISMO through a running example that models the gestural interaction of a graphical application using dynamic hand gestures to control an animated 3D character. We report on the usability of GISMO based on an evaluation with 12 participants.