Abstract. The description of a gesture requires temporal analysis of the values generated by input sensors, which does not fit well with the observer pattern traditionally used by frameworks to handle user input. The current solution is to embed specific gesture-based interactions, such as pinch-to-zoom, into frameworks that notify the application only once a whole gesture has been detected. This approach lacks flexibility unless the programmer performs an explicit temporal analysis of the raw sensor data. This paper proposes a compositional, declarative meta-model for gesture definition based on Petri Nets. Basic traits are used as building blocks for defining gestures; each trait notifies a change of a feature value. A complex gesture is defined by composing sub-gestures with a set of operators. The user interface behaviour can be associated with the recognition of the whole gesture or with any of its sub-components, addressing the problem of the granularity of notification events. The meta-model can be instantiated for different gesture recognition supports, and its definition has been validated through a proof-of-concept library. Sample applications have been developed supporting multitouch gestures on iOS and full-body gestures with Microsoft Kinect.
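To make the compositional idea concrete, the following is a minimal sketch of how basic traits and a sequence operator could be combined, with behaviour attached either to the whole gesture or to a sub-component. All names here (Trait, Sequence, on_complete, feed) are hypothetical illustrations of the meta-model's concepts, not the actual GestIT API.

    # Minimal sketch: compositional gesture definition with handlers at any node.
    class Expr:
        """Base class for gesture expressions; handlers attach to any node."""
        def __init__(self):
            self.handlers = []

        def on_complete(self, handler):
            self.handlers.append(handler)
            return self

        def _fire(self, event):
            for h in self.handlers:
                h(event)

    class Trait(Expr):
        """Basic building block: recognises a single feature change."""
        def __init__(self, name):
            super().__init__()
            self.name = name

        def feed(self, event):
            if event.get("feature") == self.name:
                self._fire(event)
                return True
            return False

    class Sequence(Expr):
        """Composition operator: sub-gestures must complete one after the other."""
        def __init__(self, *parts):
            super().__init__()
            self.parts = parts
            self.index = 0

        def feed(self, event):
            if self.parts[self.index].feed(event):
                self.index += 1
                if self.index == len(self.parts):
                    self.index = 0
                    self._fire(event)
                    return True
            return False

    # A "tap" composed from a touch-start and a touch-end trait; behaviour is
    # bound both to a sub-component and to the whole gesture.
    start, end = Trait("touch_start"), Trait("touch_end")
    tap = Sequence(start, end)
    start.on_complete(lambda e: print("finger down"))
    tap.on_complete(lambda e: print("tap recognised"))

    for ev in [{"feature": "touch_start"}, {"feature": "touch_end"}]:
        tap.feed(ev)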
Abstract. In this paper, we propose UbiCicero, a multi-device, location-aware museum guide able to opportunistically exploit large screens when users are nearby. Various types of games are included in addition to the museum and artwork descriptions. The mobile guide is equipped with an RFID reader, which detects nearby tagged artworks. By taking into account context-dependent information, including the current user position and behaviour history, as well as the type of device available, more personalised and relevant information is provided to the user, enabling a richer overall experience. We also present example applications of this solution and then discuss the results of the first empirical tests performed to evaluate the usefulness and usability of the enhanced multi-device guide.
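As an illustration of the kind of context-dependent selection described above, here is a minimal sketch of a decision function combining position, visit history, and device type. The function name and the rules are invented for illustration, not UbiCicero's actual code.

    # Minimal sketch: choose a presentation for an RFID-detected artwork.
    def pick_presentation(artwork_id, visited, device, large_screen_nearby):
        """Select content based on context: nearby screens, history, device."""
        if large_screen_nearby:
            return f"mirror rich media for {artwork_id} on the large screen"
        if artwork_id in visited:
            return f"offer a game about {artwork_id} instead of repeating it"
        detail = "full" if device == "tablet" else "short"
        return f"show {detail} description of {artwork_id} on the mobile guide"

    print(pick_presentation("mona_lisa", visited={"david"}, device="phone",
                            large_screen_nearby=False))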
Abstract. This paper presents a set of tools to support multimodal adaptive Web applications. The contributions include a novel solution for generating multimodal interactive applications, which can be executed in any browser-enabled device, and run-time support for obtaining multimodal adaptations at various granularity levels, which can be specified through a language for adaptation rules. The architecture is able to exploit model-based user interface descriptions and adaptation rules in order to achieve adaptive behaviour that can be triggered by dynamic changes in the context of use. We also report on an example application and a user test concerning adaptation rules that dynamically change the application's multimodality.
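The following is a minimal sketch of an event/condition/action adaptation rule of the kind such a rule language could express, applied at the granularity of a single interactor. The dictionary format and field names are assumptions made for illustration, not the paper's actual rule syntax.

    # Minimal sketch: a context-triggered adaptation rule and its application.
    rule = {
        "event": "context_change",                # what triggers evaluation
        "condition": lambda ctx: ctx["noise_db"] > 70,
        "action": {                               # adapt at interactor granularity
            "target": "confirmation_dialog",      # one element, not the whole page
            "set_modality": "graphical_only",     # drop vocal output in noisy rooms
        },
    }

    def apply_rules(rules, ctx, ui):
        """Apply every rule whose condition holds in the current context."""
        for r in rules:
            if r["condition"](ctx):
                ui[r["action"]["target"]] = r["action"]["set_modality"]

    ui = {"confirmation_dialog": "graphical_and_vocal"}
    apply_rules([rule], {"noise_db": 80}, ui)
    print(ui)  # {'confirmation_dialog': 'graphical_only'}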
Abstract. This paper presents a method and the associated authoring tool for supporting the development of interactive applications able to access multiple Web Services, even from different types of interactive devices. We show how model-based descriptions are useful for this purpose and describe the associated automatic support along with the underlying rules. The proposed environment is able to aid in the design of new interactive applications that access pre-existing Web Services, which may contain annotations supporting the user interface development. This is achieved through the use of task models as a starting point for the design and development of the corresponding implementations. We also provide an example to better illustrate the features of the approach, and report on two evaluations conducted to assess the support tool.
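To make the pipeline from annotated service to task model to user interface more tangible, here is a minimal sketch under assumed structures; the operation description, the mapping rule, and all field names are illustrative, not the paper's tool.

    # Minimal sketch: an annotated Web Service operation seeds a task model,
    # and each interaction task maps to a concrete UI element.
    operation = {
        "name": "getWeather",
        "inputs": [{"name": "city", "type": "string", "label": "City name"}],
    }

    # Assumed rule: each annotated input becomes a "provide value" interaction
    # task, which in turn maps to a text-entry widget in the final UI.
    tasks = [{"task": f"provide {p['name']}", "kind": "interaction"}
             for p in operation["inputs"]]
    widgets = [{"widget": "text_field", "label": p["label"]}
               for p in operation["inputs"]]

    print(tasks)    # [{'task': 'provide city', 'kind': 'interaction'}]
    print(widgets)  # [{'widget': 'text_field', 'label': 'City name'}]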
Abstract. In this paper, we report on the development of touchless interfaces for supporting long-lasting tasks, which require interleaving interaction with the system and attention to other activities. As an example, we considered a dish-cooking task, in which the user selects and browses information about different recipes while cooking, through gestural and vocal interaction. The application demonstrates the advantages offered by the GestIT library, which allows a declarative and compositional definition of reusable gestures.
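The interleaving described above can be illustrated with a minimal dispatch sketch: the interface reacts to individually recognised gesture and voice features, so the user can pause between events to tend the food and resume later. The event names and the loop below are assumptions for illustration, not the application's code.

    # Minimal sketch: recipe browsing driven by recognised gesture/voice events.
    recipes = ["carbonara", "risotto", "tiramisu"]
    current = 0

    def on_event(kind):
        """Dispatch one recognised feature to the corresponding UI behaviour."""
        global current
        if kind == "swipe_left":          # move to the next recipe
            current = (current + 1) % len(recipes)
        elif kind == "voice_read_aloud":  # hands busy: request speech output
            print(f"reading aloud: {recipes[current]}")
            return
        print(f"showing: {recipes[current]}")

    # The stream may pause arbitrarily between events while the user cooks.
    for ev in ["swipe_left", "voice_read_aloud", "swipe_left"]:
        on_event(ev)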