Electronic music performances often lack visual feedback, and live visuals have been used to fill this gap. However, few studies analyze how effectively live visuals convey such feedback. In this paper, we study the contribution of live visuals to audience understanding of electronic music performances, from the perspective of the audience. We review related work in audience studies in the performing arts, electronic music, and audiovisuals. We then organized two live events comprising 10 audiovisual performances and conducted an audience study at these events using questionnaires. Results point to better audience understanding in two of the four design patterns we used as an analytical framework. In our discussion, we suggest best practices for designing audiovisual performance systems that can improve audience understanding.
In this paper, we propose to consider the sonic interactions that occur in a dance performance from an ecological perspective. In particular, we suggest using the conceptual models of artefact ecology and design space. As a case study, we present a work developed during a two-week artistic residency, a collaboration between a sound designer, a choreographer, and two dancers. During the residency, both an interactive sound artefact based on a motion capture system and a dance performance were developed. We present the ecology of this interactive sound artefact, with the objective of analysing how the multiple actors in the ecology relate to the interactive artefact.
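The abstract does not specify how the motion capture data drives the sound artefact, so the following is only a minimal sketch of what such a mapping might look like. All names (JointFrame, map_to_sound, the pitch/amplitude mapping) are assumptions introduced for illustration, not details from the paper.

```python
# Hypothetical sketch: mapping motion-capture data to sound parameters.
# None of these names come from the paper; they only illustrate how a
# dancer's movement might drive an interactive sound artefact.

from dataclasses import dataclass


@dataclass
class JointFrame:
    """One motion-capture frame for a single joint (metres, m/s)."""
    x: float
    y: float
    z: float
    speed: float


def map_to_sound(frame: JointFrame) -> dict:
    """Map joint height and speed to synthesis parameters.

    Vertical position -> pitch, speed -> amplitude: a deliberately
    simple one-to-one mapping, chosen only for illustration.
    """
    pitch_hz = 110.0 + 880.0 * max(0.0, min(frame.y / 2.0, 1.0))
    amplitude = max(0.0, min(frame.speed / 5.0, 1.0))
    return {"pitch_hz": pitch_hz, "amplitude": amplitude}


print(map_to_sound(JointFrame(x=0.1, y=1.4, z=0.3, speed=2.0)))
```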
This paper describes a video annotation tool based on a new, flexible model that provides several perspectives on the same video content. The model supports multiple views of the same video data, so that users with different requirements can each have the most appropriate interface. These views, called video-lenses, each highlight a specific aspect of the video content being annotated. Annotations are made in a timeline-based interface with multiple tracks, where each track corresponds to a given video-lens. Annotations are stored and exchanged in the MPEG-7 standard format. The annotation tool (VAnnotator) is being developed within Vizard, an ambitious project that aims to define a new paradigm for video navigation, annotation, editing, and retrieval. The Vizard project includes users from both the production/archiving area and the consumer electronics area, who help define and validate the annotation requirements and functionality.
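To make the track-per-lens structure concrete, here is a minimal sketch of the kind of data model the abstract describes: one timeline track per video-lens, each holding time-bounded annotations. The class and field names are assumptions for illustration, not VAnnotator's actual API, and the sketch omits the MPEG-7 serialization the tool uses for storage and exchange.

```python
# Hypothetical sketch of a multi-view video annotation model:
# one track (VideoLens) per perspective over the same video data.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Annotation:
    start_s: float   # start time in seconds
    end_s: float     # end time in seconds
    text: str


@dataclass
class VideoLens:
    """One perspective over the video, shown as one timeline track."""
    name: str        # e.g. "camera motion", "dialogue"
    annotations: List[Annotation] = field(default_factory=list)


@dataclass
class AnnotatedVideo:
    uri: str
    lenses: List[VideoLens] = field(default_factory=list)

    def lenses_at(self, t: float) -> List[str]:
        """Names of lenses that have an annotation covering time t."""
        return [lens.name for lens in self.lenses
                if any(a.start_s <= t <= a.end_s for a in lens.annotations)]


video = AnnotatedVideo("file:///demo.mp4",
                       [VideoLens("dialogue", [Annotation(0.0, 4.5, "intro")])])
print(video.lenses_at(2.0))  # -> ['dialogue']
```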
The combined use of sound and image has a rich history, from audiovisual artworks to research exploring the potential of data visualization and sonification. However, we lack standard tools and guidelines for audiovisual (AV) interaction design, particularly for live performance. We propose the AVUI (AudioVisual User Interface), where sound and image are used together cohesively in the interface, along with an enabling technology, the ofxAVUI toolkit. The AVUI guidelines and ofxAVUI were developed in a three-stage process, together with AV producers: 1) participatory design activities; 2) prototype development; 3) encapsulation of the prototype as a plug-in, evaluation, and roll-out. Best practices identified include: reconfigurable interfaces and mappings; object-oriented packaging of AV and UI; diverse sound visualization; and flexible media manipulation and management. The toolkit, and a mobile app developed using it, have been released as open source. The guidelines and toolkit demonstrate the potential of AVUI and offer designers a convenient framework for AV interaction design.
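One of the best practices the abstract lists is "object-oriented packaging of AV and UI". The sketch below illustrates that idea only: a single object bundling a UI control, the audio parameter it drives, and the visualization of the resulting sound. The actual ofxAVUI toolkit is an openFrameworks (C++) addon; the Python code and all names here (AVZone, on_drag, on_audio) are hypothetical and do not reflect its real API.

```python
# Hypothetical sketch of the AVUI idea: one object couples a UI
# control, a sound parameter, and the visualization of that sound.
# Illustrative only; not the actual ofxAVUI (C++) API.

class AVZone:
    """A UI zone packaging control, audio, and visuals together."""

    def __init__(self, name: str, value: float = 0.5):
        self.name = name
        self.value = value   # normalized parameter, 0..1
        self.waveform = []   # most recent audio samples to draw

    def on_drag(self, dy: float):
        """UI event: dragging the zone changes the audio parameter."""
        self.value = max(0.0, min(self.value - dy, 1.0))

    def on_audio(self, samples):
        """Audio callback: keep samples so the UI can visualize them."""
        self.waveform = list(samples)

    def draw(self):
        """Render the control state and the sound it produces together."""
        bar = "#" * int(self.value * 20)
        print(f"{self.name}: [{bar:<20}] {len(self.waveform)} samples")


zone = AVZone("filter cutoff")
zone.on_drag(-0.1)              # drag up -> parameter rises to 0.6
zone.on_audio([0.0, 0.2, -0.1])
zone.draw()
```

Because the control surface and the sound visualization live in the same object, zones can be reconfigured, remapped, and rearranged as units, which is the reconfigurability the abstract highlights.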
This paper describes the principles and a model for adding content and structure to existing video materials, based on annotation. Annotations in printed media promote active reading and, in a similar way, annotations in video promote active watching. The principles and model are illustrated by a prototype system for video annotation and browsing, named AntV (Annotations in Video).
This paper presents ARCAA (Actors, Role, Context, Activity, Artefacts), a framework that helps designers understand the artefact ecology of a music performance scenario and, in particular, frame the roles of the different actors. ARCAA combines two areas of HCI: the concept of artefact ecology and design frameworks for digital musical instruments. The framework borrows three categories from MINUET, an established design framework, and rethinks them from an ecological perspective. In ARCAA, these three categories are used as three lenses that connect each human actor to her artefact ecology. Finally, the framework allows comparing how the various artefacts create connections among the different people involved. The second part of the paper describes a case study showing a practical adoption of the framework.
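To show how ARCAA's structure could be encoded, here is a minimal sketch linking each actor to an artefact through the three lenses (role, context, activity). The field names follow the framework's own terms, but the code, the helper function, and the example data are assumptions introduced for illustration, not material from the paper.

```python
# Hypothetical encoding of ARCAA: each human actor connects to an
# artefact through three lenses (role, context, activity).

from dataclasses import dataclass
from typing import Dict, List, Set


@dataclass
class Link:
    actor: str      # e.g. "performer", "audience member"
    role: str       # lens 1: the actor's role
    context: str    # lens 2: where/when the interaction happens
    activity: str   # lens 3: what the actor does
    artefact: str   # the artefact the actor is connected to


def shared_artefacts(links: List[Link]) -> Dict[str, Set[str]]:
    """Group actors by artefact, to compare how artefacts create
    connections among the different people involved."""
    by_artefact: Dict[str, Set[str]] = {}
    for link in links:
        by_artefact.setdefault(link.artefact, set()).add(link.actor)
    return by_artefact


links = [
    Link("performer", "musician", "on stage", "plays", "sensor glove"),
    Link("audience", "listener", "in venue", "watches", "sensor glove"),
]
print(shared_artefacts(links))  # -> {'sensor glove': {'performer', 'audience'}}
```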