We have applied interactive machine learning (IML) to the creation and customisation of gesturally controlled musical interfaces in six workshops with people with learning and physical disabilities. Our observations and discussions with participants demonstrate the utility of IML as a tool for participatory design of accessible interfaces. This work has also led to a better understanding of challenges in end-user training of machine learning models, of how people develop personalised interaction strategies with different types of pre-trained interfaces, and of how the properties of control spaces and input devices influence people's customisation strategies and engagement with instruments. It has also uncovered similarities between the musical goals and practices of disabled people and those of expert musicians.
How to motivate and support behaviour change through design is of increasing interest to the CHI community. In this paper, we present our experiences of building systems that motivate people to engage in upper limb rehabilitation exercise after stroke. We report on participatory design work with four stroke survivors to develop a holistic understanding of their motivation and rehabilitation needs, and to construct and deploy engaging interactive systems that satisfy them. We reflect on the limits of motivational theories in trying to design for the lived experience of motivation and highlight lessons learnt around: helping people articulate what motivates them; balancing work, duty, and fun; supporting motivation over time; and understanding the wider social context. From these we identify design guidelines that can inform a toolkit approach to support both scalability and personalisability.
We introduce a new framework for manipulating and interacting with deep generative models that we call network bending. We present a comprehensive set of deterministic transformations that can be inserted as distinct layers into the computational graph of a trained generative neural network and applied during inference. In addition, we present a novel algorithm for analysing a deep generative model and clustering its features based on their spatial activation maps. This allows features to be grouped by spatial similarity in an unsupervised fashion, enabling meaningful manipulation of sets of features that correspond to semantically significant aspects of the generated images. We outline this framework, demonstrating our results on state-of-the-art deep generative models trained on several image datasets. We show how it allows for the direct manipulation of semantically meaningful aspects of the generative process as well as a broad range of expressive outcomes.
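The core idea of network bending can be sketched in a few lines: a deterministic transformation is inserted between stages of a trained generator and applied at inference time. The sketch below is a minimal, hypothetical illustration; the two-stage "generator", its shapes, and the scaling transform are assumptions for demonstration, not the paper's released code.

```python
import numpy as np

# Hypothetical two-stage "generator": an early stage producing spatial
# feature maps and a late stage rendering them to an image.
def early_layers(z):
    rng = np.random.default_rng(0)
    w = rng.standard_normal((z.size, 8 * 4 * 4))
    return (z @ w).reshape(8, 4, 4)      # 8 feature maps, each 4x4

def late_layers(features):
    return features.mean(axis=0)          # crude "render" to a 4x4 image

# A deterministic network-bending transform inserted between the two
# stages at inference time: rescale a chosen subset of feature maps.
def bend_scale(features, channels, factor):
    bent = features.copy()
    bent[channels] *= factor
    return bent

def generate(z, bend=None):
    feats = early_layers(z)
    if bend is not None:
        feats = bend(feats)               # the inserted transformation layer
    return late_layers(feats)

z = np.ones(16)
plain = generate(z)
bent = generate(z, bend=lambda f: bend_scale(f, channels=[0, 1], factor=0.0))
```

Because the transform acts on intermediate feature maps rather than pixels, zeroing or rescaling a cluster of spatially similar channels changes a coherent aspect of the output rather than adding pixel-level noise.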
Machine learning offers great potential to developers and end users in the creative industries. For example, it can support new sensor-based interactions, procedural content generation and end-user product customisation. However, designing machine learning toolkits for adoption by creative developers is still a nascent effort. This work focuses on the application of user-centred design with creative end-user developers for informing the design of an interactive machine learning toolkit. We introduce a framework for user-centred design actions that we developed within the context of a European Union innovation project, RAPID-MIX. We illustrate the application of the framework with two actions for lightweight formative evaluation of our toolkit: the JUCE Machine Learning Hackathon and the RAPID-MIX API workshop at eNTERFACE'17. We describe how we used these actions to uncover conceptual and technical limitations. We also discuss how these actions provided us with a better understanding of users, helped us to refine the scope of the design space, and informed improvements to the toolkit. We conclude with a reflection on the knowledge we obtained from applying user-centred design to creative technology, in the context of an innovation project in the creative industries.
'Blade Runner - Autoencoded' is a film made by training an autoencoder - a type of generative neural network - to recreate frames from the film Blade Runner. The autoencoder is made to reinterpret every individual frame, reconstructing it based on its memory of the film. The result is a hazy, dreamlike version of the original film. The project explores the aesthetic qualities of the disembodied gaze of the neural network. The autoencoder is also capable of representing images from films it has not seen based on what it has learned from watching Blade Runner.
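The mechanism behind the project is the standard autoencoder training loop: compress each frame through a narrow bottleneck and learn to reconstruct it, so that reconstructions reflect what the network has "memorised". Below is a minimal sketch using a tiny linear autoencoder on random arrays standing in for film frames; the architecture, sizes, and learning rate are illustrative assumptions, not the model used for the film.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.random((64, 64))                  # 64 fake "frames", 64 pixels each

d, k = 64, 8                              # pixel dim, bottleneck dim
enc = rng.standard_normal((d, k)) * 0.1   # encoder weights
dec = rng.standard_normal((k, d)) * 0.1   # decoder weights

def loss(X, enc, dec):
    # mean squared reconstruction error over all frames and pixels
    return ((X @ enc @ dec - X) ** 2).mean()

before = loss(X, enc, dec)
lr = 0.01
for _ in range(300):
    H = X @ enc                           # bottleneck codes for each frame
    E = H @ dec - X                       # reconstruction error
    g_dec = H.T @ E / len(X)              # gradient w.r.t. decoder
    g_enc = X.T @ (E @ dec.T) / len(X)    # gradient w.r.t. encoder
    dec -= lr * g_dec
    enc -= lr * g_enc
after = loss(X, enc, dec)
```

Because every frame must pass through the k-dimensional bottleneck, the reconstruction can only ever be an approximation shaped by the training data, which is precisely the source of the hazy, dreamlike quality the abstract describes.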
Introduction: Reconstructing videos based on prior visual information has some scientific and artistic
Figure 1: Key steps in the algorithm: we extract a focal stack from the light field and create a depth map and all-in-focus image; given a user-defined mask (centre), we first perform image completion on the depth map and then on the all-in-focus image; finally, we propagate the synthesised image segment through the focal stack, which can then be used to re-sample the light field.
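One step in the pipeline above, deriving a depth map and all-in-focus image from a focal stack, can be sketched by picking, for each pixel, the stack slice that is sharpest there. The local-gradient focus measure below is an illustrative choice and the synthetic stack is a stand-in, not the paper's data or method.

```python
import numpy as np

def depth_and_all_in_focus(stack):
    # stack: (n_slices, H, W) focal stack, one refocused image per slice
    gy, gx = np.gradient(stack, axis=(1, 2))
    sharp = np.abs(gx) + np.abs(gy)       # per-pixel focus measure
    depth = sharp.argmax(axis=0)          # index of the sharpest slice
    aif = np.take_along_axis(stack, depth[None], axis=0)[0]
    return depth, aif

# Toy stack: slice 0 is a ramp (in focus: strong gradients),
# slice 1 is constant (out of focus: no gradients).
ramp = np.tile(np.arange(6, dtype=float), (6, 1))
stack = np.stack([ramp, np.zeros((6, 6))])
depth, aif = depth_and_all_in_focus(stack)
```

The resulting per-pixel slice index serves as a coarse depth map, and gathering each pixel from its sharpest slice yields the all-in-focus image that the mask-guided completion steps then operate on.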