Modeling, analysis and synthesis of behaviour are the subject of major efforts in computing science, especially when it comes to technologies that make sense of human-human and human-machine interactions. This article outlines some of the most important issues that still need to be addressed to ensure substantial progress in the field, namely 1) development and adoption of virtuous data collection and sharing practices, 2) shift of the focus of interest from individuals to dyads and groups, 3) endowment of artificial agents with internal representations of users and context, 4) modeling of cognitive and semantic processes underlying social behaviour, and 5) identification of application domains and strategies for moving from the laboratory to real-world products.
This interactive demo presents a visionary multimodal interaction concept designed to support operators in future control centers. The applied multi-layered hardware and software architecture directly supports the operators in performing their lengthy monitoring and urgent alarm handling tasks. Operators are presented with visual information on three fully configurable levels of screen displays. Gesture interaction via skeleton and finger tracking acts as the main control interaction principle. In particular, we developed a special sensor-equipped chair as well as an audio interface that allows operators to speak and listen in isolation without any wearable device.
We present an innovative multi-modal interaction concept based on a human-centered design for control centers. The applied multi-layered hardware and software architecture directly supports the users in performing their lengthy monitoring and urgent alarm handling tasks. We combine visual cues, gestural interaction, audio information, and intelligent data processing into a single, universal interface. We further realized the presented concept in a prototypical implementation using state-of-the-art interaction technologies. Moreover, the paper critically reflects on the long-term applicability of the proposed interfaces and outlines immediate plans for their evaluation. Finally, we indicate several research challenges regarding the real-world application of the presented interaction concepts.
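As a rough illustration of the fusion idea described above, the sketch below shows how events from several modalities (gesture, speech, alarm feeds) could be routed through a single dispatcher toward display layers. All class, event, and modality names are illustrative assumptions, not the architecture of the system described here.

```python
# Minimal sketch: one interface layer that fuses events from several modalities
# and routes them to registered handlers. Names are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class InteractionEvent:
    modality: str    # e.g. "gesture", "speech", "alarm"
    action: str      # e.g. "acknowledge", "swipe_left"
    payload: dict

class MultimodalDispatcher:
    def __init__(self):
        self._handlers: Dict[str, List[Callable[[InteractionEvent], None]]] = {}

    def subscribe(self, action: str, handler: Callable[[InteractionEvent], None]):
        self._handlers.setdefault(action, []).append(handler)

    def publish(self, event: InteractionEvent):
        for handler in self._handlers.get(event.action, []):
            handler(event)

# Example: a gesture and a speech command trigger the same alarm-handling routine.
dispatcher = MultimodalDispatcher()
dispatcher.subscribe("acknowledge", lambda e: print(f"alarm acknowledged via {e.modality}"))
dispatcher.publish(InteractionEvent("gesture", "acknowledge", {}))
dispatcher.publish(InteractionEvent("speech", "acknowledge", {}))
```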
Background
Recent studies underline the importance for mental health of cognitive reserve, which is supported by stress reduction, pleasurable experience and meditation. Mindfulness training has been successfully applied in dementia care and indicates a lasting positive effect on cognitive reserve, well-being and motivation. The research project OpenSense investigated the potential of VR-based intervention and assessment for dementia care in a proof-of-concept study. The VR-based intervention was developed to foster mindfulness and sensory activation.
Method
The VR-based intervention was applied to persons with dementia (PwD) of the Alzheimer's type (AD; n=12, age M=85.0 years, MMSE M=21.5) and healthy controls (n=12, age M=75.1 years, MMSE M=30) using 30 minutes of panoramic video-based multi-sensory experiences presenting stimuli that promote relaxation (body-scanning, beach, forest) and activation (bakery, orchestra). EEG-based alpha-band signals (8-12 Hz), associated with relaxation and inhibitory control, were recorded before, during and after the intervention, and eye tracking was applied during the intervention.
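As a minimal illustration of the kind of signal analysis described in the method, the sketch below estimates alpha-band (8-12 Hz) power from a single EEG channel with Welch's method; the sampling rate and synthetic signal are assumptions, not the study's actual recording parameters.

```python
# Minimal sketch: mean alpha-band (8-12 Hz) power of one EEG channel via Welch's
# method. Sampling rate and test signal are illustrative assumptions.
import numpy as np
from scipy.signal import welch

def alpha_band_power(eeg, fs=250.0, band=(8.0, 12.0)):
    """Return mean spectral power in the alpha band for a 1-D EEG channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))  # 2-second windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Example with synthetic data: 60 s of noise plus a 10 Hz (alpha) component.
fs = 250.0
t = np.arange(0, 60, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(alpha_band_power(signal, fs))
```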
Result
Pre-post EEG analysis showed significant increases in alpha power and brain connectivity for PwD with AD and controls (post > pre, p<.05). EEG baseline alpha power showed higher values for healthy controls than for PwD (AD). Eye movement analysis demonstrated significant differences between PwD (AD) and controls: eye blink rate was higher for AD than for controls (p=.004) during observation of a 3-minute video, and a significant correlation (Rho=.607, p=.003; 3-minute video) was observed between eye movements and the Freiburg Mindfulness Inventory score.
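The sketch below illustrates, with placeholder data, the kind of statistics reported above: a paired pre-post comparison of alpha power and a rank correlation between an eye-movement measure and a questionnaire score. The specific tests shown (Wilcoxon signed-rank, Spearman) and all values are assumptions for illustration only, not the study's analysis pipeline or results.

```python
# Illustrative statistics only: placeholder data, not the study's measurements.
import numpy as np
from scipy.stats import wilcoxon, spearmanr

rng = np.random.default_rng(0)
alpha_pre = rng.normal(1.0, 0.2, size=12)              # pre-intervention alpha power
alpha_post = alpha_pre + rng.normal(0.15, 0.1, 12)     # post-intervention alpha power

stat, p_prepost = wilcoxon(alpha_pre, alpha_post)      # paired, non-parametric test
print(f"pre-post alpha power: W={stat:.2f}, p={p_prepost:.3f}")

eye_metric = rng.normal(10, 2, size=22)                # e.g. blink rate per participant
fmi_score = 0.6 * eye_metric + rng.normal(0, 1, 22)    # questionnaire score (placeholder)
rho, p_corr = spearmanr(eye_metric, fmi_score)
print(f"Spearman rho={rho:.3f}, p={p_corr:.3f}")
```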
Conclusion
The potential of VR-based intervention based on mindfulness and sensory activation is very promising: the study demonstrated significant increases in EEG alpha power and brain connectivity, measures in which PwD usually suffer decline, and the gaze data acquired during the intervention indicate potential for non-invasive assessment to support decision making. OpenSense anticipates numerous opportunities for novel VR-based care services that empower cognitive reserve, induce sensory activation, raise awareness and motivation for self-regulation, and serve as a pervasive assessment tool.
In this paper we present an algorithm for segmenting musical audio data. Our aim is to identify solo instrument phrases in polyphonic music. We extract relevant features from the audio to be input into our algorithm. A large corpus of audio descriptors was tested for its ability to discriminate between solo and non-solo sections, which resulted in a subset of the five best features. We derived a two-stage algorithm that first creates a set of boundary candidates from local changes of these features and then classifies fixed-length segments according to the desired target classes. The output of the two stages is combined to derive the final segmentation and segment labels. Our system was trained and tested with excerpts from classical pieces and evaluated using full-length recordings, all taken from commercially available audio. We evaluated our algorithm using precision and recall measurements for the boundary estimation and introduced new evaluation metrics from image processing for the final segmentation. Along with a resulting accuracy of 77%, we demonstrate that the selected features are discriminative for this specific task and achieve reasonable results for the segmentation problem.
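The sketch below illustrates the two-stage idea in outline: boundary candidates proposed where frame-level features change sharply, fixed-length segments classified as solo or non-solo, and a combination step. The features, thresholds, and classifier are placeholders, not the authors' implementation.

```python
# Illustrative two-stage segmentation sketch with placeholder features and classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def boundary_candidates(features, threshold=2.0):
    """Propose boundaries where frame-level features change strongly between neighbours."""
    diffs = np.linalg.norm(np.diff(features, axis=0), axis=1)
    z = (diffs - diffs.mean()) / (diffs.std() + 1e-9)
    return np.where(z > threshold)[0] + 1          # candidate boundary frame indices

def classify_segments(clf, features, seg_len=50):
    """Average frame features over fixed-length segments and classify them (solo/non-solo)."""
    n_segs = len(features) // seg_len
    segs = features[: n_segs * seg_len].reshape(n_segs, seg_len, -1).mean(axis=1)
    return clf.predict(segs)

def combine(candidates, seg_labels, seg_len=50):
    """Keep only candidates that fall where the segment-level label changes."""
    kept = []
    for b in candidates:
        i = min(int(b) // seg_len, len(seg_labels) - 1)
        if i > 0 and seg_labels[i] != seg_labels[i - 1]:
            kept.append(int(b))
    return kept

# Toy usage with random "features"; real input would be frame-level audio descriptors.
rng = np.random.default_rng(0)
frames = rng.normal(size=(500, 5))                             # 500 frames, 5 features
train_X, train_y = rng.normal(size=(40, 5)), rng.integers(0, 2, 40)
clf = RandomForestClassifier(n_estimators=50).fit(train_X, train_y)
labels = classify_segments(clf, frames)
print(combine(boundary_candidates(frames), labels))
```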
The amount of digital music has grown at an unprecedented rate in recent years and requires the development of effective methods for search and retrieval. In particular, content-based preference elicitation for music recommendation is a challenging problem that is effectively addressed in this paper. We present a system which automatically generates recommendations and visualizes a user's musical preferences, given her/his accounts on popular online music services. Using these services, the system retrieves a set of tracks preferred by a user, and further computes a semantic description of musical preferences based on raw audio information. For the audio analysis we used the capabilities of the Canoris API. The system then generates music recommendations using a semantic music similarity measure, together with a visualization of the user's preferences that maps semantic descriptors to visual elements.
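As a minimal illustration of recommendation by semantic similarity, the sketch below represents each track as a vector of semantic descriptors and ranks catalogue tracks by cosine similarity to a user's preference profile. The descriptor names and track catalogue are hypothetical, and the code does not reproduce the Canoris API or the authors' similarity measure.

```python
# Illustrative semantic-similarity recommender with a hypothetical descriptor catalogue.
import numpy as np

catalogue = {
    "track_a": np.array([0.9, 0.1, 0.3]),   # hypothetical [acoustic, electronic, vocal]
    "track_b": np.array([0.2, 0.8, 0.5]),
    "track_c": np.array([0.85, 0.2, 0.4]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def recommend(user_tracks, catalogue, top_n=2):
    """Rank catalogue tracks by similarity to the mean descriptor vector of the user's tracks."""
    profile = np.mean([catalogue[t] for t in user_tracks], axis=0)
    scored = [(name, cosine(profile, vec))
              for name, vec in catalogue.items() if name not in user_tracks]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_n]

print(recommend(["track_a"], catalogue))
```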