Abstract: We present and evaluate the EyeHarp, a new gaze-controlled Digital Musical Instrument, which aims to enable people with severe motor disabilities to learn, perform, and compose music using only their gaze as a control mechanism. It consists of (1) a step-sequencer layer, which serves for constructing chords/arpeggios, and (2) a melody layer, for playing melodies and changing the chords/arpeggios. We have conducted a pilot evaluation of the EyeHarp involving 39 participants with no disabilities from both a perfor…
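To make the two-layer design concrete, the following minimal Python sketch shows how a step-sequencer grid can cycle through the notes of the currently selected chord while a separate melody layer switches chords. This is purely illustrative; the class, chord names, and MIDI note numbers are placeholders, not taken from the EyeHarp implementation.

```python
# Illustrative two-layer sketch: a step sequencer loops over an arpeggio
# pattern, and a melody layer swaps the active chord. All names and note
# numbers are hypothetical, not the EyeHarp source.

CHORDS = {
    "C": [60, 64, 67],   # C major triad (MIDI note numbers)
    "Am": [57, 60, 64],  # A minor triad
}

class StepSequencer:
    def __init__(self, n_steps=8):
        self.n_steps = n_steps
        self.active_chord = "C"

    def set_chord(self, name):
        """Called by the melody layer when the performer selects a new chord."""
        self.active_chord = name

    def note_for_step(self, step):
        """Cycle through the notes of the active chord, one note per step."""
        notes = CHORDS[self.active_chord]
        return notes[step % len(notes)]

seq = StepSequencer()
print([seq.note_for_step(s) for s in range(8)])   # arpeggio over C major
seq.set_chord("Am")
print([seq.note_for_step(s) for s in range(8)])   # arpeggio over A minor
```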
“…In Vamvakousis et al. [13], Amyotrophic Lateral Sclerosis (ALS) patients expressed their emotions through music in real time, using (1) and (2) to detect valence and arousal, respectively.…”
Section: Frontal EEG Asymmetry (mentioning)
confidence: 99%
“…There has been considerable research investigating the neural correlates of emotion in humans [6], [9], [12], [13]. Frontal activity, characterized by decreased power in the alpha band, has consistently been found to be associated with emotional states [11].…”
Section: Frontal EEG Asymmetry (mentioning)
confidence: 99%
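Equations (1) and (2) themselves are not reproduced in these snippets. Purely as an illustration of the kind of feature involved, a commonly used frontal alpha asymmetry index over the homologous frontal electrodes F3 (left) and F4 (right) is shown below; this is a sketch of the standard convention, not necessarily the exact definition used in [13].

```latex
% Illustrative frontal alpha asymmetry index (standard convention):
% higher values are commonly associated with more positive valence.
\mathrm{FAA} = \ln\!\left(P_{\alpha}(\mathrm{F4})\right) - \ln\!\left(P_{\alpha}(\mathrm{F3})\right)
```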
“…Besides EEG applications, it has been widely used for numerous applications in engineering, science, and mathematics. In this study, each EEG signal is decomposed using the PSD approach into four distinct frequency ranges: theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz), and gamma (30-40 Hz). The PSDs were computed using the Python Signal Processing Toolbox (mne), and the average power over a specific frequency range was calculated to construct a feature using the avgpower function in the toolbox.…”
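As a concrete illustration of this feature-extraction step, the sketch below computes average PSD power per band for a single EEG channel. It uses scipy's Welch estimator rather than the mne/avgpower calls mentioned in the snippet, and the sampling rate and window length are assumptions; the band-averaging idea is the same.

```python
# Minimal sketch of PSD band-power features for one EEG channel.
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 40)}

def band_powers(signal, sfreq):
    """Return average PSD power in each frequency band for a 1-D EEG signal."""
    freqs, psd = welch(signal, fs=sfreq, nperseg=min(len(signal), 2 * int(sfreq)))
    features = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        features[name] = psd[mask].mean()
    return features

# Example with a synthetic 10 Hz (alpha-band) oscillation sampled at 128 Hz.
sfreq = 128.0
t = np.arange(0, 4, 1 / sfreq)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
print(band_powers(eeg, sfreq))
```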
Abstract: Estimation of human emotions from Electroencephalogram (EEG) signals plays a vital role in developing robust Brain-Computer Interface (BCI) systems. In our research, we used a Deep Neural Network (DNN) to address EEG-based emotion recognition. This was motivated by recent advances in accuracy and efficiency from applying deep learning techniques in pattern recognition and classification applications. We adapted a DNN to identify human emotions from a given EEG signal (DEAP dataset) using power spectral density (PSD) and frontal asymmetry features. The proposed approach is compared to state-of-the-art emotion detection systems on the same dataset. Results show how EEG-based emotion recognition can greatly benefit from using DNNs, especially when a large amount of training data is available.
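The abstract does not specify the network architecture. As a rough, illustrative sketch only (not the authors' model), a small feed-forward classifier over precomputed PSD and asymmetry features could be set up with scikit-learn as follows; the feature dimensionality, layer sizes, and labels are placeholders rather than the DEAP setup.

```python
# Illustrative feed-forward classifier on precomputed EEG features
# (PSD band powers plus frontal-asymmetry values). Sizes and labels
# are placeholders, not the actual DEAP configuration.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1280, 160))    # placeholder feature matrix (trials x features)
y = rng.integers(0, 2, size=1280)   # placeholder high/low valence labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```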
“…The term "adaptive music" is also used by Graham-Knight and Tzanetakis [30], who define it as the use of digital technologies to allow a person who cannot otherwise play a traditional musical instrument to play music unaided. Moreover, the word "adaptive" is used in a study by Vamvakousis and Ramirez [42], who specifically refer to the notion of "Adaptive Digital Musical Instruments". Among these terms, you also encounter "inclusive music" [7], defined by Samuels as the use of music interfaces aimed at overcoming disabling barriers to music-making faced by people with disabilities [21].…”
Current advancements in music technology enable the creation of customized Digital Musical Instruments (DMIs). This paper presents a systematic review of Accessible Digital Musical Instruments (ADMIs) in inclusive music practice. The history of research concerned with facilitating inclusion in music-making is outlined, and the current state of developments and trends in the field is discussed. Although the use of music technology in music therapy contexts has attracted more attention in recent years, the topic has been relatively unexplored in the Computer Music literature. This review investigates a total of 113 publications focusing on ADMIs. Based on the 83 instruments in this dataset, ten control interface types were identified: tangible controllers, touchless controllers, Brain–Computer Music Interfaces (BCMIs), adapted instruments, wearable controllers or prosthetic devices, mouth-operated controllers, audio controllers, gaze controllers, touchscreen controllers and mouse-controlled interfaces. The majority of the ADMIs were tangible or physical controllers. Although the haptic modality could potentially play an important role in musical interaction for many user groups, relatively few of the ADMIs (14.5%) incorporated vibrotactile feedback. Aspects judged to be important for successful ADMI design were instrument adaptability and customization, user participation, iterative prototyping, and interdisciplinary development teams.
“…There are examples of common interests between Music Technology and AD. Vamvakousis and Ramirez (2012) showed that the EyeHarp, an eye-tracking musical interface for controlling melodic, harmonic and expressive aspects of musical instruments in real time, has expressive potential similar to that of a traditional musical instrument. In addition, many AD papers related to music can be found within the scope of the International Conferences on Auditory Display (ICADs).…”
This paper presents the relationship between Auditory Display (AD) and the domains of music and acoustics. First, some basic notions of the Auditory Display area are briefly outlined. Then, research trends and system solutions within the fields of music technology, music information retrieval, music recommendation, and acoustics that fall within the scope of AD are discussed. Finally, an example of an AD solution based on gaze tracking that may facilitate the music annotation process is shown. The paper concludes with a few remarks about directions for further research in the domains discussed.