Deep learning, one of the fastest-growing branches of artificial intelligence, has become one of the most active research and development areas of recent years, especially since 2012, when a neural network surpassed the most advanced image classification techniques of the time. This rapid development has also reached the world of the arts, as recent advances in generative networks have made possible the artificial creation of high-quality content such as images, movies, or music. We believe that these novel generative models pose a serious challenge to our current understanding of computational creativity. If a machine can now create music that an expert cannot distinguish from music composed by a human, create novel musical entities that were not known at training time, or exhibit conceptual leaps, does this mean that the machine is creative? We believe that the emergence of these generative models clearly signals that much more research needs to be done in this area. We would like to contribute to this debate with two case studies of our own: TimbreNet, a variational autoencoder trained to generate audio-based musical chords, and StyleGAN Pianorolls, a generative adversarial network capable of creating short musical excerpts, despite the fact that it was trained on images rather than musical data. We discuss and assess these generative models in terms of their creativity, show that they are in practice capable of learning musical concepts that are not obvious from the training data, and hypothesize that, based on our current understanding of creativity in robots and machines, these deep models can in fact be considered creative.
Abstract • This article addresses some of the most relevant challenges and characteristics of musical creation in the post-digital era, from a composer's particular point of view. Special emphasis is given to three areas within musical creation: electroacoustic music, algorithmic composition, and new interfaces for musical expression. The article presents both theoretical discussions and examples drawn from the author's own artistic production. Keywords: artistic creation, post-digital era, electroacoustic music, algorithmic composition, new interfaces for musical expression.
This article describes a synthesis technique based on the sonification of the dynamic behavior of a quantum particle enclosed in an infinite square well. More specifically, we sonify the momentum distribution of a one-dimensional Gaussian bouncing wave-packet model. We have chosen this particular case because of its relative simplicity and interesting dynamic behavior, which make it suitable for a novel sonification mapping that can be applied to standard synthesis techniques, resulting in the generation of appealing sounds. In addition, this sonification may provide useful insight into the behavior of the quantum particle. In particular, the model exhibits quantum revivals, minimizes uncertainty, and shows similarities to the case of a classical bouncing ball. The proposed model has been implemented in real time in both the Max/MSP and Pure Data environments. The algorithm is based on additive synthesis, where each oscillator corresponds to one of the eigenfunctions that characterize the state evolution of the wave packet. We also provide an analysis of the sounds produced by the model from both a physical and a perceptual point of view.
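The mapping described above can be illustrated with a minimal offline sketch. The abstract's actual implementation runs in real time in Max/MSP and Pure Data; the Python fragment below is only an assumed reconstruction of the core idea: project a Gaussian wave packet onto the infinite-square-well eigenfunctions φₙ(x) = √(2/L) sin(nπx/L), then drive one oscillator per eigenstate, with amplitude |cₙ| and a frequency proportional to the energy Eₙ ∝ n² (the scaling constants `f_base`, `sigma`, and `k0` are illustrative choices, not values from the paper).

```python
import numpy as np

def gaussian_packet_coeffs(n_max, L=1.0, x0=0.5, sigma=0.05, k0=50.0):
    """Expansion coefficients c_n = <phi_n | psi0> of a Gaussian wave
    packet in the infinite-square-well eigenbasis."""
    x = np.linspace(0.0, L, 2048)
    dx = x[1] - x[0]
    # Gaussian packet centered at x0 with mean momentum ~ k0
    psi0 = np.exp(-(x - x0) ** 2 / (4 * sigma ** 2)) * np.exp(1j * k0 * x)
    psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)  # normalize
    n = np.arange(1, n_max + 1)
    phi = np.sqrt(2.0 / L) * np.sin(np.outer(n, np.pi * x / L))
    return (phi @ psi0) * dx  # one complex coefficient per eigenstate

def sonify(coeffs, f_base=55.0, duration=3.0, sr=44100):
    """Additive synthesis: one sine oscillator per eigenstate, amplitude
    |c_n|, frequency f_base * n^2 (mirroring E_n ~ n^2)."""
    t = np.linspace(0.0, duration, int(sr * duration), endpoint=False)
    out = np.zeros_like(t)
    for i, c in enumerate(coeffs, start=1):
        f = f_base * i ** 2
        if f > sr / 2:  # drop partials above the Nyquist frequency
            break
        out += np.abs(c) * np.sin(2 * np.pi * f * t + np.angle(c))
    return out / np.max(np.abs(out))  # normalize to [-1, 1]

audio = sonify(gaussian_packet_coeffs(n_max=24))
```

Because the eigenfrequencies grow as n², the resulting spectrum is strongly inharmonic, which is part of what gives this sonification its characteristic sound; a real-time version would additionally rotate each oscillator's phase according to the state's time evolution.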
While it may be impossible to determine precisely what defines appealing and interesting music, it can be argued that, to have these qualities, sounds should contain sufficient elements of order and disorder, coherence and decoherence, or structure and variation throughout a musical piece. These can appear in different ways and at various levels: in the interplay between expected and surprising note sequences found in engaging melodies, in the superposition of perfectly and imperfectly coinciding harmonics required for a rich chord, or in the mixture of regular and syncopated rhythms that build a driving percussion layer, to name a few. Another important component is the inclusion of multiple elements that can either combine into a coherent musical structure or disband into independently acting parts. Music is often built entirely on fragile harmonic or rhythmic structures that fluctuate between states of order and disorder, constantly consolidating and collapsing. Composers explicitly explore these extremes, for example, in the interplay between instruments in an orchestra, between melodic lines in counterpoint [1], between collective and solo parts in a jazz performance [2], or in the dense chromatic melodies of some contemporary music [3].