Anecdotal reports of superior estimation abilities in autistic individuals (e.g., Sacks, 1985) have never been confirmed empirically. We present here case studies of 2 children with autistic spectrum diagnoses and report remarkable abilities in estimation for several quantifiable dimensions. K.T. and G.T. were tested at 9 years of age for estimation of rank, numerosity, time, weight, length, surface, distance, and precise enumeration for small numbers. Their performances were compared to those of 6 age- and IQ-matched comparison children. K.T. demonstrated a superior level of performance in estimating rank (e.g., which set has larger numerosity?), but his performance in other tasks was average. G.T. displayed outstanding performance in estimating numerosity, time, weight, surface, length, and distance, with average performance in other tasks. These results show that certain autistic spectrum individuals may develop superior and highly specialized abilities in estimation. We discuss these findings in relation to the role of "veridical mapping" in the development of special ability (Mottron, Dawson, & Soulieres, 2009; Mottron, Dawson, Soulieres, Hubert, & Burack, 2006a). Veridical mapping is the detection of isomorphism within a code, between two codes, or between one code and isomorphic elements of the world. Within this framework, it is proposed that estimation abilities, like absolute pitch, rely on the ability to map a verbal code with a specific magnitude of a psychophysical dimension.
While audio data play an increasingly central role in computer-based music production, interaction with large sound collections in most available music creation and production environments is often still limited to scrolling long lists of file names. This paper describes a general framework for devising interactive applications based on the content-based visualization of sound collections. The proposed framework allows for a modular combination of different techniques for sound segmentation, analysis, and dimensionality reduction, using the reduced feature space for interactive applications. We analyze several prototypes presented in the literature, describe their limitations, and propose a more general framework that can be used flexibly to devise music creation interfaces. The proposed approach includes several novel contributions with respect to previously used pipelines, such as unsupervised feature learning, content-based sound icons, and control of the output space layout. We present an implementation of the framework in the SuperCollider computer music language, and three example prototypes demonstrating its use for data-driven music interfaces. Our results demonstrate the potential of unsupervised machine learning and visualization for creative applications in computer music.
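The pipeline stages described in this abstract (segmentation, per-segment analysis, dimensionality reduction to a navigable layout) can be sketched as follows. This is a minimal illustrative sketch in Python/NumPy, not the paper's SuperCollider implementation: fixed-size segmentation and raw magnitude spectra stand in for whatever segmentation and feature-learning techniques a real system would plug in, and PCA stands in for the dimensionality reduction stage.

```python
import numpy as np

def segment(signal, seg_len=1024):
    """Split a mono signal into non-overlapping fixed-size segments."""
    n = len(signal) // seg_len
    return signal[:n * seg_len].reshape(n, seg_len)

def spectral_features(segments):
    """One magnitude spectrum per segment as a high-dimensional feature vector."""
    return np.abs(np.fft.rfft(segments, axis=1))

def reduce_2d(features):
    """Project features to 2-D with PCA (SVD of the centered feature matrix)."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

# Toy input: noise standing in for a sound collection. Each segment
# becomes one point in the 2-D map, ready for interactive display.
rng = np.random.default_rng(0)
audio = rng.standard_normal(16 * 1024)
coords = reduce_2d(spectral_features(segment(audio)))
print(coords.shape)  # → (16, 2): one 2-D point per segment
```

Because each stage is a plain function over arrays, any stage can be swapped independently (e.g., onset-based segmentation, learned features, or a different projection), which is the modularity the framework argues for.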
In this paper we describe similarity graphs computed from time-frequency analysis as a guide for audio playback, with the aim of extending the content of fixed recordings in creative applications. We explain the creation of the graph from the distance between spectral frames, as well as several features computed from the graph, such as methods for onset detection, beat detection, and cluster analysis. Several playback algorithms can be devised based on conditional pruning of the graph using these methods. We describe examples for looping, granulation, and automatic montage.
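The core idea above, a graph whose edges link spectrally similar frames, pruned by a condition and then walked during playback, can be sketched as follows. This is an illustrative assumption-laden sketch, not the paper's code: it assumes non-overlapping magnitude-spectrum frames, Euclidean distance, and a simple threshold as the pruning condition, and the name `jump_playback` is hypothetical.

```python
import numpy as np

def spectral_frames(signal, frame_len=512):
    """Magnitude spectra of consecutive non-overlapping frames."""
    n = len(signal) // frame_len
    frames = signal[:n * frame_len].reshape(n, frame_len)
    return np.abs(np.fft.rfft(frames, axis=1))

def similarity_graph(frames, threshold):
    """Adjacency list with an edge i -> j wherever the spectral distance
    between frames i and j falls below `threshold` (conditional pruning)."""
    d = np.linalg.norm(frames[:, None, :] - frames[None, :, :], axis=2)
    return {i: [j for j in range(len(frames)) if j != i and d[i, j] < threshold]
            for i in range(len(frames))}

def jump_playback(graph, start, steps, rng):
    """Walk the graph: jump to a random similar frame when one exists,
    otherwise advance linearly to the next frame."""
    order, i, n = [start], start, len(graph)
    for _ in range(steps):
        targets = graph[i]
        i = int(rng.choice(targets)) if targets else (i + 1) % n
        order.append(i)
    return order

# Demo on a sine tone; thresholding at the median pairwise distance
# keeps roughly the closer half of all candidate edges.
rng = np.random.default_rng(1)
tone = np.sin(2 * np.pi * 440 / 44100 * np.arange(8 * 512))
frames = spectral_frames(tone)
d = np.linalg.norm(frames[:, None, :] - frames[None, :, :], axis=2)
g = similarity_graph(frames, threshold=float(np.median(d)) + 1e-9)
order = jump_playback(g, start=0, steps=10, rng=rng)
print(len(order))  # → 11: the start frame plus 10 playback steps
```

Swapping the threshold condition for onset, beat, or cluster constraints, as the abstract describes, changes which edges survive and hence whether the walk behaves like looping, granulation, or montage.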
This article posits the notion of the post-acousmatic. It considers the work of contemporary practitioners who are indebted to the Schaefferian heritage but pursue alternative trajectories from the established canonical discourse of acousmatic music. It outlines the authors’ definition of the term, along with a network of elements such as time, rhythm, pitch, dynamics, noise and performance, to discuss work that the authors consider to be a critique, an augmentation and an outgrowth of acousmatic music and thinking.
This article presents a new software toolbox to enable programmatic mining of sound banks for musicking and musicking-driven research. The toolbox is available for three popular creative coding environments currently used by “techno-fluent” musicians. The article describes the design rationale and functionality of the toolbox and its ecosystem, then the development methodology—several versions of the toolbox have been seeded to early adopters who have, in turn, contributed to the design. Examples of these early usages are presented, and we describe some observed musical affordances of the proposed approach to the exploration and manipulation of music corpora, as well as the main roadblocks encountered. We finally reflect on a few emerging themes for the next steps in building a community around critical programmatic mining of sound banks.