The phenomenon of musical 'consonance' is crucial for many musical styles, determining how notes are organized into scales, how scales are tuned, and how chords are constructed from scales. Western music theory assumes that consonance depends solely on frequency ratios between chord tones; however, psychoacoustic theories predict a dependency also on the 'timbre' (tone color) of the underlying sounds. We investigate this possibility with 24 large-scale behavioral experiments (4,666 participants), constructing detailed continuous maps of consonance judgments for different timbres, and simulating these judgments with representative computational models. We find that timbral manipulations can indeed modify consonance judgments, transforming both the magnitude and the location of consonance peaks. We show how these results shed new light on the mechanisms underlying consonance perception as well as the cultural evolution of scale systems. More broadly, this work showcases how large-scale behavioral experiments can inform classical questions in auditory perception.
The increasing prevalence of Artificial Intelligence (AI) in safety-critical contexts such as air-traffic control demands systems that are not only practical and efficient but also, to some extent, explainable to humans, so that they can be trusted and accepted. The present structured literature analysis examines n = 236 articles on the requirements for the explainability and acceptance of AI. Results include a comprehensive review of n = 48 articles on the information people need to perceive an AI as explainable, the information needed to accept an AI, and the representation and interaction methods that promote trust in an AI. Results indicate that the two main groups of users are developers, who require information about the internal operations of the model, and end users, who require information about AI results or behavior. Users' information needs vary in specificity, complexity, and urgency and must account for context, domain knowledge, and the user's cognitive resources. The acceptance of AI systems depends on information about the system's functions and performance, privacy and ethical considerations, goal-supporting information tailored to individual preferences, and information that establishes trust in the system. Information about the system's limitations and potential failures can increase acceptance and trust. Trusted interaction methods are human-like, including natural language, speech, text, and visual representations such as graphs, charts, and animations. Our results have significant implications for future human-centric AI systems and are suitable as input for further application-specific investigations of user needs.