The Theoretically Audible, but Practically Inaudible Range (TAPIR) is sound in the highest frequency band of human hearing; it is barely perceptible to most people but can be transmitted and received by typical transducers. The authors suggest the potential of TAPIR sound as a new medium for music, sonic arts, and mobile media.
Social tagging is one of the most popular methods for collecting crowdsourced information in galleries, libraries, archives, and museums (GLAMs). However, when the number of social tags grows rapidly, using them becomes problematic and, as a result, they are often left as simply big data that cannot be used for practical purposes. To revitalize the use of this crowdsourced information, we propose using social tags to link and cluster artworks, based on an experimental study using an online collection at the Gyeonggi Museum of Modern Art (GMoMA). We view social tagging as a folksonomy, in which artworks are classified by keywords reflecting the crowd's various interpretations and one artwork can belong to several different categories simultaneously. To leverage this strength of social tags, we used a clustering method called "link communities" to detect overlapping communities in a network of artworks constructed by computing similarities between all artwork pairs. We used this framework to identify semantic relationships and clusters of similar artworks. By comparing the clustering results with curators' manual classification results, we demonstrated the potential of social tagging data for automatically clustering artworks in a way that reflects the dynamic perspectives of crowds.
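The network-construction step described above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the tag sets, the use of Jaccard similarity, and the edge threshold are all assumptions for demonstration (the paper does not specify its similarity measure here), and the full link-communities algorithm for overlapping community detection is not reproduced.

```python
from itertools import combinations

# Hypothetical tag sets; the study uses real GMoMA social tags.
artwork_tags = {
    "A": {"landscape", "oil", "mountain"},
    "B": {"landscape", "watercolor", "mountain"},
    "C": {"portrait", "oil"},
}

def jaccard(a, b):
    """Similarity between two artworks as overlap of their tag sets."""
    return len(a & b) / len(a | b)

# Build a weighted similarity network over all artwork pairs,
# keeping only edges above a chosen threshold.
THRESHOLD = 0.2
edges = {
    (u, v): jaccard(artwork_tags[u], artwork_tags[v])
    for u, v in combinations(sorted(artwork_tags), 2)
    if jaccard(artwork_tags[u], artwork_tags[v]) >= THRESHOLD
}
```

Link communities would then cluster the *edges* of this network rather than its nodes, which is what lets one artwork belong to several communities at once.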
Objectives: This research aims to apply an auditory display to tumor imaging using fluorescence data, discuss its feasibility for in vivo tumor evaluation, and assess its potential for enhancing cancer perception. Methods: Xenografted mice underwent fluorescence imaging after an injection of cy5.5-glucose. Spectral information from the raw data was parametrized to emphasize the near-infrared fluorescence information, and the resulting parameters were mapped to control a sound synthesis engine in order to provide the auditory display. Drag–click maneuvers using in-house data navigation software generated sound from regions of interest (ROIs) in vivo. Results: Four different representations of the auditory display were acquired per ROI: (1) audio spectrum, (2) waveform, (3) numerical signal-to-noise ratio (SNR), and (4) the sound itself. SNRs were compared for statistical analysis. Compared with the no-tumor area, the tumor area produced sounds with a heterogeneous spectrum and waveform, and featured a higher SNR as well (3.63 ± 8.41 vs. 0.42 ± 0.085, p < 0.05). Sound from the tumor was perceived by the naked ear as high-timbred and unpleasant. Conclusions: By accentuating the specific tumor spectrum, an auditory display of fluorescence imaging data can generate sound that helps the listener detect and discriminate small tumorous conditions in living animals. Despite some practical limitations, it can aid in the translation of fluorescent images by facilitating information transfer to the clinician in in vivo tumor imaging.
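The parameter-to-sound mapping at the core of this approach can be sketched with a toy example. The linear SNR-to-frequency mapping, the frequency range, and the SNR cap below are assumptions for illustration only; the study's actual synthesis engine and mapping are not specified in the abstract.

```python
import math

def snr_to_freq(snr, base_hz=220.0, span_hz=660.0, snr_cap=10.0):
    """Map an ROI's signal-to-noise ratio to a tone frequency (Hz).
    Assumed linear mapping: higher SNR -> higher pitch."""
    clamped = min(max(snr, 0.0), snr_cap)
    return base_hz + span_hz * clamped / snr_cap

def sonify_roi(snr, duration=0.5, rate=8000):
    """Render the mapped frequency as sine samples in [-1, 1]."""
    freq = snr_to_freq(snr)
    return [math.sin(2 * math.pi * freq * t / rate)
            for t in range(int(duration * rate))]

tumor_tone = sonify_roi(3.63)       # mean tumor SNR reported in the study
background_tone = sonify_roi(0.42)  # mean no-tumor SNR
```

Under this mapping the tumor ROI yields an audibly higher-pitched tone than the background, which is the kind of perceptual contrast the study exploits.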
The act of remixing, creating a new cultural product by reconstructing an existing one, is performed in almost all cultural content fields today. Yet although the remix concept originated in music, theoretical discussion of its reconstruction principles in popular music, compared with other genres, is lacking. This paper analyzes the methods of transforming original songs in remixing and presents a theoretical basis for their systematic understanding. Ninety-four previously released popular music remixes were selected and compared with the original songs in terms of musical elements, thereby identifying representative types of reconstruction as a new standard for remix music creation. This comparison is then studied further to explore whether applying analytical methods from literary works allows a deeper understanding of the musical remix process. As a result, musical remix types were categorized as either 1) an "expansion" process that preserves the original accompaniment (i.e., background) and transforms the vocal composition, or 2) a "transposition" process that creates a new accompaniment while preserving the original vocals (i.e., characters). Based on this finding, musical remixing can be described as preserving either the vocals or the accompaniment of the original song while completely transforming the other element. It thus maintains the original piece's identity and aura while simultaneously revealing the difference. These results identify types of popular music remixing based on principles borrowed from a non-musical genre, analyze existing types, and suggest systematic strategies for creating new ones.
The authors present Sound Sketchbook, a mobile phone application featuring real-time sound synthesis based on simple yet evocative cross-modal data mappings. While originally designed as a tool for evaluation of audiovisual correspondences, the application is also appreciated as an enjoyable sound toy and has a strong potential as a multimedia education tool for children. The authors introduce the data mapping strategy of Sound Sketchbook with regard to synesthesia, describe new cross-modal interactions implemented on mobile devices, and discuss the effectiveness of the application based on user survey results.
We investigated the concepts, strategies, and functions of a 3D virtual design environment for collaborative, real-time architectural design based on our 3D comparative navigation system and virtual reality technology. Developing this 'comparison' concept has enabled interactive, real-time design in a 3D computer environment. Because participants must be able to easily understand a proposed design, systems that help them gain this understanding are required. Comparison is an effective way to gain such an understanding, but comparing one proposed design with another using existing systems is difficult because the user must operate the viewpoints separately. We therefore created a prototype system that displays different contents simultaneously while controlling the viewpoints automatically, facilitating comparison by displaying related parts of the designs together. In this paper, we describe the concepts, strategies, and functions of this design environment and evaluate its advantages and disadvantages for collaborative architectural design.
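The core idea of automatic viewpoint control, moving one view and having the other follow so two design alternatives stay visually aligned, can be sketched as follows. This is a minimal stand-in for the paper's system: the class names, the camera representation, and the rigid lock-step coupling are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Viewpoint:
    """A camera pose: position in model space plus a yaw angle (degrees)."""
    position: tuple = (0.0, 0.0, 0.0)
    yaw: float = 0.0

class ComparativeNavigator:
    """Keeps the viewpoints of two design alternatives locked together,
    so navigating one view moves the other identically (a hypothetical
    sketch of automatic viewpoint control, not the paper's implementation)."""

    def __init__(self):
        self.views = [Viewpoint(), Viewpoint()]

    def move(self, dx, dy, dz, dyaw=0.0):
        # Apply the same translation and rotation to every view.
        for v in self.views:
            x, y, z = v.position
            v.position = (x + dx, y + dy, z + dz)
            v.yaw += dyaw

nav = ComparativeNavigator()
nav.move(1.0, 0.0, 2.0, dyaw=15.0)  # both views now show the same region
```

A fuller system would instead align semantically related parts of two different designs, but the lock-step coupling above captures why users no longer need to operate each viewpoint separately.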