Image degradation due to wavefront aberrations can be corrected with adaptive optics (AO). In a typical AO configuration, the aberrations are measured directly with a Shack-Hartmann wavefront sensor and corrected with a deformable mirror in order to attain diffraction-limited performance for the main imaging system. Wavefront sensorless adaptive optics (SAO) instead uses the image information itself to determine the aberrations and to guide the shaping of the deformable mirror, often iteratively. In this report, we present a Deep Reinforcement Learning (DRL) approach to SAO correction on a custom-built fluorescence confocal scanning laser microscope. The experimental results demonstrate the improved performance of the DRL approach relative to a Zernike Mode Hill Climbing algorithm for SAO.
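The hill-climbing baseline mentioned above can be illustrated with a minimal sketch: perturb one Zernike-mode coefficient at a time and keep any perturbation that improves an image-quality metric. The function names, step size, and toy metric below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def hill_climb_zernike(metric, n_modes=5, step=0.1, n_iters=10):
    """Coordinate-wise hill climbing over Zernike-mode coefficients.

    `metric(coeffs)` returns an image-quality score (higher is better),
    e.g. total detected intensity in a confocal system.
    """
    coeffs = np.zeros(n_modes)
    best = metric(coeffs)
    for _ in range(n_iters):
        for i in range(n_modes):
            for delta in (+step, -step):
                trial = coeffs.copy()
                trial[i] += delta
                score = metric(trial)
                if score > best:
                    coeffs, best = trial, score
    return coeffs, best

# Toy metric (an assumption for illustration): quality peaks when the
# applied coefficients cancel a fixed, hidden aberration.
true_aberration = np.array([0.3, -0.2, 0.1, 0.0, -0.1])
quality = lambda c: -np.sum((c + true_aberration) ** 2)

corrected, score = hill_climb_zernike(quality, n_modes=5, step=0.1, n_iters=10)
```

In a real SAO loop the metric evaluation would involve applying the trial coefficients to the deformable mirror and acquiring an image, which is why such iterative searches are slow and why a learned (DRL) policy can reduce the number of corrections needed.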
Chinese and Western Hip Hop pieces are clustered using timbre-based Music Information Retrieval (MIR) and machine learning (ML) algorithms. Timbre features such as spectral centroid, roughness, sharpness, sound pressure level (SPL), flux, etc. were extracted with psychoacoustically motivated algorithms from 38 contemporary Chinese and 38 Western 'classical' (USA, Germany, France, Great Britain) Hip Hop pieces. All features were integrated over each piece as mean and standard deviation. A Kohonen self-organizing map, as integrated in the Computational Music and Sound Archive (COMSAR\cite{COMSAR}) and apollon\cite{apollon} frameworks, was trained on different combinations of the mean- and standard-deviation-integrated feature vectors. None of the mean-integrated features clustered the corpora; the SPL standard deviation, however, perfectly separated Chinese and Western pieces. The standard deviations of spectral flux, sharpness, and spread created two sub-clusters within the Western corpus, where only Western pieces had strong values. The spectral centroid standard deviation sub-clustered the Chinese Hip Hop pieces, where again only Chinese pieces had strong values. These findings point to different production, composition, or mastering strategies. For example, the clear SPL-based clusters point to the 'loudness war' of contemporary mastering, which uses heavy compression to achieve high perceived loudness.
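The clustering step above relies on a Kohonen self-organizing map trained on per-piece feature vectors. The following is a generic minimal SOM sketch in NumPy, not the apollon/COMSAR API; grid size, learning-rate schedule, and the two-cluster demo data are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(4, 4), n_epochs=10, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small Kohonen self-organizing map on feature vectors."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    ys, xs = np.mgrid[0:h, 0:w]
    for epoch in range(n_epochs):
        lr = lr0 * (1 - epoch / n_epochs)          # decaying learning rate
        sigma = sigma0 * (1 - epoch / n_epochs) + 0.5  # shrinking neighborhood
        for x in rng.permutation(data):
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
            g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights

def best_matching_unit(weights, x):
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

# Demo with two well-separated synthetic "feature" clusters, standing in
# for e.g. SPL standard deviation vectors of the two corpora.
rng = np.random.default_rng(1)
corpus_a = rng.normal(0.0, 0.1, size=(20, 2))
corpus_b = rng.normal(10.0, 0.1, size=(20, 2))
weights = train_som(np.vstack([corpus_a, corpus_b]))
bmu_a = best_matching_unit(weights, np.array([0.0, 0.0]))
bmu_b = best_matching_unit(weights, np.array([10.0, 10.0]))
```

After training, pieces from separable corpora map to distinct regions of the grid; inspecting which pieces share a best-matching unit is what reveals the sub-clusters described above.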
The music of the Kachin ethnic group of northern Myanmar is compared to Uyghur music of Xinjiang in western China using timbre and pitch feature extraction and machine learning. Although the two regions are separated by Tibet, traces of the Xinjiang muqam tradition might be found in Kachin music due to myths of Kachin origin as well as linguistic similarities, e.g., the Kachin term 'makan' for a musical piece. Features were extracted with the apollon and COMSAR (Computational Music and Sound Archive) frameworks, on which the Ethnographic Sound Recordings Archive (ESRA) is based, using ethnographic recordings from ESRA alongside additional pieces. In terms of pitch, the tonal systems were compared using a Kohonen self-organizing map (SOM), which clearly clusters Kachin and Uyghur musical pieces. This is mainly caused by the Xinjiang muqam music showing just fifths and fourths, while Kachin pieces tend to have higher fifths and fourths, next to other dissimilarities. Likewise, the standard deviations of spectral centroid and spectral sharpness clearly tell Uyghur from Kachin pieces, with Uyghur music showing much larger deviations. Although more features, such as rhythm or melody, will be compared in the future, these already strong findings might introduce an alternative methodology for comparing ethnic groups beyond traditional linguistic definitions.
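Interval comparisons of this kind are conventionally expressed in cents, where an interval with frequency ratio r spans 1200 · log2(r) cents. A quick sketch of the reference values for the just fifth (3:2) and just fourth (4:3) mentioned above, against which "higher" Kachin intervals would be measured:

```python
import math

def cents(ratio):
    """Interval size in cents for a frequency ratio."""
    return 1200 * math.log2(ratio)

just_fifth = cents(3 / 2)   # about 702 cents
just_fourth = cents(4 / 3)  # about 498 cents
octave = cents(2)           # exactly 1200 cents
```

For comparison, the equal-tempered fifth and fourth are exactly 700 and 500 cents, so a tonal system's fifths and fourths can be characterized by their deviation from these references.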