The Level of Service/Case Management Inventory (LS/CMI) and the Youth version (YLS/CMI) generate an assessment of risk/need across eight domains that are considered to be relevant for girls and boys and for women and men. Aggregated across five data sets, the predictive validity of each of the eight domains was gender-neutral. The composite total score (LS/CMI total risk/need) was strongly associated with the recidivism of males (mean r = .39, mean AUC = .746) and very strongly associated with the recidivism of females (mean r = .53, mean AUC = .827). The enhanced validity of LS total risk/need with females was traced to the exceptional validity of Substance Abuse with females. The intra-data set conclusions survived the introduction of two very large samples composed of female offenders exclusively. Finally, the mean incremental contributions of gender and the gender-by-risk level interactions in the prediction of criminal recidivism were minimal compared to the relatively strong validity of the LS/CMI risk level. Although the variance explained by gender was minimal and although high-risk cases were high-risk cases regardless of gender, the recidivism rates of lower risk females were lower than the recidivism rates of lower risk males, suggesting possible implications for test interpretation and policy.
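The predictive-validity statistics quoted above (Pearson r and AUC) have simple computational definitions; as a hedged illustration with an invented toy data set (the scores and outcomes below are hypothetical, not from the study), the AUC reduces to the probability that a randomly chosen recidivist outranks a randomly chosen non-recidivist on the risk score:

```python
from itertools import product

def auc(scores, outcomes):
    """Area under the ROC curve via the rank (Mann-Whitney) identity:
    the probability that a randomly drawn positive case scores higher
    than a randomly drawn negative case, counting ties as 1/2."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

# Hypothetical total risk/need scores and recidivism outcomes (1 = recidivated).
scores   = [3, 9, 12, 18, 22, 27, 31, 35]
outcomes = [0, 0,  1,  0,  1,  1,  0,  1]
print(auc(scores, outcomes))  # -> 0.75
```

An AUC of .5 indicates chance-level discrimination and 1.0 perfect discrimination, which is why the reported values of .746 (males) and .827 (females) indicate moderate-to-strong predictive validity.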
In Study 1, 198 men completed the Right-Wing Authoritarianism, Sex Role Ideology, Hostility Towards Women, Acceptance of Interpersonal Violence, Adversarial Sexual Beliefs, and Rape Myth Acceptance scales, as well as measures of past sexually aggressive behavior and likelihood of future sexual aggression. As predicted, authoritarianism and sex role ideology were as closely related to self-reported past and potential future sexually aggressive behavior as were the specifically sexual and aggression-related predictors. Among 134 men in Study 2, authoritarianism and sex guilt positively correlated with each other and with self-reported past sexual aggression. In both studies, the relationship of authoritarianism and sexual aggression was larger in community than in university samples.
Music elicits profound emotions; however, the time-course of these emotional responses during listening sessions is unclear. We investigated the length of time required for participants to initiate emotional responses ("integration time") to 138 musical samples from a variety of genres by monitoring their real-time continuous ratings of emotional content and arousal level of the musical excerpts (made using a joystick). On average, participants required 8.31 s (SEM = 0.10) of music before initiating emotional judgments. Additionally, we found that: 1) integration time depended on familiarity of songs; 2) soul/funk, jazz, and classical genres were more quickly assessed than other genres; and 3) musicians did not differ significantly in their responses from those with minimal instrumental musical experience. Results were partially explained by the tempo of musical stimuli and suggest that decisions regarding musical structure, as well as prior knowledge and musical preference, are involved in the emotional response to music.
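"Integration time" in this design can be operationalized as the latency from stimulus onset until the continuous rating first departs from its resting position. A minimal sketch, assuming a fixed sampling rate and a hypothetical deflection threshold (the study's exact criterion is not stated here):

```python
def integration_time(ratings, sample_rate_hz, threshold=0.05):
    """Return the time in seconds of the first sample at which the
    continuous emotion rating deviates from rest (0.0) by more than
    `threshold`, or None if no judgment was initiated."""
    for i, r in enumerate(ratings):
        if abs(r) > threshold:
            return i / sample_rate_hz
    return None

# Hypothetical joystick trace sampled at 10 Hz: no movement for the
# first 8 samples, then a deflection toward positive valence.
trace = [0.0] * 8 + [0.2, 0.4, 0.5]
print(integration_time(trace, sample_rate_hz=10))  # -> 0.8
```

Averaging this latency over excerpts and participants yields a summary statistic of the kind reported above (8.31 s).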
Automated music emotion recognition (MER) is a challenging task in Music Information Retrieval (MIR) with wide-ranging applications. Some recent studies pose MER as a continuous regression problem in the Arousal-Valence (AV) plane. These consist of variations on a common architecture having a universal model of emotional response, a common repertoire of low-level audio features, a bag-of-frames approach to audio analysis, and relatively small data sets. These approaches achieve some success at MER and suggest that further improvements are possible with current technology. Our contribution to the state of the art is to examine just how far one can go within this framework, and to investigate what the limitations of this framework are. We present the results of a systematic study conducted in an attempt to maximize the prediction performance of an automated MER system using the architecture described. We begin with a carefully constructed data set, emphasizing quality over quantity. We address affect induction rather than affect attribution. We consider a variety of algorithms at each stage of the training process, from preprocessing to feature selection and model selection, and we report the results of extensive testing. We found that: (1) none of the variations we considered leads to a substantial improvement in performance, which we present as evidence of a limit on what is achievable under this architecture, and (2) the size of the small data sets that are commonly used in the MER literature limits the possibility of improving the set of features used in MER due to the phenomenon of Subset Selection Bias. We conclude with some proposals for advancing the state of the art.
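The architecture described, excerpt-level features regressed onto continuous arousal-valence targets, can be sketched in a few lines. This is a hedged illustration, not the paper's pipeline: synthetic random vectors stand in for bag-of-frames audio descriptors, synthetic annotations stand in for human ratings, and ridge regression stands in for the unspecified model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for bag-of-frames summaries: in a real system
# each row would hold statistics of frame-level audio descriptors
# (e.g. spectral features) for one musical excerpt.
n_excerpts, n_features = 60, 16
X = rng.normal(size=(n_excerpts, n_features))

# Hypothetical continuous annotations: one arousal and one valence
# column per excerpt, generated from a noisy linear model.
true_w = rng.normal(size=(n_features, 2))
Y = X @ true_w + 0.1 * rng.normal(size=(n_excerpts, 2))

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

W = ridge_fit(X, Y)
pred = X @ W  # predicted (arousal, valence) per excerpt
```

The Subset Selection Bias finding is a warning about exactly this kind of pipeline: with a small number of excerpts, choosing the feature subset that scores best on the same data overstates performance on new music.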
The training of musicians begins by teaching basic musical concepts, a collection of knowledge commonly known as musicianship. Computer programs designed to implement musical skills (e.g., to make sense of what they hear, perform music expressively, or compose convincing pieces) can similarly benefit from access to a fundamental level of musicianship. Recent research in music cognition, artificial intelligence, and music theory has produced a repertoire of techniques that can make the behavior of computer programs more musical. Many of these were presented in a recently published book/CD-ROM entitled Machine Musicianship. For use in interactive music systems, we are interested in those that are fast enough to run in real time and need only reference the material as it appears in sequence. This talk will review several applications that are able to identify the tonal center of musical material during performance. Beyond this specific task, the design of real-time algorithmic listening through the concurrent operation of several connected analyzers is examined. The presentation includes discussion of a library of C++ objects that can be combined to perform interactive listening and a demonstration of their capability.
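One classic approach to the tonal-center task mentioned above (offered here as an illustration, not necessarily the algorithm implemented in Machine Musicianship) is Krumhansl-Schmuckler key finding: correlate the pitch-class distribution of the notes heard so far against the 24 rotated major and minor key profiles and report the best match. Because it needs only a running histogram, it fits the real-time, in-sequence constraint described above:

```python
# Krumhansl-Kessler key profiles (tonic first).
MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def tonal_center(pitch_classes):
    """Correlate the pitch-class histogram of the notes heard so far
    with all 24 rotated key profiles; return the best-matching key."""
    hist = [0.0] * 12
    for pc in pitch_classes:
        hist[pc % 12] += 1.0
    best = None
    for tonic in range(12):
        for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
            rotated = [profile[(pc - tonic) % 12] for pc in range(12)]
            r = pearson(hist, rotated)
            if best is None or r > best[0]:
                best = (r, f"{NAMES[tonic]} {mode}")
    return best[1]

# The notes of a C-major scale should be heard as centered on C major.
print(tonal_center([0, 2, 4, 5, 7, 9, 11]))  # -> C major
```

In a live setting the histogram would be updated note by note (often with decay weighting so the estimate tracks modulations), which is the kind of incremental analyzer the talk describes.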