ACII is the premier international forum for presenting the latest research on affective computing. In this work, we monitor, quantify, and reflect on the diversity of the ACII conference over time by computing a set of indexes. We measure diversity in terms of gender, geographic location, and affiliation type (academia vs. research centres vs. industry), and consider three different actors: authors, keynote speakers, and organizers. The results raise awareness of the limited diversity in the field, across all studied facets and in comparison with other AI conferences. While gender diversity is relatively high, equality is far from being reached. The community is dominated by European, Asian, and North American researchers, leaving the remaining continents under-represented. There is also a marked absence of companies and research centres focusing on applied research and products. This study fosters discussion in the community on the need for diversity and on the related challenge of minimizing potential biases of the developed systems towards the represented groups. We intend our paper to provide a first analysis that can serve as a monitoring tool when implementing diversity initiatives. The data collected for this study are publicly released through the European divinAI initiative.
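The abstract does not specify which diversity indexes were computed; a common choice for this kind of monitoring is the Shannon diversity index together with Pielou's evenness. The sketch below is illustrative only (the category labels and function names are assumptions, not taken from the paper):

```python
from collections import Counter
from math import log

def shannon_index(labels):
    """Shannon diversity index H = -sum(p_i * ln p_i) over category shares."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * log(c / total) for c in counts.values())

def evenness(labels):
    """Pielou's evenness J = H / ln(S), where S is the number of categories.
    J = 1.0 means perfectly balanced representation."""
    s = len(set(labels))
    return shannon_index(labels) / log(s) if s > 1 else 0.0

# Hypothetical continent affiliations of authors at one conference edition
authors = ["Europe", "Europe", "Asia", "North America", "Europe", "Asia"]
h = shannon_index(authors)
j = evenness(authors)
```

Tracking such an index per edition, per actor type (authors, keynotes, organizers), and per facet (gender, geography, affiliation) gives the kind of longitudinal monitoring the abstract describes.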
DivinAI is an open and collaborative initiative promoted by the European Commission's Joint Research Centre to measure and monitor diversity indicators related to AI conferences, with special focus on gender balance, geographical representation, and the presence of academia vs. companies. This paper summarizes the main achievements and lessons learnt during the first year of the DivinAI project, and proposes a set of recommendations for its further development and maintenance by the AI community.
We present a platform and a dataset to support research on Music Emotion Recognition (MER). We developed the Music Enthusiasts platform to improve the gathering and analysis of the so-called "ground truth" needed as input to MER systems. Firstly, our platform engages participants through citizen science strategies to generate music emotion annotations: it presents didactic information and musical recommendations as incentives, and collects data on demographics, mood, and language from each participant. Participants annotated each music excerpt with single free-text emotion words (in their native language), distinct forced-choice emotion categories, preference, and familiarity. Additionally, participants stated the reasons for each annotation, including reasons that distinguish emotion perception from emotion induction. Secondly, our dataset was created for personalized MER and contains information from 181 participants, 4721 annotations, and 1161 music excerpts. To showcase the use of the dataset, we present a methodology for personalizing MER models based on active learning. The experiments show evidence that using the judgment of the crowd as prior knowledge for active learning allows for more effective personalization of MER systems on this particular dataset. Our dataset is publicly available and we invite researchers to use it for testing MER systems.
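The abstract does not detail the active-learning strategy; one plausible reading of "using the judgment of the crowd as prior knowledge" is a query-selection score that blends the model's uncertainty with crowd disagreement on each excerpt. A minimal sketch under that assumption (function and parameter names are hypothetical):

```python
import numpy as np

def select_queries(probs, crowd_agreement, k=2, alpha=0.5):
    """Pick the k excerpts to ask the listener to annotate next.

    probs: (n, n_classes) array of model class probabilities per excerpt.
    crowd_agreement: (n,) share of crowd annotators agreeing on the
        majority label; low agreement marks a subjective excerpt where
        a personal annotation is most informative.
    alpha: weight trading off model entropy against crowd disagreement.
    """
    eps = 1e-12
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    entropy /= np.log(probs.shape[1])          # normalise entropy to [0, 1]
    disagreement = 1.0 - crowd_agreement
    score = alpha * entropy + (1.0 - alpha) * disagreement
    return np.argsort(score)[::-1][:k]         # most informative excerpts first
```

After each round, the selected excerpts are annotated by the participant and the personalized model is retrained, which is the usual active-learning loop.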
Shared practices to assess the diversity of retrieval system results are still debated in the Information Retrieval community, partly because of the challenges of determining what diversity means in specific scenarios, and of understanding how diversity is perceived by end-users. The field of Music Information Retrieval is not exempt from this issue. Even though fields such as Musicology and the Sociology of Music have a long tradition of questioning the representation and impact of diversity in cultural environments, such knowledge has not yet been embedded into the design and development of music technologies. In this paper, focusing on electronic music, we investigate the characteristics of listeners, artists, and tracks that influence the perception of diversity. Specifically, we center our attention on 1) understanding the relationship between perceived diversity and computational methods to measure diversity, and 2) analyzing how listeners' domain knowledge and familiarity influence such perceived diversity. To accomplish this, we design a user study in which listeners are asked to compare pairs of lists of tracks and artists, and to select the more diverse list from each pair. We compare participants' ratings with results obtained through computational models built using audio track features and artist attributes. We find that such models are generally aligned with participants' choices when most of them agree that one list is more diverse than the other, while they exhibit mixed behaviour in cases where participants have little agreement. Moreover, we observe how differences in domain knowledge, familiarity, and demographics can influence the level of agreement among listeners, and between listeners and diversity metrics computed automatically.
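The computational models built from audio features are not specified in the abstract; a common baseline for list diversity is the average pairwise distance between item feature vectors, which is sufficient to reproduce the pairwise comparison setup described above. The following sketch is an assumption-laden illustration, not the paper's actual metric:

```python
import numpy as np

def list_diversity(features):
    """Average pairwise Euclidean distance between track feature vectors.
    Higher values mean the tracks in the list are more spread out in
    feature space, a common proxy for list diversity."""
    X = np.asarray(features, dtype=float)
    n = len(X)
    dists = [np.linalg.norm(X[i] - X[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

def more_diverse(list_a, list_b):
    """Return 'A' or 'B' for the list this metric deems more diverse,
    mirroring the forced-choice comparison shown to participants."""
    return "A" if list_diversity(list_a) > list_diversity(list_b) else "B"
```

Comparing `more_diverse` against participants' majority choices per pair gives exactly the model-vs-listener agreement analysis the study reports.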
Ranking, recommendation, and retrieval systems are widely used in online platforms and other societal systems, including e-commerce, media streaming, admissions, gig platforms, and hiring. In the recent past, a large "fair ranking" research literature has developed around making these systems fair to the individuals, providers, or content being ranked. Most of this literature defines fairness for a single instance of retrieval, or as a simple additive notion over multiple instances of retrieval over time. This work provides a critical overview of this literature, detailing the often context-specific concerns that such an approach misses: the gap between high ranking placements and true provider utility, spillovers and compounding effects over time, induced strategic incentives, and the effect of statistical uncertainty. We then provide a path forward for a more holistic and impact-oriented fair ranking research agenda, including methodological lessons from other fields and the role of the broader stakeholder community in overcoming data bottlenecks and designing effective regulatory environments.
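To make the "simple additive notion" concrete: in much of this literature a provider's benefit from a ranking is modeled as position-weighted exposure, and fairness over time is assessed by summing that exposure across repeated rankings. A minimal sketch of that amortized accounting (the position-weight function is one common choice among several, and all names here are illustrative):

```python
def exposure(ranking, n_providers, weight=lambda pos: 1.0 / (pos + 1)):
    """Position-weighted exposure each provider receives from one ranking.
    ranking is a list of provider indices ordered from top to bottom;
    the default weight decays with rank position, as in common
    position-bias models."""
    exp = [0.0] * n_providers
    for pos, provider in enumerate(ranking):
        exp[provider] += weight(pos)
    return exp

def amortized_exposure(rankings, n_providers):
    """The 'simple additive notion': sum exposure across repeated
    rankings over time, then compare totals across providers."""
    totals = [0.0] * n_providers
    for r in rankings:
        for p, e in enumerate(exposure(r, n_providers)):
            totals[p] += e
    return totals
```

The critique summarized above is precisely that equalizing these totals need not equalize real provider outcomes, because exposure is an imperfect proxy for utility and ignores compounding and strategic effects.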
The understanding of emotions in music has motivated research across diverse areas of knowledge for decades. In the field of computer science, there is particular interest in developing algorithms to "predict" the emotions in music perceived by or induced in a listener. However, gathering reliable "ground truth" data for modeling the emotional content of music poses challenges, since emotion-annotation tasks are time-consuming, expensive, and cognitively demanding due to the inherent subjectivity and cross-disciplinary nature of emotion. Citizen science projects have proven to be a useful approach to problems of this kind, where collaborators must be recruited for large-scale tasks. We developed a platform for annotating emotional content in musical pieces following a citizen science approach, benefiting not only the researchers, who gain the generated dataset, but also the volunteers, who are engaged in the research project both by providing annotations and through their self- and community-awareness of the emotional perception of music. Likewise, gamification mechanisms motivate participants to explore and discover new music based on its emotional content. Preliminary user evaluations showed that the platform design is in line with the motivations of the general public, and that the citizen science approach offers an iterative refinement that enhances the quantity and quality of contributions by involving volunteers in the design process. The usability of the platform was acceptable, although some features require improvement.