Although the domestic cat (Felis catus) is probably the most widespread companion animal in the world and interacts in a complex and multifaceted way with humans, the human–cat relationship and reciprocal communication have received far less attention compared, for example, to the human–dog relationship. Only a limited number of studies have considered what people understand of cats’ human-directed vocal signals during daily cat–owner interactions. The aim of the current study was to investigate to what extent adult humans recognize cat vocalizations, namely meows, emitted in three different contexts: waiting for food, isolation, and brushing. A second aim was to evaluate whether the level of human empathy toward animals and cats and the participant’s gender would positively influence the recognition of cat vocalizations. Finally, as a serendipitous result, some insights are provided on which acoustic features are relevant to the main investigation. Two hundred twenty-five adult participants were asked to complete an online questionnaire designed to assess their knowledge of cats and to evaluate their empathy toward animals (Animal Empathy Scale). In addition, participants had to listen to six cat meows recorded in three different contexts and specify the context in which they were emitted and their emotional valence. Less than half of the participants were able to associate cats’ vocalizations with the correct context in which they were emitted; the best recognized meow was that emitted while waiting for food. Female participants and cat owners showed a higher ability to correctly classify the vocalizations emitted by cats during brushing and isolation. A high level of empathy toward cats was significantly associated with a better recognition of meows emitted during isolation. Regarding the emotional valence of meows, it emerged that cat vocalizations emitted during isolation are perceived by people as the most negative, whereas those emitted during brushing are perceived as the most positive. Overall, it emerged that, although meowing is mainly a human-directed vocalization and in principle represents a useful tool for cats to communicate emotional states to their owners, humans are not particularly able to extract precise information from cats’ vocalizations and show a limited capacity for discrimination, based mainly on their experience with cats and influenced by their empathy toward them.
Cats employ vocalizations for communicating information, and thus their sounds can carry a wide range of meanings. Concerning vocalization, an aspect of increasing relevance, directly connected with the welfare of these animals, is its emotional interpretation and the recognition of the production context. To this end, this work presents a proof of concept facilitating the automatic analysis of cat vocalizations based on signal processing and pattern recognition techniques, aimed at demonstrating whether the emission context can be identified from meowing vocalizations, even if recorded in sub-optimal conditions. We rely on a dataset including vocalizations of Maine Coon and European Shorthair breeds emitted in three different contexts: waiting for food, isolation in an unfamiliar environment, and brushing. Towards capturing the emission context, we extract two sets of acoustic parameters, i.e., mel-frequency cepstral coefficients and temporal modulation features. Subsequently, these are modeled using a classification scheme based on a directed acyclic graph dividing the problem space. The experiments we conducted demonstrate the superiority of such a scheme over a series of generative and discriminative classification solutions. These results open up new perspectives for deepening our knowledge of acoustic communication between humans and cats and, in general, between humans and animals.
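To make the pipeline described above more concrete, the following is a minimal sketch of how meow recordings could be summarized with mel-frequency cepstral coefficients and then assigned to one of the three contexts via successive pairwise decisions arranged as a decision DAG. It is not the authors' implementation: the file layout, label names, classifier choice (SVMs via scikit-learn), and hyperparameters are assumptions, and the temporal modulation features used in the paper are omitted here.

# Hedged sketch: MFCC summarization of a meow recording plus a pairwise,
# DAG-style decision scheme over three emission contexts. Paths, labels,
# and hyperparameters are hypothetical.
import numpy as np
import librosa
from sklearn.svm import SVC

CONTEXTS = ["food", "isolation", "brushing"]

def mfcc_features(path, sr=16000, n_mfcc=13):
    """Load a recording and summarize its MFCC trajectory (per-coefficient mean and std)."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_pairwise(X, y):
    """Train one binary SVM for each pair of contexts."""
    models = {}
    for i, a in enumerate(CONTEXTS):
        for b in CONTEXTS[i + 1:]:
            mask = np.isin(y, [a, b])
            clf = SVC(kernel="rbf", C=1.0, gamma="scale")
            clf.fit(X[mask], y[mask])
            models[(a, b)] = clf
    return models

def dag_predict(models, x):
    """Classify one feature vector by eliminating one context per pairwise test."""
    remaining = list(CONTEXTS)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        clf = models[(a, b)] if (a, b) in models else models[(b, a)]
        winner = clf.predict(x.reshape(1, -1))[0]
        remaining.remove(a if winner != a else b)
    return remaining[0]

In this sketch each internal test discards one candidate context, so a three-class problem is resolved with at most two binary decisions; the paper's actual graph structure and feature set may differ.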
Sonification is a fairly new term for many scientists, who are often unaware of its multiple use cases. Even if some general definitions of the concept of sonification are commonly accepted, heterogeneous techniques, differing significantly in approach, means, and goals, are available. In this work we propose a reference system, based on the inherent properties of the sonic output rather than on the data itself, that is useful for interpreting already-existing sonification instances and for planning new sonification tasks. Validation has been conducted by automatically analyzing available experiments and examples and placing them in the proposed sonification space, according to the time-granularity and abstraction-level dimensions. This work can constitute a starting point for future research on computer-assisted sonification. It will be beneficial to a wide range of readers, in particular those from different disciplines looking for new ways to present and analyze data.
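As a purely illustrative sketch of the kind of two-dimensional reference space described above, sonification instances could be represented as points with coordinates along the time-granularity and abstraction-level axes. The example entries and numeric coordinates below are invented for illustration and are not taken from the paper.

# Hypothetical representation of sonification instances in a 2-D space
# spanned by time granularity and abstraction level (coordinates invented).
from dataclasses import dataclass

@dataclass
class SonificationInstance:
    name: str
    time_granularity: float   # e.g., 0 = sample-by-sample rendering, 1 = coarse summaries
    abstraction_level: float  # e.g., 0 = direct data-to-sound mapping, 1 = highly symbolic output

examples = [
    SonificationInstance("EEG audification", 0.1, 0.0),
    SonificationInstance("Parameter-mapped time series", 0.5, 0.5),
    SonificationInstance("Earcon-based alert scheme", 0.9, 0.9),
]

for ex in examples:
    print(f"{ex.name}: granularity={ex.time_granularity}, abstraction={ex.abstraction_level}")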