Research suggests that music has a powerful effect on the human mind and body. This paper explores the impact of music as an intervention. For this purpose, the X-System technology is used to curate relaxing and enlivening music playlists designed to positively influence wellbeing and emotional state during the COVID-19 pandemic. A wellbeing model grounded in the autopoietic theory of self-organisation in living systems is developed to inform the evaluation of the intervention's impact and to ensure the reliability of the data. More specifically, data quality is enhanced by focusing the participants' awareness on their immediate embodied experience of physical, emotional and relational wellbeing and their sense of pleasure/displeasure before and after listening to a preferred playlist. The statistical analysis shows significant positive changes in emotional wellbeing, valence and sense of meaning (p < 0.001) with a medium effect size. It also reveals a statistically significant change in physical wellbeing (p = 0.009) with a small effect size. With the relaxing playlists leading to a decrease in arousal levels and the enlivening playlists to an increase in activation, it is concluded that appropriately curated playlists may be able to lead the listener to positive relaxation or activation states, or indeed to positive mood changes that may have health benefits.
Despite the significant improvement in accuracy that supervised learning has brought to person re-identification (re-id), the availability of sufficient fully annotated data from the concerned camera views poses a problem for real-life applications. To alleviate the burden of intensive data annotation, one option is to resort to unsupervised methods. This has motivated us to propose a novel algorithm for unsupervised video-based person re-id applications. To achieve this, the frames of a person video tracklet are divided into a set of clusters that are subsequently matched using a distance measure based on the Naive Bayes nearest neighbor algorithm and the Spearman distance. Since person sequences may suffer from substantial changes in viewpoint, pose, and illumination distortions, our technique allows the rejection of poor and noisy clusters while retaining the most discriminative ones for matching. Experiments on three widely used datasets for video person re-id (PRID2011, iLIDS-VID and MARS) have been carried out, and the results demonstrate the superiority of the proposed approach.

INDEX TERMS Person re-identification, Spearman distance, unsupervised method, video surveillance.

I. INTRODUCTION

Matching people across cameras is of great interest, especially for security applications. When a query person is presented, retrieving that person from a gallery of people captured under a different camera view is known as person re-identification (re-id). In the case where subjects are represented by video sequences, the problem of video-based person re-id is encountered. The past few years have witnessed a large focus on metric learning [1]-[4] and deep learning [5]-[14] to solve the re-id problem. These methods have largely contributed to the advancement of the field by considerably boosting performance.
However, most of these methods require a sufficient amount of annotated data from the concerned camera views to train the model before re-id can take place. This is a hindrance to the applicability of re-id systems to real-world problems. In addition to the high annotation cost, the availability of enough matched instances under the camera views in question is a requirement that is not easily fulfilled. These reasons motivate our work towards an unsupervised approach.

The associate editor coordinating the review of this manuscript and approving it for publication was Longzhi Yang.
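The cluster-matching step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' exact algorithm: the Spearman distance is implemented as one minus the Spearman rank correlation of two feature vectors, the Naive Bayes nearest neighbor idea reduces to accumulating each query cluster's distance to its nearest gallery cluster, and the `keep_ratio` parameter (a name introduced here) models the rejection of poor or noisy clusters before aggregation.

```python
import numpy as np

def spearman_distance(x, y):
    """Spearman distance = 1 - Spearman rank correlation.

    Ranks are computed with a double argsort; ties are broken
    arbitrarily, which is acceptable for this sketch.
    """
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    denom = np.linalg.norm(rx) * np.linalg.norm(ry)
    if denom == 0.0:  # constant vector: correlation undefined
        return 1.0
    return 1.0 - float(rx @ ry) / denom

def nbnn_tracklet_distance(query_clusters, gallery_clusters, keep_ratio=0.8):
    """NBNN-style tracklet distance with noisy-cluster rejection.

    For each query cluster, take the distance to its nearest gallery
    cluster, then keep only the best-matching fraction of clusters
    (discarding the rest as noisy) and sum the retained distances.
    """
    d = [min(spearman_distance(q, g) for g in gallery_clusters)
         for q in query_clusters]
    d.sort()
    kept = d[: max(1, int(len(d) * keep_ratio))]
    return sum(kept)
```

A tracklet matched against itself yields a distance near zero, while tracklets with reversed feature orderings (rank correlation -1) yield distances close to the maximum of 2 per cluster.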
The triplet loss function has seen extensive use within person re-identification. Most works focus on either improving the mining algorithm or adding new terms to the loss function itself. Our work instead concentrates on two other core components of the triplet loss that have been under-researched. First, we improve the standard Euclidean distance with dynamic weights, which are selected based on the standard deviation of features across the batch. Second, we exploit channel attention via a squeeze-and-excitation unit in the backbone model to emphasise important features throughout all layers of the model. This ensures that the output feature vector is a better representation of the image and is also more suitable for use within our dynamically weighted Euclidean distance function. We demonstrate that our alterations provide significant performance improvements across popular re-identification datasets, including an almost 10% mAP improvement on the CUHK03 dataset. The proposed model attains results competitive with many state-of-the-art person re-identification models.
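A minimal sketch of the dynamically weighted Euclidean distance idea, assuming the weights are simply the normalised per-dimension standard deviations of the current batch; the paper's exact weighting scheme may differ:

```python
import numpy as np

def dynamic_weighted_euclidean(a, b, batch_features):
    """Euclidean distance with per-dimension weights derived from the batch.

    Dimensions with higher standard deviation across the batch are
    assumed to be more discriminative and receive larger weights;
    dimensions that are constant across the batch are ignored.
    """
    w = batch_features.std(axis=0)       # per-dimension spread in this batch
    w = w / (w.sum() + 1e-12)            # normalise weights to sum to 1
    return float(np.sqrt(np.sum(w * (a - b) ** 2)))
```

With this weighting, a feature dimension that never varies within the batch contributes nothing to the distance, so the comparison concentrates on the dimensions that actually separate identities in that batch.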
Despite being often considered less challenging than image-based person re-identification (re-id), video-based person re-id is still appealing as it mimics a more realistic scenario owing to the availability of pedestrian sequences from surveillance cameras. In order to exploit the temporal information provided, a number of feature extraction methods have been proposed. Although the features could be equally learned at a significantly higher computational cost, the scarce nature of labelled re-id datasets encourages the development of robust hand-crafted feature representations as an efficient alternative, especially when novel distance metrics or multi-shot ranking algorithms are to be validated. This paper presents a novel hand-crafted feature representation for video-based person re-id based on a 3-dimensional hierarchical Gaussian descriptor. Compared to similar approaches, the proposed descriptor (i) does not require any walking cycle extraction, hence avoiding the complexity of this task, (ii) can be easily fed into off-the-shelf learned distance metrics, and (iii) consistently achieves superior performance regardless of the matching method adopted. The performance of the proposed method was validated on the PRID2011 and iLIDS-VID datasets, outperforming similar methods on both benchmarks.
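A simplified sketch of the hierarchical (two-level) Gaussian pooling idea behind such descriptors. This is an assumption-laden illustration, not the paper's descriptor: per-frame row features are pooled into horizontal strips, each strip is summarised by a Gaussian (mean and covariance), and the strip-level summaries are pooled again into one tracklet-level Gaussian. Descriptors of this family typically embed the Gaussians in a matrix manifold rather than flattening mean and covariance as done here.

```python
import numpy as np

def gaussian_summary(X):
    """Summarise a set of feature vectors (N, D) by mean and flattened covariance."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)  # (D, D) sample covariance
    return np.concatenate([mu, np.atleast_2d(cov).ravel()])

def hierarchical_gaussian_descriptor(frames, n_strips=4):
    """Two-level Gaussian descriptor for a tracklet.

    frames: array of shape (T, H, D) holding a D-dimensional feature
    per image row, for T frames of the tracklet.
    Level 1: pool rows within each horizontal strip, across all frames.
    Level 2: pool the strip-level summaries into one tracklet summary.
    """
    T, H, D = frames.shape
    bounds = np.linspace(0, H, n_strips + 1).astype(int)
    strip_descs = []
    for s in range(n_strips):
        rows = frames[:, bounds[s]:bounds[s + 1], :].reshape(-1, D)
        strip_descs.append(gaussian_summary(rows))
    return gaussian_summary(np.stack(strip_descs))
```

For D-dimensional row features, each level-1 summary has length D + D^2, and the final descriptor has length (D + D^2) + (D + D^2)^2, which is why practical descriptors apply dimensionality reduction or structured embeddings instead of naive flattening.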