Complex-valued conjugate-symmetric Hadamard transforms (C-CSHTs) are variants of complex Hadamard transforms and have found applications in signal processing. In addition, their real-valued counterparts (R-CSHTs) perform comparably with Hadamard transforms (HTs) despite their lower computational complexity. Closed-form factorizations of C-CSHTs and R-CSHTs have recently been proposed to make their computation more efficient; however, there is still room for more effective and general factorizations. This paper presents a simple closed-form complete factorization of CSHTs in which one family is derived from the factorization of the other. The proposed factorization applies to both C- and R-CSHTs as a single factorization and provides several benefits: 1) it reduces the total implementation cost of both C-CSHTs and R-CSHTs; 2) the generalized CSHT factorization significantly reduces computational cost; 3) it enables memory-efficient local orientation detection in images; 4) it yields a fast direction-aware transform; 5) it clarifies that C- and R-CSHTs are closely related to common block transforms, such as the discrete Fourier transform (DFT), the binDCT, and the HT; and 6) it achieves a new integer complex-valued transform that approximates the DFT better than the original C-CSHT. Image orientation estimation and image coding performance of the proposed CSHTs were evaluated through examples of practical applications based on the proposed factorization.
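The abstract does not reproduce the factorization itself. As background, the classical butterfly factorization of the (real, Sylvester-ordered) Hadamard transform, which closed-form CSHT factorizations generalize, can be sketched as follows; this is a minimal illustration of the fast-transform idea, not the paper's proposed factorization:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform via log2(n) butterfly stages.

    Each stage applies 2x2 butterflies (a, b) -> (a + b, a - b),
    realizing the Kronecker (Sylvester) factorization of the
    Hadamard matrix with O(n log n) additions instead of O(n^2).
    """
    x = np.asarray(x, dtype=float).copy()
    n = x.size  # n must be a power of two
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

y = fwht([1, 0, 1, 0])  # -> [2., 2., 0., 0.]
```

The same staged structure underlies closed-form CSHT factorizations, where the butterflies are replaced by small complex- or real-valued building blocks.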
The types of sound events that occur in a given situation are limited, and some sound events are likely to co-occur, for instance, "dishes" and "glass jingling." In this paper, we propose a sound event detection technique that uses graph Laplacian regularization to take sound event co-occurrence into account. In the proposed method, sound event occurrences are represented as a graph whose nodes indicate the frequency of event occurrence and whose edges indicate the co-occurrence of sound events. This graph representation is then used for sound event modeling, which is optimized under an objective function with a regularization term that reflects the graph structure. Experimental results obtained using the TUT Sound Events 2016 development, TUT Sound Events 2017 development, and TUT Acoustic Scenes 2016 development datasets indicate that the proposed method improves sound event detection performance by 7.9 percentage points over the conventional CNN-BiGRU-based method in terms of the segment-based F1-score. Moreover, the results show that the proposed method detects co-occurring sound events more accurately than the conventional method.
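A graph Laplacian regularizer of the kind described above can be sketched as follows. The event labels, co-occurrence values, and parameter shapes here are hypothetical, and only the regularization term is shown, not the full CNN-BiGRU training objective:

```python
import numpy as np

# Toy co-occurrence graph over 3 sound event classes (hypothetical labels):
# 0: "dishes", 1: "glass jingling", 2: "bird singing".
# A[i, j] encodes how strongly events i and j co-occur.
A = np.array([[0.0, 0.8, 0.1],
              [0.8, 0.0, 0.0],
              [0.1, 0.0, 0.0]])
D = np.diag(A.sum(axis=1))  # degree matrix
L = D - A                   # unnormalized graph Laplacian

# W: one column of model parameters per event class
# (e.g. output-layer weights of a detection network).
W = np.random.default_rng(0).normal(size=(5, 3))

# Laplacian regularizer:
#   tr(W L W^T) = 1/2 * sum_{i,j} A[i,j] * ||w_i - w_j||^2,
# which pulls the parameters of frequently co-occurring events
# toward each other when added to the detection loss.
reg = np.trace(W @ L @ W.T)
```

In training, `reg` would be weighted by a hyperparameter and added to the detection loss, so that classes connected by strong co-occurrence edges share similar representations.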