Singing voice detection, or vocal detection, is a classification task that determines whether a given audio segment contains singing voice. It plays an important role in vocal-related music information retrieval tasks, such as singer identification. Although humans can easily distinguish singing from non-singing parts, this remains difficult for machines. Most existing methods rely on hand-crafted audio features paired with classifiers, and thus on the experience of the algorithm designer. In recent years, deep learning has been widely applied to machine listening. To extract features that reflect the audio content and capture the vocal context in the time domain, this study adopts a long-term recurrent convolutional network (LRCN) for vocal detection: the convolutional layers perform feature extraction, while the long short-term memory (LSTM) layers learn temporal relationships. Preprocessing, which separates the singing voice from the accompaniment, and postprocessing, which applies time-domain smoothing, are combined with the LRCN to form a complete system. Experiments on five public datasets investigated how feature fusion, frame size, and block size affect LRCN temporal-relationship learning, as well as the effects of pre- and postprocessing on performance; the results confirm that the proposed singing voice detection algorithm reaches the state of the art on these public datasets.
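The abstract does not specify how the time-domain smoothing postprocessing is implemented. A minimal sketch, assuming it amounts to a sliding-window majority vote over the per-frame binary vocal/non-vocal decisions produced by the LRCN (the paper's actual smoothing method may differ):

```python
def smooth_predictions(frames, window=5):
    """Smooth per-frame vocal (1) / non-vocal (0) decisions with a
    sliding majority vote over an odd-sized window. Isolated
    single-frame flips inside a longer run are removed, which is the
    usual goal of time-domain smoothing."""
    assert window % 2 == 1, "window must be odd"
    half = window // 2
    smoothed = []
    for i in range(len(frames)):
        lo, hi = max(0, i - half), min(len(frames), i + half + 1)
        votes = frames[lo:hi]
        # majority vote; ties resolve toward "vocal"
        smoothed.append(1 if 2 * sum(votes) >= len(votes) else 0)
    return smoothed

# A spurious single-frame detection is suppressed:
# smooth_predictions([0, 0, 1, 0, 0], window=3) -> [0, 0, 0, 0, 0]
```

A median filter over the frame-wise posterior probabilities, applied before thresholding, is a common alternative and behaves identically for binary inputs.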
We propose a new framework for automatic image annotation (AIA) of regions through segmentation-based semantic analysis and discriminative classification. Given a test image, it is first segmented by the proposed texture-enhanced JSEG algorithm. The resulting regions are then represented by an extended bag-of-words model, in which a feature vector, based on a visual lexicon whose vocabulary consists of single visual words and co-occurrences of multiple visual words, is constructed to represent the region content. Finally, a concept classifier learned by a maximal figure-of-merit algorithm is used to predict the region labels. These models are discriminatively trained from image regions with multiple associations between regions and concepts. Experiments on a subset of the Corel 5K dataset show that the proposed approach to region AIA achieves more accurate annotations than several state-of-the-art algorithms.
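The extended bag-of-words representation can be illustrated with a toy sketch. Here the vocabulary is augmented with unordered pairs of distinct visual words that co-occur within the same region; the pair-indexing scheme and the binary treatment of pair counts are assumptions for illustration, not the paper's exact construction:

```python
from itertools import combinations

def extended_bow(region_words, vocab):
    """Extended bag-of-words feature for one region: counts of single
    visual words from the vocabulary, plus indicator entries for
    unordered pairs of distinct words co-occurring in the region."""
    feat = {w: 0 for w in vocab}
    for w in region_words:
        feat[w] += 1
    # add one indicator per co-occurring pair, keyed by sorted tuple
    present = sorted(set(region_words))
    for a, b in combinations(present, 2):
        feat[(a, b)] = 1
    return feat

# Example: a region assigned the words "sky", "sky", "grass"
# gains the co-occurrence entry ("grass", "sky") in addition
# to the individual word counts.
```

In the paper, each vocabulary entry (word or co-occurrence) corresponds to a fixed dimension of the region's feature vector fed to the maximal figure-of-merit classifier; a dictionary is used here only to keep the sketch short.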