Typically, prostate evaluation is performed using different imaging sequences of magnetic resonance imaging. Dynamic contrast enhancement, one such scanning modality, allows spotting the higher vascular permeability and density caused by malignant tissue. The authors of this paper investigate the ability to identify malignant prostate regions using functional data analysis and standard machine learning techniques. The dynamic contrast enhanced images of the prostate are divided into regions, and time-signal intensity curves are calculated for each region. Two classification approaches, functional k-Nearest Neighbors and the Support Vector Machine, are used to model signal curve behavior on a temporal variation matrix and on a timestamp-based prostate region division of the image data. Preliminary research shows that both the functional data analysis and machine learning classification methods are able to identify the highest-saturation timestamp that gives the best tissue classification results on the timestamp-based dynamic contrast enhanced region map obtained by the Simple Linear Iterative Clustering algorithm. Cancer region classification results are better when the dynamic contrast enhanced images are subdivided into regions at each timestamp than when a temporal variation matrix is used.
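The functional k-Nearest Neighbors idea described above can be sketched as follows: each region is represented by its sampled time-signal intensity curve, and a query curve is labeled by the majority vote of its k nearest training curves under an approximate L2 distance. This is a minimal illustrative sketch, not the authors' implementation; all curve shapes, labels, and function names are hypothetical.

```python
# Minimal sketch of functional k-Nearest Neighbors on sampled
# time-signal intensity curves (all names and data are illustrative).
from math import sqrt
from collections import Counter

def curve_distance(a, b):
    """Approximate L2 distance between two equally sampled intensity curves."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def fknn_classify(train, query, k=3):
    """train: list of (curve, label); returns the majority label among
    the k training curves closest to the query curve."""
    neighbors = sorted(train, key=lambda cl: curve_distance(cl[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Synthetic curves: malignant tissue typically shows fast early enhancement,
# benign tissue a slower, steadier uptake (illustrative shapes only).
malignant = [[0, 80, 95, 90, 85], [0, 75, 92, 88, 80]]
benign    = [[0, 20, 40, 55, 65], [0, 25, 45, 60, 70]]
train = [(c, "malignant") for c in malignant] + [(c, "benign") for c in benign]

print(fknn_classify(train, [0, 78, 93, 89, 82]))  # a fast-enhancing query curve
```

In the paper's setting the training curves would come from annotated prostate regions, and the distance would be computed over the full dynamic contrast enhanced time series rather than five synthetic samples.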
In this research, a study of cross-linguistic speech emotion recognition is performed. For this purpose, emotional data in different languages (English, Lithuanian, German, Spanish, Serbian, and Polish) are collected, resulting in a cross-linguistic speech emotion dataset of more than 10,000 emotional utterances. Despite the bi-modal character of the databases gathered, our focus is on the acoustic representation only. The assumption is that the speech audio signal carries sufficient emotional information to detect and retrieve it. Several two-dimensional acoustic feature spaces, such as cochleagrams, spectrograms, mel-cepstrograms, and a fractal dimension-based space, are employed as representations of speech emotional features. A convolutional neural network (CNN) is used as a classifier. The results show the superiority of cochleagrams over the other feature spaces utilized. In the CNN-based speaker-independent cross-linguistic speech emotion recognition (SER) experiment, an accuracy of over 90% is achieved, which is close to the monolingual case of SER.
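The two-dimensional feature spaces mentioned above all start from a short-time spectral analysis of the waveform. As a rough sketch of how such a time-frequency image is obtained before being fed to a CNN, the following computes a plain magnitude spectrogram with NumPy; the frame length, hop size, and test signal are arbitrary assumptions, not the paper's settings, and cochleagrams or mel-cepstrograms would apply an additional filter bank on top of this.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via a windowed short-time FFT (minimal sketch).
    Returns an array of shape (freq_bins, time_frames)."""
    frames = [signal[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T

# Synthetic 1-second "utterance" at 8 kHz: a 440 Hz tone plus mild noise.
sr = 8000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(sr)

spec = spectrogram(sig)
print(spec.shape)  # (129, 61): 256-point rfft bins by 61 hops over 8000 samples
```

A CNN classifier then treats arrays like `spec` (or their cochleagram/mel counterparts) as single-channel images, which is what makes two-dimensional acoustic feature spaces a natural fit for convolutional architectures.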