This paper presents a new module for heart sound segmentation based on the S-transform. The segmentation process divides the PhonoCardioGram (PCG) signal into four parts: S1 (first heart sound), systole, S2 (second heart sound) and diastole. It can be considered one of the most important phases in the automatic analysis of PCG signals. The proposed segmentation module comprises three main blocks: localization of heart sounds, boundary detection of the localized heart sounds, and a classification block to distinguish between S1 and S2.
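As a rough illustration of the localization block, the sketch below computes a discrete S-transform of a toy PCG-like signal in which two tone bursts stand in for S1 and S2, then localizes them from the time-frequency energy. This is a minimal sketch under assumed parameters (signal length, burst positions, tone frequency); it is not the paper's implementation.

```python
import numpy as np

def stockwell(sig):
    """Discrete S-transform: a time-frequency map whose Gaussian window
    width scales with frequency (wider in time at low frequencies)."""
    N = len(sig)
    X = np.fft.fft(sig)
    S = np.zeros((N // 2, N), dtype=complex)
    S[0] = sig.mean()                             # zero-frequency row
    m = ((np.arange(N) + N // 2) % N) - N // 2    # wrapped frequency index
    for n in range(1, N // 2):
        win = np.exp(-2.0 * np.pi**2 * m**2 / n**2)
        S[n] = np.fft.ifft(np.roll(X, -n) * win)
    return S

# Toy PCG-like signal: two tone bursts standing in for S1 and S2
# (positions 60 and 180 are illustrative assumptions).
N = 256
t = np.arange(N)
burst = lambda c: np.exp(-((t - c) / 8.0) ** 2) * np.sin(2 * np.pi * 20 * t / N)
sig = burst(60) + 0.8 * burst(180)

# Localization: total time-frequency energy peaks at the heart sounds.
energy = (np.abs(stockwell(sig)) ** 2).sum(axis=0)
s1 = int(np.argmax(energy[: N // 2]))
s2 = N // 2 + int(np.argmax(energy[N // 2 :]))
```

Boundary detection and S1/S2 classification would then operate on these localized segments.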
In this paper, we propose a new approach to the Positive Unlabeled (PU) learning challenge for image classification. It relies on the ability of GANs to generate fake image samples whose distribution approaches the distribution of the negative samples contained in the available unlabeled dataset, while remaining different from the distribution of the unlabeled positive samples. We then train a CNN classifier on the positive samples and the generated fake samples, as would be done with a classic positive/negative dataset. Tests performed on three different image classification datasets show that the system is stable up to an acceptable fraction of positive samples in the unlabeled dataset. Although very different in design, this method outperforms state-of-the-art PU learning on the RGB dataset CIFAR-10.
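The two-stage idea can be sketched on 1-D toy data: produce "fake negatives" that mimic the negative part of the unlabeled distribution, then train an ordinary classifier on positives vs. fakes. Here a crude density-based heuristic stands in for the GAN, and all data parameters and thresholds are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: positives around +2; the unlabeled set mixes hidden
# positives (+2) and hidden negatives (-2).
pos = rng.normal(2.0, 0.5, 500)
unlabeled = np.concatenate([rng.normal(2.0, 0.5, 300),
                            rng.normal(-2.0, 0.5, 700)])

# Stand-in for the GAN: model the unlabeled points that lie far from the
# positive distribution, then sample "fake negatives" from that model.
far = unlabeled[np.abs(unlabeled - pos.mean()) > 2 * pos.std()]
fakes = rng.normal(far.mean(), far.std(), 500)

# Train a plain logistic classifier on positives vs. generated fakes,
# exactly as one would with a fully labeled positive/negative dataset.
x = np.concatenate([pos, fakes])
y = np.concatenate([np.ones_like(pos), np.zeros_like(fakes)])
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))      # sigmoid predictions
    w -= 0.1 * np.mean((p - y) * x)             # gradient step on weight
    b -= 0.1 * np.mean(p - y)                   # gradient step on bias

predict = lambda s: 1.0 / (1.0 + np.exp(-(w * s + b))) > 0.5
```

In the actual approach the fake-negative generator is adversarially trained and the classifier is a CNN, but the training pipeline has this same two-stage shape.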
Noisy-label learning methods deal with training datasets containing corrupted labels. However, the prediction performance of existing methods on small datasets still leaves room for improvement. With this objective, in this paper we present a GAN-based method that generates a clean, augmented training dataset from a small, noisily labeled dataset. The proposed approach combines noisy-label learning principles with state-of-the-art GAN techniques. We demonstrate the usefulness of the proposed approach through an empirical study on simple and complex image datasets.
3-D optical fluorescence microscopy has become an efficient tool for the volumetric investigation of living biological samples. The 3-D data can be acquired by optical sectioning microscopy, performed by axial stepping of the object relative to the objective. For any instrument, each recorded image can be described by a convolution equation between the original object and the Point Spread Function (PSF) of the acquisition system. To assess performance and ensure data reproducibility, as for any 3-D quantitative analysis, system identification is mandatory. The PSF characterizes the properties of the image acquisition system; it can be computed or acquired experimentally. Statistical tools and Zernike moments are shown to be appropriate and complementary for describing a 3-D system PSF and for quantifying the variation of the PSF as a function of the optical parameters. Some critical experimental parameters can be identified with these tools, which helps biologists define an acquisition protocol that optimizes the use of the system. Reducing out-of-focus light is the central task of 3-D microscopy; it is carried out computationally by a deconvolution process. Pre-filtering the images improves the stability of the deconvolution results, making them less dependent on the regularization parameter; this helps biologists use the restoration process.
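The imaging model and the restoration step can be illustrated in 1-D: the recorded image is the convolution of the object with the PSF, and deconvolution inverts that blur. The sketch below uses Richardson-Lucy iteration as one classical restoration scheme; the PSF shape, sizes, and iteration count are illustrative assumptions, and the abstract's specific regularized, pre-filtered pipeline is not reproduced here.

```python
import numpy as np

# 1-D illustration of the imaging model: the recorded image is the
# convolution of the true object with the system PSF.
obj = np.zeros(64)
obj[20], obj[40] = 1.0, 0.6                    # two point sources
x = np.arange(-7, 8)
psf = np.exp(-x**2 / 8.0)
psf /= psf.sum()                               # assumed Gaussian PSF
img = np.convolve(obj, psf, mode="same")       # blurred acquisition

# Richardson-Lucy deconvolution: multiplicative updates that redistribute
# the blurred intensity back toward the point sources.
est = np.full_like(img, img.mean())
for _ in range(200):
    ratio = img / np.maximum(np.convolve(est, psf, mode="same"), 1e-12)
    est *= np.convolve(ratio, psf[::-1], mode="same")
```

After the iterations, `est` is sharply peaked near the two original point sources, whereas `img` spreads their energy over the PSF support.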
Performing specific object detection and recognition at the imaging sensor level raises many technical and scientific challenges. Today, state-of-the-art detection performance is obtained with deep Convolutional Neural Network (CNN) models. However, reaching the expected CNN behavior in terms of sensitivity and specificity requires mastering the training dataset. In this paper, we explore a new way of acquiring images of military vehicles under the sanitized and controlled conditions of the laboratory in order to train a CNN to recognize the same visual signature on real vehicles in realistic outdoor situations. By combining sanitized images, counter-examples and different data augmentation techniques, our investigations aim at reducing the need for complex outdoor image acquisition. First results demonstrate the feasibility of detecting and classifying military vehicles in real situations by exploiting only signatures from miniature models.