To set the values of the hyperparameters of a support vector machine (SVM), the method of choice is cross-validation. Several upper bounds on the leave-one-out error of the pattern recognition SVM have been derived. One of the most popular is the radius-margin bound. It applies to the hard margin machine, and, by extension, to the 2-norm SVM. In this article, we introduce the first quadratic loss multi-class SVM: the M-SVM 2. It can be seen as a direct extension of the 2-norm SVM to the multi-class case, which we establish by deriving the corresponding generalized radius-margin bound.
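For context, the classical radius-margin bound mentioned above can be stated as follows for the hard-margin machine. With $m$ training examples, margin $\gamma$, and $R$ the radius of the smallest sphere enclosing the data in feature space, the leave-one-out error $L_m$ satisfies (a standard statement of the bound, not the generalized multi-class version derived in the article):

```latex
L_m \;\le\; \frac{1}{m}\,\frac{R^2}{\gamma^2} \;=\; \frac{R^2\,\lVert w \rVert^2}{m}
```

Because both $R$ and $\gamma$ are differentiable functions of the kernel hyperparameters, the bound can be minimized directly, e.g. by gradient descent, instead of running a full cross-validation.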
Hidden Markov models (HMMs) are state-space models widely applied in time series analysis. Well-known Bayesian state estimation methods designed for HMMs, such as the Baum-Welch algorithm and the Viterbi algorithm, allow state estimation with a complexity linear in the sample size. We consider recent extensions of HMMs, specifically the pairwise Markov models (PMMs) and the triplet Markov models (TMMs), in which the Baum-Welch algorithm also has a complexity linear in the sample size. However, the state process is not necessarily Markovian in PMMs and TMMs, which offers considerable modeling flexibility. This study explores the potential performance gains achievable when PMMs and TMMs, rather than HMMs, are used to describe the state-space system. This is done through extensive comparative Monte-Carlo experiments among HMMs, PMMs, and TMMs in the case of discrete state-space models. A simple comparative example of the use of PMMs and HMMs to predict market direction is also given. These experiments confirm the interest of PMMs and TMMs in time series modeling: specifically, the classification rate can be improved by nearly fifty percent. These findings suggest that PMMs and TMMs may be more suitable than classic HMMs for real-world applications.
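As a minimal illustration of why state estimation in HMMs scales linearly with the sample size, here is a sketch of the Viterbi algorithm for a discrete HMM. This is generic illustrative code, not the authors' implementation, and the parameter names (`pi`, `A`, `B`) are our own conventions:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden state sequence for a discrete HMM.

    obs : sequence of observation indices, length T
    pi  : (K,) initial state distribution
    A   : (K, K) transition matrix, A[i, j] = P(z_t = j | z_{t-1} = i)
    B   : (K, M) emission matrix,  B[j, o] = P(x_t = o | z_t = j)

    Runs in O(T * K^2): linear in the sample size T.
    """
    T, K = len(obs), len(pi)
    logA, logB = np.log(A), np.log(B)
    # Best log-score of any path ending in each state at t = 0.
    delta = np.log(pi) + logB[:, obs[0]]
    psi = np.zeros((T, K), dtype=int)  # back-pointers
    for t in range(1, T):
        scores = delta[:, None] + logA  # scores[i, j]: arrive in j from i
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    # Backtrack from the best final state.
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
```

The Baum-Welch algorithm has the same linear-in-T structure, replacing the max/argmax recursion with sums (the forward-backward recursions).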
We evaluated the accuracy of the proposed HMT-based framework for PET/CT image segmentation. The proposed method achieved good accuracy, especially with pre-processing in the contourlet domain.
The latest developments in the theory of Markov models and their corresponding computational techniques have opened new avenues for image and signal modeling. In particular, the use of the Dempster-Shafer theory of evidence within Markov models has provided answers to several challenging problems that conventional hidden Markov models cannot handle. These problems mainly concern two situations: multisensor data, for which Dempster-Shafer fusion is unworkable in the conventional setting; and nonstationary data, due to the mismatch between the estimated stationary model and the actual data. For each of the two situations, the Dempster-Shafer combination rule has been applied, thanks to the triplet Markov models' formalism, to overcome the drawbacks of the standard Bayesian models. So far, however, both situations have not been considered at the same time. In this article, we propose an evidential Markov chain that uses the Dempster-Shafer combination rule to bring the effect of contextual information into the segmentation of multisensor nonstationary data. We also provide the Expectation-Maximization parameter estimation and maximum posterior marginal (MPM) restoration procedures. To validate the proposed model, experiments are conducted on synthetic multisensor data and noisy images. The segmentation results are then compared with those obtained by conventional approaches to demonstrate the efficiency of the present model.
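The Dempster-Shafer combination rule referred to above can be sketched in a few lines. This is a generic implementation of the rule over discrete focal sets, not the evidential Markov chain of the article; the mass-function representation (dicts keyed by `frozenset`) is our own choice:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2 : dict mapping frozenset (focal element) -> mass, each summing to 1.
    Returns the combined mass function, normalized by the non-conflicting mass.
    """
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass assigned to the empty set
    norm = 1.0 - conflict
    if norm <= 0.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: w / norm for s, w in combined.items()}
```

For example, combining m1({A}) = 0.6, m1({A,B}) = 0.4 with m2({B}) = 0.5, m2({A,B}) = 0.5 yields masses 3/7, 2/7, and 2/7 on {A}, {B}, and {A,B} after normalizing out the conflict 0.3.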
Hidden Markov models are very robust and have been used in a wide range of application fields; however, they show some limitations for data restoration in certain complex situations, including the case where the data to be recovered are nonstationary. The recent triplet Markov models have overcome this difficulty thanks to their rich formalism, which allows considering more complex data structures while keeping the computational complexity of the different algorithms linear in the data size. In this letter, we propose a new triplet Markov chain that allows the unsupervised restoration of random discrete data hidden by switching noise distributions. We also provide the corresponding parameter estimation and maximum posterior marginal (MPM) restoration algorithms. The new model is validated through experiments conducted on synthetic data and on real images, whose results show its interest with respect to the standard hidden Markov chain.
Knowledge of vertebra location, shape, and orientation is crucial in many medical applications such as orthopedics or interventional procedures. Computed tomography (CT) offers high contrast between bone and soft tissues, but automatic vertebra segmentation remains difficult. Indeed, the wide range of shapes, the alterations due to aging and degenerative joint disease, and the variety of pathological cases encountered in an aging population make automatic segmentation challenging. Besides, daily practice implies a need for affordable computation time. This paper presents a new automated vertebra segmentation method (using a first bounding box for initialization) for 3D CT data which tackles these problems. The method is based on two consecutive steps. The first is a new coarse-to-fine method that efficiently reduces the data amount to obtain a coarse shape of the vertebra. The second step consists in a hidden Markov chain (HMC) segmentation using a specific volume transformation within a Bayesian framework. Our method does not introduce any prior on the expected shape of the vertebra within the bounding box and thus deals with the most frequent pathological cases encountered in daily practice. We evaluate this method on a set of standard lumbar, thoracic, and cervical vertebrae, on a public dataset, on pathological cases, and in a simple integration example. Quantitative and qualitative results show that our method is robust to changes in shape and luminance and provides correct segmentation of pathological cases.
The circum-galactic medium consists of gas orbiting around galaxies, whose faintness prevents complete and easy detection. Hyperspectral imaging is a powerful tool for detecting such patterns. Nevertheless, detection in hyperspectral datacubes faces various problems, including the need for well-fitted signal and noise descriptions to ensure further discrimination. A specificity of astronomical images is that they deal with faint and very noisy signals. In this paper, we introduce a new constrained generalized likelihood ratio test adapted to the problem and a compound test that exploits most of the available information. We also investigate the use of both spatial information and multiple observations of a single scene to enhance robustness. Numerical experiments on synthetic data are performed to quantify the gain of the different approaches. Finally, results on real hyperspectral astronomical data are presented, which may provide the first observational maps of the circum-galactic medium around faint and distant galaxies.
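To make the generalized likelihood ratio test (GLRT) idea concrete, here is a textbook sketch for the simplest related detection problem: a signal of known shape but unknown amplitude in white Gaussian noise. This is not the constrained or compound test of the paper, only the standard building block it extends; the function name and signature are our own:

```python
import numpy as np

def glrt_known_shape(x, s, sigma2):
    """GLRT statistic for H1: x = a*s + n vs H0: x = n,
    with unknown amplitude a and white Gaussian noise of known variance sigma2.

    Maximizing the likelihood over a gives a_hat = (s @ x) / (s @ s);
    substituting it into the log-likelihood ratio yields the statistic below,
    which is chi-squared with 1 degree of freedom under H0.
    """
    return (s @ x) ** 2 / (sigma2 * (s @ s))
```

The statistic is compared to a threshold chosen from the chi-squared distribution to fix the false-alarm rate; the paper's constrained variant additionally restricts the amplitude estimate (e.g. to be non-negative) before forming the ratio.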