Objective: Support Vector Machines (SVMs) have become a gold standard for accurate classification in brain-computer interfaces (BCIs). However, the choice of the most appropriate classifier for a particular application depends on several characteristics beyond decoding accuracy. Here we investigate the implementation of Hidden Markov Models (HMMs) for online BCIs and discuss strategies to improve their performance. Approach: We compare the SVM, serving as a reference, and HMMs for classifying discrete finger movements obtained from the electrocorticograms of four subjects performing a finger-tapping experiment. The classifier decisions are based on a subset of low-frequency time-domain and high-gamma oscillation features. Main results: We show that differences in decoding performance between the two approaches stem mainly from how features are extracted and selected, and depend less on the classifier itself. An additional gain in HMM performance of up to 6% was obtained by introducing model constraints. Comparable accuracies of up to 90% were achieved with both SVM and HMM, with the high-gamma cortical response providing the most important decoding information for both techniques. Significance: We discuss technical HMM characteristics and adaptations in the context of the presented data as well as for general BCI applications. Our findings suggest that HMMs and their characteristics are promising for efficient online brain-computer interfaces.
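The HMM classification scheme summarised above can be illustrated with a minimal sketch: one HMM per movement class is evaluated on a feature sequence, and the class whose model yields the highest likelihood wins. The two-state models, the quantised observation symbols, and all parameter values below are purely illustrative assumptions, not the models or features used in the paper.

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM
    (forward algorithm, unscaled; fine for short toy sequences)."""
    n_states = len(start)
    # Initialise with the first observation.
    alpha = [start[s] * emit[s][obs[0]] for s in range(n_states)]
    # Recurse over the remaining observations.
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(n_states)) * emit[s][o]
                 for s in range(n_states)]
    return math.log(sum(alpha))

# Two toy two-state models ("rest" -> "movement") with mirrored emissions.
model_a = dict(start=[0.9, 0.1],
               trans=[[0.7, 0.3], [0.2, 0.8]],
               emit=[[0.8, 0.2], [0.3, 0.7]])
model_b = dict(start=[0.9, 0.1],
               trans=[[0.7, 0.3], [0.2, 0.8]],
               emit=[[0.2, 0.8], [0.7, 0.3]])

sequence = [0, 0, 1, 1, 1]  # quantised feature symbols (hypothetical)
scores = {name: forward_log_likelihood(sequence, **m)
          for name, m in [("finger_a", model_a), ("finger_b", model_b)]}
decision = max(scores, key=scores.get)  # class with the highest likelihood
```

In practice one would train continuous-emission HMMs on the selected ECoG features (e.g. with an EM procedure) rather than hand-set discrete tables; the decision rule, however, stays the same.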
For C-arm-based cone-beam CT (CBCT), the algorithm presented here makes model-based perfusion reconstruction feasible without its associated high computational cost. It thus potentially brings the benefits of model-based perfusion imaging within reach of practical application. This study is a proof of concept.
The polychromatic X-ray spectrum and the energy-dependent attenuation coefficients of materials cause beam hardening artifacts in CT reconstructed volumes. These artifacts appear as cupping and streak artifacts depending on the material composition and the geometry of the imaged object. CT scanners employ projection linearization to transform polychromatic attenuation into monochromatic attenuation using a polynomial model. The polynomial coefficients are computed during calibration or from prior information such as the X-ray spectrum and the attenuation properties of the materials. In this paper, we present a novel method to correct beam hardening artifacts by enforcing cone-beam consistency conditions on the projection data. We use consistency conditions derived from Grangeat's fundamental relation between cone-beam projection data and the 3-D Radon transform. The optimal polynomial coefficients for artifact reduction are estimated iteratively by minimizing the inconsistency of a set of projection pairs. Results from simulated and real datasets show a visible reduction of artifacts. Our studies also demonstrate the robustness of the algorithm when the projections are perturbed by other physical measurement and geometrical errors. The proposed method requires neither calibration nor prior information such as the X-ray spectrum, the attenuation properties of the materials, or the detector response. The algorithm can be used for beam hardening correction in clinical, pre-clinical, and industrial CT systems.
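The projection-linearization step described above can be sketched as a polynomial that maps each measured polychromatic attenuation value to an equivalent monochromatic one. The coefficient values below are illustrative placeholders; in the paper they would be estimated by minimizing the Grangeat inconsistency over projection pairs, a step not reproduced here.

```python
def linearize(p, coeffs):
    """Apply a polynomial beam-hardening correction q = c1*p + c2*p^2 + ...
    to a single attenuation line integral (no constant term: zero maps to zero)."""
    return sum(c * p ** k for k, c in enumerate(coeffs, start=1))

# Toy coefficients: identity plus a small quadratic term compensating the
# sub-linear growth of polychromatic attenuation with material path length.
coeffs = [1.0, 0.08]

projection = [0.0, 0.5, 1.0, 2.0]              # measured attenuation values
corrected = [linearize(p, coeffs) for p in projection]
```

Applying the same polynomial to every detector pixel of every view yields the corrected projection stack that is then fed to the reconstruction.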
Model-based reconstruction employing the time separation technique (TST) was found to improve dynamic perfusion imaging of the liver using C-arm cone-beam computed tomography (CBCT). To apply TST using prior knowledge extracted from CT perfusion data, the liver must be accurately segmented from the CT scans. Reconstructions of primary and model-based CBCT data also need to be segmented for proper visualisation and interpretation of perfusion maps. This research proposes Turbolift learning, which trains a modified version of the multi-scale Attention UNet serially on different liver segmentation tasks, in the order CT, CBCT, CBCT TST, so that each training acts as a pre-training stage for the subsequent one, addressing the problem of the limited number of datasets available for training. For the final task of liver segmentation from CBCT TST, the proposed method achieved overall Dice scores of 0.874±0.031 and 0.905±0.007 in 6-fold and 4-fold cross-validation experiments, respectively, a statistically significant improvement over the model trained only for that task. Experiments revealed that Turbolift not only improves the overall performance of the model but also makes it robust against artefacts originating from embolisation materials and against truncation artefacts. Additionally, in-depth analyses confirmed the chosen order of the segmentation tasks. This paper shows the potential of segmenting the liver from CT, CBCT, and CBCT TST while learning from the limited available training data, which may in the future support the visualisation and evaluation of perfusion maps for the treatment evaluation of liver diseases.
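The Dice scores reported above measure the overlap between predicted and ground-truth liver masks: Dice = 2|A ∩ B| / (|A| + |B|). A minimal sketch on toy binary masks (not the paper's data):

```python
def dice(pred, truth):
    """Dice coefficient between two flat binary masks (1 = liver, 0 = background)."""
    inter = sum(p and t for p, t in zip(pred, truth))  # |A intersect B|
    total = sum(pred) + sum(truth)                     # |A| + |B|
    return 2.0 * inter / total if total else 1.0       # empty masks: define as 1

pred  = [1, 1, 0, 1, 0, 0]   # toy prediction
truth = [1, 1, 1, 0, 0, 0]   # toy ground truth
score = dice(pred, truth)    # 2*2 / (3+3)
```

For volumetric segmentations the masks would simply be flattened voxel arrays; per-fold means and standard deviations of this score give figures like the 0.874±0.031 reported in the abstract.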