Summary
The results of treating 143 patients with trigeminal neuralgia with carbamazepine (CBZ) over a 16-year period have been reviewed. The drug was initially effective, with few mild side effects, in 99 patients (69%). Of these, 19 developed resistance later, i.e. between 2 months and 10 years after commencing treatment, and required alternative measures. Of the remaining 80 (56%), the drug was effective in 49 for 1-4 years and in 31 for 5-16 years. Thirty-six patients (25%) failed to respond to CBZ initially and required alternative measures, as did 8 (6%) who were intolerant of the drug. One patient developed CBZ-induced water intoxication with hyponatraemia. Subsequently, hyponatraemia was excluded in 17 patients who had been taking CBZ for between 4 months and 7 years. This study has thus confirmed the efficacy of CBZ in the treatment of trigeminal neuralgia and shown that it may continue to be effective for many years.
Background
Semi-quantification methods are well established in the clinic for assisted reporting of (123I)Ioflupane images. Arguably, these are limited diagnostic tools. Recent research has demonstrated the potential for improved classification performance offered by machine learning algorithms. A direct comparison between methods is required to establish whether a move towards widespread clinical adoption of machine learning algorithms is justified. This study compared three machine learning algorithms with a range of semi-quantification methods, using the Parkinson’s Progression Markers Initiative (PPMI) research database and a locally derived clinical database for validation. Machine learning algorithms were based on support vector machine classifiers with three different sets of features:
- Voxel intensities
- Principal components of image voxel intensities
- Striatal binding ratios from the putamen and caudate
Semi-quantification methods were based on striatal binding ratios (SBRs) from both putamina, with and without consideration of the caudates. Normal limits for the SBRs were defined through four different methods:
- Minimum of age-matched controls
- Mean minus 1/1.5/2 standard deviations from age-matched controls
- Linear regression of normal patient data against age (minus 1/1.5/2 standard errors)
- Selection of the optimum operating point on the receiver operating characteristic curve from normal and abnormal training data
Each machine learning and semi-quantification technique was evaluated with stratified, nested 10-fold cross-validation, repeated 10 times.
Results
The mean accuracy of the semi-quantitative methods for classification of local data into Parkinsonian and non-Parkinsonian groups varied from 0.78 to 0.87, contrasting with 0.89 to 0.95 for classifying PPMI data into healthy controls and Parkinson’s disease groups.
The machine learning algorithms gave mean accuracies between 0.88 and 0.92 for local data and between 0.95 and 0.97 for PPMI data.
Conclusions
Classification performance was lower for the local database than for the research database, for both the semi-quantitative and the machine learning algorithms. However, for both databases, the machine learning methods generated equal or higher mean accuracies (with lower variance) than any of the semi-quantification approaches. The gain in performance from using machine learning algorithms rather than semi-quantification was relatively small and may be insufficient, when considered in isolation, to offer significant advantages in the clinical context.
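As an illustration of the simplest semi-quantification rule described above (the "mean minus k standard deviations" normal limit), the following is a minimal sketch. All SBR values here are synthetic, and the distributions, thresholds and variable names are assumptions for illustration, not data or code from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic striatal binding ratios (SBRs):
# controls centred higher than Parkinsonian cases.
controls = rng.normal(2.5, 0.3, 100)   # age-matched normal SBRs
patients = rng.normal(1.2, 0.3, 100)   # abnormal SBRs

def normal_limit(controls, k):
    """'Mean minus k standard deviations' lower limit for normal SBRs."""
    return controls.mean() - k * controls.std()

def classify(sbr, limit):
    """An SBR below the normal limit is flagged as abnormal."""
    return sbr < limit

for k in (1.0, 1.5, 2.0):
    limit = normal_limit(controls, k)
    sens = classify(patients, limit).mean()     # fraction of patients flagged
    spec = (~classify(controls, limit)).mean()  # fraction of controls passed
    print(f"k={k}: limit={limit:.2f}, sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Widening the limit (larger k) trades sensitivity for specificity, which is why the study evaluated several values of k and an ROC-derived operating point.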
Aims
Pulmonary arterial hypertension (PAH) is a progressive condition with high mortality. Quantitative cardiovascular magnetic resonance (CMR) imaging metrics in PAH target individual cardiac structures and have diagnostic and prognostic utility but are challenging to acquire. The primary aim of this study was to develop and test a tensor-based machine learning approach to holistically identify diagnostic features in PAH using CMR and, secondarily, to visualize and interpret key discriminative features associated with PAH.
Methods and results
Consecutive treatment-naive patients with PAH or no evidence of pulmonary hypertension (PH), undergoing CMR and right heart catheterization within 48 h, were identified from the ASPIRE registry. A tensor-based machine learning approach, multilinear subspace learning, was developed, and its diagnostic accuracy was compared with standard CMR measurements. Two hundred and twenty patients were identified: 150 with PAH and 70 with no PH. The diagnostic accuracy of the approach was high, as assessed by the area under the curve at receiver operating characteristic analysis (P < 0.001): 0.92 for PAH, slightly higher than standard CMR metrics. Moreover, establishing the diagnosis with the approach was less time-consuming, being achieved within 10 s. Learnt features were visualized in feature maps with correspondence to cardiac phases, confirming known and also identifying potentially new diagnostic features in PAH.
Conclusion
A tensor-based machine learning approach has been developed and applied to CMR. High diagnostic accuracy has been shown for PAH diagnosis, and new learnt features with diagnostic potential were visualized.
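Multilinear subspace learning operates directly on the image tensor rather than on a flattened vector of voxels. The sketch below is a minimal MPCA-style illustration, not the authors' implementation: the scan dimensions, projection ranks and random data are all hypothetical, and only a single projection pass is shown:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_projections(tensors, ranks):
    """One MPCA-style pass: top eigenvectors of each summed mode covariance."""
    U = []
    for mode, r in enumerate(ranks):
        C = sum(unfold(T, mode) @ unfold(T, mode).T for T in tensors)
        _, vecs = np.linalg.eigh(C)      # eigenvalues in ascending order
        U.append(vecs[:, -r:])           # keep the top-r eigenvectors
    return U

def project(T, U):
    """Multilinear projection: apply each factor matrix along its mode."""
    for mode, Um in enumerate(U):
        T = np.moveaxis(np.tensordot(Um.T, T, axes=(1, mode)), 0, mode)
    return T

rng = np.random.default_rng(1)
scans = [rng.normal(size=(16, 16, 8)) for _ in range(5)]  # toy image stacks
U = mode_projections(scans, ranks=(4, 4, 2))
features = project(scans[0], U)
print(features.shape)  # a compact feature tensor per scan
```

The appeal of this formulation is that each mode (e.g. in-plane dimensions, cardiac phase) keeps its own small factor matrix, so the learnt features remain mappable back onto the image, which is what makes the feature-map visualization described above possible.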
Background
For (123I)FP-CIT imaging, a number of algorithms have shown high performance in distinguishing normal patient images from those with disease, but none have yet been tested as part of reporting workflows. This study aims to evaluate the impact on reporters’ performance of a computer-aided diagnosis (CADx) tool developed from established machine learning technology. Three experienced (123I)FP-CIT reporters (two radiologists and one clinical scientist) were asked to visually score 155 reconstructed clinical and research images on a 5-point diagnostic confidence scale (read 1). Once completed, the process was repeated (read 2). Immediately after submitting each image score for a second time, the CADx system output was displayed to the reporters alongside the image data. With this information available, the reporters submitted a score for a third time (read 3). Comparisons between reads 1 and 2 provided evidence of intra-operator reliability, and differences between reads 2 and 3 showed the impact of the CADx tool.
Results
The performance of all reporters demonstrated a degree of variability when analysing images through visual analysis alone. However, inclusion of CADx improved consistency between reporters, for both clinical and research data. The introduction of CADx increased the accuracy of the radiologists when reporting (unfamiliar) research images but had less impact on the clinical scientist and caused no significant change in accuracy for the clinical data.
Conclusions
The outcomes of this study indicate the value of CADx as a diagnostic aid in the clinic and encourage future development for more refined incorporation into clinical practice.
Objectives: The aim of this study is to develop a scar detection method for routine computed tomography angiography (CTA) imaging using deep convolutional neural networks (CNNs), which relies solely on anatomical information as input and is compatible with existing clinical workflows.
Background: Identifying cardiac patients with scar tissue is important for assisting diagnosis and guiding interventions. Late gadolinium enhancement (LGE) magnetic resonance imaging (MRI) is the gold standard for scar imaging; however, there are common instances where it is contraindicated. CTA is an alternative imaging modality that has fewer contraindications and is faster than cardiovascular magnetic resonance imaging but is unable to reliably image scar.
Methods: A dataset of LGE MRI (200 patients, 83 with scar) was used to train and validate a CNN to detect ischemic scar slices, using segmentation masks as input to the network. MRIs were segmented to produce 3D left ventricle meshes, which were sampled at points along the short axis to extract anatomical masks, with scar labels from LGE as ground truth. The trained CNN was tested on an independent CTA dataset (25 patients, with ground truth established from paired LGE MRI). Automated segmentation was performed to provide the same input format of anatomical masks for the network. The CNN was compared against manual reading of the CTA dataset by 3 experts.
Results: A cross-validated accuracy of 84.7% (AUC: 0.896) was achieved for detecting scar slices in the left ventricle on the MRI data. The trained network was then tested against the CTA-derived data, with no further training, where it achieved an accuracy of 88.3% (AUC: 0.901). The automated pipeline outperformed the manual reading by clinicians.
Conclusion: Automatic ischemic scar detection can be performed from a routine cardiac CTA, without any scar-specific imaging or contrast agents. This requires only a single acquisition in the cardiac cycle.
In a clinical setting, with near-zero additional cost, scar presence could be detected to triage images, reduce reading times, and guide clinical decision-making.
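The slice-level classification idea (anatomical mask in, scar score out) can be caricatured with a single convolution followed by pooling and a sigmoid. This is a toy forward pass with made-up weights and a random mask, not the trained CNN from the study:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d(img, kernel):
    """Valid 2D convolution via a sliding-window view (no padding)."""
    windows = sliding_window_view(img, kernel.shape)
    return np.einsum("ijkl,kl->ij", windows, kernel)

def scar_slice_score(mask, kernel, w, b):
    """Toy forward pass: conv -> ReLU -> global average pool -> sigmoid."""
    feat = np.maximum(conv2d(mask, kernel), 0).mean()
    return 1.0 / (1.0 + np.exp(-(w * feat + b)))

rng = np.random.default_rng(2)
mask = (rng.random((32, 32)) > 0.5).astype(float)  # stand-in anatomical mask slice
kernel = rng.normal(size=(3, 3))                   # untrained filter weights
score = scar_slice_score(mask, kernel, w=2.0, b=-1.0)
print(score)  # a probability-like scar score for this slice
```

Because the input is a binary anatomical mask rather than image intensities, the same forward pass can be fed masks derived from either MRI or CTA segmentations, which is the property the study exploits when transferring the MRI-trained network to CTA without retraining.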
3D reconstruction and 3D printing of subject-specific anatomy is a promising technology for supporting clinicians in the visualisation of disease progression and planning for surgical intervention. In this context, the 3D model is typically obtained from segmentation of magnetic resonance imaging (MRI), computed tomography (CT) or echocardiography images. Although these modalities allow imaging of the tissues in vivo, assessment of the quality of the reconstruction is limited by the lack of a reference geometry, as the subject-specific anatomy is unknown prior to image acquisition. In this work, an optical method based on 3D digital image correlation (3D-DIC) techniques is used to reconstruct the shape of the surface of an ex vivo porcine heart. This technique requires two digital charge-coupled device (CCD) cameras to provide full-field shape measurements and to generate a standard tessellation language (STL) file of the sample surface. The aim of this work was to quantify the error of 3D-DIC shape measurements using the additive manufacturing process. The limitations of 3D printed object resolution and the discrepancy between the reconstructed surface of the cardiac soft tissue and a 3D printed model of the same surface were evaluated. The results obtained demonstrate the ability of the 3D-DIC technique to reconstruct localised and detailed features on the cardiac surface with sub-millimetre accuracy.
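The discrepancy between an imaged surface and its 3D printed copy is commonly summarised by nearest-neighbour distances between the two point sets. Below is a brute-force sketch with hypothetical point clouds standing in for the STL surfaces; the point counts, units and noise level are illustrative assumptions only:

```python
import numpy as np

def nearest_distances(A, B):
    """For each point in A, the distance to its nearest neighbour in B (brute force)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min(axis=1)

rng = np.random.default_rng(3)
surface = rng.random((200, 3)) * 50.0                   # toy surface points, in mm
printed = surface + rng.normal(0, 0.05, surface.shape)  # printed copy with ~0.05 mm noise
err = nearest_distances(surface, printed)
print(err.mean())  # mean point-to-point discrepancy, in mm
```

For real STL meshes a point-to-surface (rather than point-to-point) distance and a spatial index such as a k-d tree would be used instead of this O(n²) comparison, but the summary statistic is the same.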
Mersilene showed favorable results in wrapping of hydroxyapatite orbital implants.
Machine learning promises much in the field of radiology, both in terms of software that can directly analyse patient data and algorithms that can automatically perform other processes in the reporting pipeline. However, clinical practice remains largely untouched by such technology. This article highlights what we consider to be the major obstacles to widespread clinical adoption of machine learning software, namely: representative data and evidence, regulations, health economics, heterogeneity of the clinical environment, and support and promotion. We argue that these issues are currently so substantial that machine learning will struggle to find acceptance beyond the narrow group of applications where the potential benefits are readily evident. In order that machine learning can fulfil its potential in radiology, a radical new approach is needed, where significant resources are directed at reducing impediments to translation rather than always being focused solely on development of the technology itself.