Diffusion MRI is increasingly used in studies of the brain and other parts of the body for its ability to provide quantitative measures that are sensitive to changes in tissue microstructure. However, inter-scanner and inter-protocol differences are known to induce significant measurement variability, which in turn jeopardises the ability to obtain ‘truly quantitative measures’ and challenges the reliable combination of different datasets. Combining datasets from different scanners and/or acquired at different time points could dramatically increase the statistical power of clinical studies, and facilitate multi-centre research. Even though careful harmonisation of acquisition parameters can reduce variability, inter-protocol differences become almost inevitable with improvements in hardware and sequence design over time, even within a site. In this work, we present a benchmark diffusion MRI database of the same subjects acquired on three distinct scanners with different maximum gradient strengths (40, 80, and 300 mT/m), and with ‘standard’ and ‘state-of-the-art’ protocols, where the latter have higher spatial and angular resolution. The dataset serves as a useful testbed for method development in cross-scanner/cross-protocol diffusion MRI harmonisation and quality enhancement. Using the database, we compare the performance of five different methods for estimating mappings between the scanners and protocols. The results show that cross-scanner harmonisation of single-shell diffusion datasets can reduce the variability between scanners, and highlight the promise and shortcomings of today's data harmonisation techniques.
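As a toy illustration of what "estimating mappings between the scanners" can mean in the simplest case (this is our own minimal sketch, not one of the five methods evaluated in the paper): given paired scans of the same subjects, a per-voxel linear map for a scalar measure such as FA can be fitted by least squares.

```python
import numpy as np

def fit_voxelwise_linear_map(source, target):
    """Fit y ~ a*x + b independently per voxel from paired scans.

    source, target: arrays of shape [num_subjects, num_voxels] holding a
    scalar measure (e.g. FA) of the same subjects on two scanners.
    Returns (a, b), each of shape [num_voxels]. Names are hypothetical.
    """
    x_mean = source.mean(axis=0)
    y_mean = target.mean(axis=0)
    # Closed-form least-squares slope and intercept per voxel
    cov = ((source - x_mean) * (target - y_mean)).mean(axis=0)
    var = source.var(axis=0) + 1e-12  # guard against constant voxels
    a = cov / var
    b = y_mean - a * x_mean
    return a, b
```

Applying `a * x + b` to new source-scanner data then maps it toward the target scanner's measurement scale; the methods compared in the paper are considerably richer than this voxel-wise baseline.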
The predictive performance of supervised learning algorithms depends on the quality of labels. In a typical label collection process, multiple annotators provide subjective noisy estimates of the "truth" under the influence of their varying skill levels and biases. Blindly treating these noisy labels as the ground truth limits the accuracy of learning algorithms in the presence of strong disagreement. This problem is critical for applications in domains such as medical imaging, where both the annotation cost and inter-observer variability are high. In this work, we present a method for simultaneously learning the individual annotator model and the underlying true label distribution, using only noisy observations. Each annotator is modeled by a confusion matrix that is jointly estimated along with the classifier predictions. We propose to add a regularization term to the loss function that encourages convergence to the true annotator confusion matrix. We provide a theoretical argument as to why the regularization is essential to our approach, for both the single-annotator and multiple-annotator cases. Despite the simplicity of the idea, experiments on image classification tasks with both simulated and real labels show that our method either outperforms or performs on par with state-of-the-art methods, and is capable of estimating the skills of annotators even with a single label available per image.

F.3. Cross-entropy loss with sparse and noisy labels

```python
import tensorflow as tf


def cross_entropy_over_annotators(labels, logits, confusion_matrices):
    """Cross entropy between noisy labels from multiple annotators and their
    confusion matrix models.

    Args:
        labels: One-hot representation of labels from multiple annotators.
            tf.Tensor of size [batch, num_annotators, num_classes]. Missing
            labels are assumed to be represented as zero vectors.
        logits: Logits from the classifier. tf.Tensor of size [batch, num_classes].
        confusion_matrices: Confusion matrices of annotators. tf.Tensor of size
            [num_annotators, num_classes, num_classes]. The (i, j)-th element of
            the confusion matrix for annotator a denotes the probability
            P(label_annotator_a = j | label_true = i).

    Returns:
        The average cross-entropy across annotators and image examples.
    """
    # Treat one-hot labels as probability vectors
    labels = tf.cast(labels, dtype=tf.float32)

    # Sequentially compute the loss for each annotator. The helper
    # sparse_confusion_matrix_softmax_cross_entropy is defined elsewhere
    # in the appendix.
    losses_all_annotators = []
    for idx, labels_annotator in enumerate(tf.unstack(labels, axis=1)):
        loss = sparse_confusion_matrix_softmax_cross_entropy(
            labels=labels_annotator,
            logits=logits,
            confusion_matrix=confusion_matrices[idx, :, :],
        )
        losses_all_annotators.append(loss)

    # Stack them into a tensor of size (batch, num_annotators)
    losses_all_annotators = tf.stack(losses_all_annotators, axis=1)

    # Filter out annotator networks with no labels. This allows you to train
    # annotator networks only when the labels are available.
    # NOTE: the listing is truncated at this point in the source; the masking
    # and averaging below are our reconstruction of the intended computation.
    has_labels = tf.reduce_sum(labels, axis=2)  # 1.0 where a label is present
    masked_losses = losses_all_annotators * has_labels
    return tf.reduce_mean(
        tf.reduce_sum(masked_losses, axis=1)
        / tf.maximum(tf.reduce_sum(has_labels, axis=1), 1.0)
    )
```
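The listing calls a helper, `sparse_confusion_matrix_softmax_cross_entropy`, whose definition is not reproduced here. A minimal NumPy sketch of the computation it presumably performs, based on the confusion-matrix model described in the abstract (the function name aside, everything below is our assumption): project the classifier's class probabilities through the annotator's confusion matrix to obtain the predicted distribution over noisy labels, then take the cross-entropy against the annotator's one-hot label.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def confusion_matrix_cross_entropy(labels, logits, confusion_matrix):
    """Hypothetical sketch of the per-annotator loss in listing F.3.

    labels: [batch, num_classes] one-hot noisy labels (zero vector = missing).
    logits: [batch, num_classes] classifier logits.
    confusion_matrix: [num_classes, num_classes], where row i holds
        P(annotator label | true label = i).
    Returns the per-example cross-entropy, shape [batch].
    """
    probs_true = softmax(logits)                 # p(true class | image)
    probs_annot = probs_true @ confusion_matrix  # p(annotator label | image)
    # A zero label vector yields zero loss, which the caller masks out anyway
    return -np.sum(labels * np.log(probs_annot + 1e-12), axis=-1)
```

With an identity confusion matrix this reduces to ordinary softmax cross-entropy; a non-identity matrix lets the model explain away systematic annotator errors without corrupting the classifier.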
In this work, we investigate the value of uncertainty modelling in 3D super-resolution with convolutional neural networks (CNNs). Deep learning has shown success in a plethora of medical image transformation problems, such as super-resolution (SR) and image synthesis. However, the highly ill-posed nature of such problems results in inevitable ambiguity in the learning of networks. We propose to account for intrinsic uncertainty through a per-patch heteroscedastic noise model, and for parameter uncertainty through approximate Bayesian inference in the form of variational dropout. We show that the combined benefits of both lead to state-of-the-art performance in SR of diffusion MR brain images in terms of errors compared to ground truth. We further show that the reduced error scores produce tangible benefits in downstream tractography. In addition, the probabilistic nature of the methods naturally confers a mechanism to quantify uncertainty over the super-resolved output. We demonstrate, through experiments on both healthy and pathological brains, the potential utility of such an uncertainty measure in the risk assessment of super-resolved images for subsequent clinical use.
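A heteroscedastic noise model of the kind mentioned above is typically trained with a Gaussian negative log-likelihood in which the network predicts both a mean and a per-output variance. The sketch below is a generic illustration of that loss, not the paper's implementation; the names `mu` and `log_sigma2` are ours.

```python
import numpy as np

def heteroscedastic_loss(y_true, mu, log_sigma2):
    """Gaussian negative log-likelihood (up to an additive constant).

    y_true: observed high-resolution intensities.
    mu, log_sigma2: network outputs -- predicted mean and log-variance.
    Predicting log-variance keeps the variance positive and the loss stable.
    """
    inv_var = np.exp(-log_sigma2)
    return np.mean(0.5 * inv_var * (y_true - mu) ** 2 + 0.5 * log_sigma2)
```

The `0.5 * log_sigma2` term penalises the network for inflating its predicted variance, so high variance is only worthwhile on patches where the residual is genuinely large; those predicted variances then double as the intrinsic-uncertainty map over the super-resolved output.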