We investigate, through an extensive quality-evaluation approach, the performance of and potential side effects introduced in Computed Tomography (CT) images by Deep Learning (DL) processing. Method: We selected two relevant processing steps, denoising and segmentation, implemented by two Convolutional Neural Network (CNN) models based on autoencoder architectures (encoder-decoder and UNet) and trained for the two tasks. To limit the number of uncontrolled variables, we designed a phantom containing cylindrical inserts of different sizes, filled with iodinated contrast media. A large CT image dataset was collected at different acquisition settings and with two reconstruction algorithms. We characterized the behavior of the CNNs using metrics from signal detection theory, radiological and conventional image quality parameters, and, finally, unconventional radiomic feature analysis. Results: The UNet, owing to its deeper and more complex architecture, outperformed the shallower encoder-decoder in terms of conventional quality parameters and better preserved spatial resolution. We also studied, through radiomic analysis, how the CNNs modify the noise texture, identifying features that are sensitive or insensitive to the denoising step. Conclusions: The proposed evaluation approach proved effective for accurately analyzing and quantifying the differences in CNN behavior, in particular the alterations introduced in the processed images. Our results suggest that a deeper, more complex network achieving good performance is not necessarily a better network, because it can modify texture features in unwanted ways.
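One of the conventional image quality parameters the abstract refers to, contrast-to-noise ratio (CNR), can be illustrated on a synthetic axial slice of a cylindrical insert. This is a minimal sketch: the geometry, HU-like values, and noise level below are illustrative assumptions, not those of the paper's phantom.

```python
import numpy as np

rng = np.random.default_rng(42)
size, radius = 128, 12

# Boolean mask for a circular cross-section of a cylindrical insert.
yy, xx = np.mgrid[:size, :size]
insert = (xx - 64) ** 2 + (yy - 64) ** 2 <= radius ** 2

# Synthetic slice: insert at 120, background at 40 (HU-like), plus Gaussian noise.
img = np.where(insert, 120.0, 40.0) + rng.normal(0.0, 10.0, (size, size))

# CNR: signal difference between insert and background, in units of background noise.
background = ~insert
cnr = (img[insert].mean() - img[background].mean()) / img[background].std()
```

Comparing such a CNR value on the same slice before and after CNN denoising is one way to quantify what the processing changes.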
We are interested in learning data-driven representations that generalize well, even when trained on inherently biased data. In particular, we consider the case where some attributes (biases) of the data, if learned by the model, can severely compromise its generalization properties. We tackle this problem through the lens of information theory, leveraging recent findings on the differentiable estimation of mutual information. We propose a novel end-to-end optimization strategy that simultaneously estimates and minimizes the mutual information between the learned representation and the data attributes. When applied to standard benchmarks, our model shows comparable or superior classification performance with respect to state-of-the-art approaches. Moreover, our method is general enough to be applicable to the problem of "algorithmic fairness", with competitive results.
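Differentiable mutual-information estimators of the kind the abstract mentions are commonly built on the Donsker-Varadhan lower bound (as in MINE). Below is a minimal NumPy sketch of that bound with a fixed toy critic standing in for the learned statistics network; all variable names, constants, and the critic itself are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)                 # data attribute (e.g. a bias variable)
z = x + 0.5 * rng.normal(size=n)       # "representation" that leaks the attribute
z_marg = rng.permutation(z)            # shuffling breaks the dependence -> marginal samples

def critic(a, b):
    # Fixed toy critic T(a, b); in practice T is a trained neural network.
    return 0.25 * a * b

def dv_bound(t_joint, t_marg):
    # Donsker-Varadhan bound: I(X; Z) >= E_joint[T] - log E_marg[exp(T)]
    return t_joint.mean() - np.log(np.mean(np.exp(t_marg)))

mi_dep = dv_bound(critic(x, z), critic(x, z_marg))                        # clearly positive
mi_ind = dv_bound(critic(x, z_marg), critic(x, rng.permutation(z_marg)))  # near zero
```

In a setup like the paper's, the bound would be maximized over the critic to estimate the mutual information between representation and attribute, while the encoder is trained to minimize that same estimate; the sketch only shows why shuffling one variable supplies the marginal term.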
We investigate and characterize the inherent resilience of conditional Generative Adversarial Networks (cGANs) against noise in their conditioning labels, and exploit this fact in the context of Unsupervised Domain Adaptation (UDA). In UDA, a classifier trained on the labelled source set can be used to infer pseudo-labels on the unlabelled target set. However, this results in a significant number of misclassified examples (due to the well-known domain shift issue), which can be interpreted as noise injected into the ground-truth labels of the target set. We show that cGANs are, to some extent, robust against such "shift noise": cGANs trained with noisy pseudo-labels are able to filter out this noise and generate cleaner target samples. We exploit this finding in an iterative procedure in which a generative model and a classifier are jointly trained: in turn, the generator allows sampling cleaner data from the target distribution, and the classifier assigns better labels to target samples, progressively refining the target pseudo-labels. Results on common benchmarks show that our method performs comparably to or better than the state of the art in unsupervised domain adaptation.
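The iterative refinement loop can be caricatured without the cGAN: a classifier trained on a shifted source pseudo-labels the target, is refit on its own pseudo-labels, and repeats. The stand-in below (synthetic 1-D data, a class-mean classifier, no generator) is only a hedged sketch of that feedback loop, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Source: class 0 ~ N(-1, 0.5), class 1 ~ N(+1, 0.5); target is shifted by +0.8.
src0 = rng.normal(-1.0, 0.5, n)
src1 = rng.normal(+1.0, 0.5, n)
tgt = np.concatenate([rng.normal(-0.2, 0.5, n), rng.normal(1.8, 0.5, n)])
y_true = np.concatenate([np.zeros(n), np.ones(n)])  # held out, used only to measure accuracy

def predict(x, mu):
    # Nearest-class-mean classifier.
    return (np.abs(x - mu[1]) < np.abs(x - mu[0])).astype(int)

mu = np.array([src0.mean(), src1.mean()])  # classifier trained on the source only
acc = []
for _ in range(5):
    pseudo = predict(tgt, mu)
    acc.append((pseudo == y_true).mean())
    # Refit on pseudo-labelled target data (stand-in for the joint cGAN/classifier step).
    mu = np.array([tgt[pseudo == 0].mean(), tgt[pseudo == 1].mean()])
```

On this toy shift, the pseudo-label accuracy rises across iterations as the class means migrate toward the target distribution, mirroring the progressive refinement described above.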