Since the rise of deep learning (DL) in the mid-2010s, cardiac magnetic resonance (CMR) image segmentation has reached state-of-the-art performance. Although these methods match inter-observer variability on several accuracy measures, visual inspection still reveals errors in most segmentation results, indicating a lack of reliability and robustness in DL segmentation models, which can be critical if a model is to be deployed in clinical practice. In this work, we aim to draw attention to reliability and robustness, two unmet needs of cardiac image segmentation methods that are hampering their translation into practice. To this end, we first study the evolution of CMR segmentation accuracy, illustrate the improvements brought by DL algorithms, and highlight symptoms of performance stagnation. We then provide formal definitions of reliability and robustness. Based on these definitions, we identify the factors that limit the reliability and robustness of state-of-the-art DL CMR segmentation techniques. Finally, we give an overview of current work on improving the reliability and robustness of CMR segmentation, which we group into two families of methods: quality control methods and model improvement techniques. The first family comprises simpler strategies that only flag situations in which a model may exhibit poor reliability or robustness. The second directly tackles the problem by improving different aspects of the CMR segmentation model development process. We hope to direct more researchers towards these emerging trends in the development of reliable and robust CMR segmentation frameworks, which can guarantee the safe use of DL in clinical routines and studies.
Quantiser design for a nonlinear filter is considered in the context of a decentralised estimation system with communication constraints. The filter is based on quantised outputs of a discrete-time, two-state Hidden Markov Model (HMM) as measured by two remote sensor nodes. The optimal quantisation scheme is obtained by maximising the mutual information between the quantised measurements and the hidden Markov states. Filter performance is measured in terms of the probability of estimation error and is investigated through simulation for HMMs with both independent and correlated white Gaussian measurement noise. The performance of the filter based on continuous, unquantised signals provides a benchmark for the filter based on quantised measurements; a method for computing the probability of estimation error directly for the continuous filter is therefore also presented.
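The core idea above can be illustrated with a toy sketch: choose a one-bit quantiser threshold by maximising the mutual information between the quantised measurement and the hidden state, then run an HMM filter on the quantised outputs and estimate the probability of error by simulation. All parameters below (a symmetric two-state chain, state-dependent Gaussian means, a single sensor, a one-bit quantiser) are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical HMM parameters (not from the paper).
A = np.array([[0.95, 0.05], [0.05, 0.95]])   # state transition matrix
mu = np.array([-1.0, 1.0])                   # measurement mean per state
sigma = 1.0                                  # white Gaussian noise std
pi = np.array([0.5, 0.5])                    # stationary distribution

def mutual_information(tau):
    """I(Q; X) for the one-bit quantiser q = 1{y > tau}."""
    p1_given_x = 1.0 - norm.cdf((tau - mu) / sigma)   # P(q=1 | x)
    p1 = pi @ p1_given_x                              # P(q=1)
    mi = 0.0
    for x in range(2):
        for pq_x, pq in ((p1_given_x[x], p1), (1 - p1_given_x[x], 1 - p1)):
            if pq_x > 0:
                mi += pi[x] * pq_x * np.log2(pq_x / pq)
    return mi

# Grid search for the MI-maximising threshold.
taus = np.linspace(-3, 3, 601)
tau_star = taus[np.argmax([mutual_information(t) for t in taus])]

# Simulate the HMM and quantise the noisy measurements.
T = 20000
x = np.zeros(T, dtype=int)
for t in range(1, T):
    x[t] = rng.choice(2, p=A[x[t - 1]])
y = mu[x] + sigma * rng.standard_normal(T)
q = (y > tau_star).astype(int)

# HMM filter on the quantised observations: predict, update, normalise.
p1_given_x = 1.0 - norm.cdf((tau_star - mu) / sigma)
lik = np.where(q[:, None] == 1, p1_given_x, 1 - p1_given_x)  # P(q_t | x_t)
belief = pi.copy()
x_hat = np.zeros(T, dtype=int)
for t in range(T):
    belief = lik[t] * (A.T @ belief)
    belief /= belief.sum()
    x_hat[t] = belief.argmax()

print(f"tau* = {tau_star:.2f}, P(error) = {np.mean(x_hat != x):.3f}")
```

For this symmetric setup the MI-optimal threshold sits midway between the two state means, and the empirical error probability of the quantised-measurement filter can be compared against a filter run on the raw Gaussian measurements as a benchmark.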
We introduce a new learning framework that builds on recent progress in quality control (QC) of image segmentation to address the poor generalisation of deep learning models on Out-of-Distribution (OoD) data. Under the assumption that the label space is consistent across data drawn from different distributions, we use the information provided by a QC module as a proxy for the segmentation model's performance on unseen data. If the estimated performance is poor, the QC information is used as feedback to refine the training of the segmentation model, thereby adapting it to the OoD data. Our method was evaluated in the Multi-Disease, Multi-View & Multi-Center Right Ventricular Segmentation in Cardiac MRI Challenge, achieving an average Dice score of 0.905 and an average Hausdorff distance of 10.472.
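The feedback loop described above can be sketched on synthetic data. This is a deliberately minimal stand-in, not the challenge method: the "segmentation model" is a single intensity threshold, and the learned QC module is replaced by a hypothetical label-free area prior, used only to show how a QC signal can drive adaptation to OoD data.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_image(shift=0.0):
    """Synthetic 2D image: bright disc (foreground) on a dark background."""
    yy, xx = np.mgrid[:64, :64]
    mask = ((yy - 32) ** 2 + (xx - 32) ** 2) < 15 ** 2
    img = np.where(mask, 0.8, 0.2) + 0.05 * rng.standard_normal((64, 64)) + shift
    return img, mask

def segment(img, thr):
    return img > thr

def dice(a, b):
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Hypothetical QC proxy: quality is high when the predicted foreground
# fraction matches what was observed in-distribution (the paper uses a
# learned QC module; this area prior only illustrates the feedback loop).
def qc_score(pred, expected_frac):
    return -abs(pred.mean() - expected_frac)

# In-distribution: a fixed threshold works well.
img_id, gt_id = make_image(shift=0.0)
thr = 0.5
expected_frac = segment(img_id, thr).mean()

# OoD data: a global intensity shift degrades the fixed model.
img_ood, gt_ood = make_image(shift=0.4)
dice_before = dice(segment(img_ood, thr), gt_ood)

# QC feedback: refine the model parameter to maximise the QC score on OoD data
# (no OoD labels are used; only the QC proxy guides the adaptation).
cands = np.linspace(0.0, 1.5, 151)
thr_new = cands[np.argmax([qc_score(segment(img_ood, t), expected_frac)
                           for t in cands])]
dice_after = dice(segment(img_ood, thr_new), gt_ood)

print(f"Dice before adaptation: {dice_before:.3f}, after: {dice_after:.3f}")
```

The key assumption mirrored here is the one stated in the abstract: the label space (here, the expected foreground geometry) is consistent across distributions, so a QC signal computed without OoD labels can stand in for true performance.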