Automated brain tumor segmentation is challenging given the variability of tumors in size, shape, and image intensity. This paper focuses on fusing multimodal information from different Magnetic Resonance (MR) imaging sequences. We argue that exploiting the complementarity of all modalities is important to better segment tumors and later determine their aggressiveness. However, simply concatenating the multimodal data as channels of a single image generates a large volume of redundant information. We therefore propose a supervoxel-based approach that groups pixels sharing perceptually similar information across the different modalities into a single coherent oversegmentation. To further reduce redundancy while keeping meaningful borders, we add a variance constraint and a supervoxel merging step. Our experimental validation shows that the proposed merging strategy produces high-quality clustering results useful for brain tumor segmentation: our method reaches an ASA score of 0.712, against 0.316 for the monomodal approach, indicating that the supervoxels adhere well to tumor boundaries. Our approach also improves the Global Score (GS) by 11.5%, showing that the clusters effectively group pixels similar in intensity and texture.
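As a minimal sketch of the kind of variance-constrained merging described above, one could greedily fuse adjacent supervoxels whenever the merged region's intensity variance stays below a cap in every modality. The greedy pairwise strategy, the per-channel variance test, and all names (`merge_supervoxels`, `var_threshold`) are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def merge_supervoxels(features, labels, adjacency, var_threshold):
    """Greedily merge adjacent supervoxels when the merged region's
    per-channel intensity variance stays below `var_threshold`.

    features : (N, C) voxel feature vectors, one channel per MR modality
    labels   : (N,) int array assigning each voxel to a supervoxel
    adjacency: set of (a, b) supervoxel label pairs sharing a border
    """
    labels = labels.copy()
    merged = True
    while merged:
        merged = False
        for a, b in sorted(adjacency):
            mask_a, mask_b = labels == a, labels == b
            if not mask_a.any() or not mask_b.any():
                continue  # one of the two was already absorbed
            union = features[mask_a | mask_b]
            # merge only if every modality's variance stays below the cap,
            # so meaningful (high-contrast) borders are preserved
            if union.var(axis=0).max() <= var_threshold:
                labels[mask_b] = a
                merged = True
    return labels
```

On a toy volume where two neighboring supervoxels share similar intensities across modalities while a third differs strongly, only the first pair is fused and the contrasting border survives.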
Early breast cancer diagnosis and lesion detection have been made possible through medical imaging modalities such as mammography. However, interpreting mammograms remains challenging for radiologists. In this paper, we tackle whole-mammogram classification and local abnormality detection with supervised and weakly supervised approaches, respectively. To address the multi-scale nature of the problem, we first extract superpixels at different scales. We then introduce graph connections between superpixels, within and across scales, to better model the variability in lesion size and shape. On top of this multi-scale graph, we design a Graph Neural Network (GNN) trained in a supervised manner to predict a binary class for each input image. The GNN summarizes information from different regions, learning features that depend not only on local textures but also on the superpixels' geometric distribution and topological relations. Finally, we design the last layer of the GNN as a global pooling operation, enabling weakly supervised training of the abnormality detection task following the principles of Multiple Instance Learning (MIL). The predictions of the penultimate GNN layer yield a superpixelized heatmap of abnormality probabilities, resulting in a weakly supervised abnormality detector with low annotation requirements (i.e., trained with image-wise labels only). Experiments on one private and one publicly available dataset show that our superpixel-based multi-scale GNN improves classification results over prior weakly supervised approaches.
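The MIL-style head described above can be sketched in a few lines of NumPy: one round of mean-aggregation message passing over the superpixel graph produces per-node abnormality scores (the heatmap), and a global pooling over nodes yields the image-level prediction. The single aggregation layer, the choice of max pooling, and all names (`gnn_mil_forward`, `w_msg`, `w_out`) are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def gnn_mil_forward(node_feats, adj, w_msg, w_out):
    """One mean-aggregation message-passing round over the superpixel
    graph, then per-superpixel abnormality scores and a global max pool.

    node_feats : (N, F) superpixel feature matrix
    adj        : (N, N) binary adjacency with self-loops
    w_msg      : (F, H) message-passing weights
    w_out      : (H,)  scoring weights

    Returns (node_probs, image_prob): the superpixelized heatmap and the
    image-level prediction; max pooling encodes the MIL assumption that
    an image is abnormal iff at least one of its regions is.
    """
    deg = adj.sum(axis=1, keepdims=True)
    h = np.maximum(((adj @ node_feats) / deg) @ w_msg, 0.0)  # mean aggregate + ReLU
    node_probs = 1.0 / (1.0 + np.exp(-(h @ w_out)))          # per-superpixel heatmap
    image_prob = node_probs.max()                            # global pooling (MIL)
    return node_probs, image_prob
```

Because only `image_prob` is compared against the image-wise label during training, the node-level heatmap is obtained for free at inference, which is what makes the detector weakly supervised.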