As neurodegenerative disease pathological hallmarks have been reported in both grey matter (GM) and white matter (WM) with different density distributions, automating the segmentation of GM/WM would be extremely advantageous for neuropathologic deep phenotyping. Standard segmentation methods typically involve manual annotation, where a trained researcher traces the delineation of GM/WM in ultra-high-resolution Whole Slide Images (WSIs). This process can be time-consuming and subjective, preventing scalable analysis of pathology images. This paper proposes an automated segmentation pipeline (BrainSec) that combines a Convolutional Neural Network (CNN) module for segmenting GM/WM regions with a post-processing module that removes tissue artifacts and residues. The final output is a set of XML annotations that can be visualized in Aperio ImageScope. First, we investigate two baseline models for medical image segmentation: FCN and U-Net. We then propose a patch-based approach, BrainSec, to classify GM/WM/background regions. We demonstrate that BrainSec is robust and performs reliably by testing it on over 180 WSIs spanning numerous unique cases and distinct neuroanatomic brain regions. We also apply gradient-weighted class activation mapping (Grad-CAM) to interpret the segmentation masks and provide relevant explanations and insights. In addition, we integrate BrainSec with an existing Amyloid-β pathology classification model into a unified framework (without incurring significant computational complexity) to identify pathologies, visualize their distributions, and quantify each type of pathology in the segmented GM and WM regions, respectively.
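The patch-based idea the abstract describes can be illustrated with a minimal sketch: tile the WSI into fixed-size patches, classify each patch as background/GM/WM, and assemble the predictions into a coarse label map. This is not the BrainSec implementation; the `classify` callable, the patch size, and the label encoding (0 = background, 1 = GM, 2 = WM) are illustrative assumptions.

```python
import numpy as np

def segment_by_patches(wsi, classify, patch=256):
    """Coarse patch-based segmentation sketch.

    wsi      : (H, W, 3) image array (a WSI region in practice).
    classify : callable mapping a (patch, patch, 3) tile to a class id
               (0 = background, 1 = GM, 2 = WM) -- e.g. a trained CNN.
    Returns a label map with one label per non-overlapping patch.
    """
    h, w = wsi.shape[:2]
    mask = np.zeros((h // patch, w // patch), dtype=np.uint8)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            tile = wsi[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            mask[i, j] = classify(tile)
    return mask
```

In a real pipeline the label map would then be upsampled to pixel resolution and cleaned by a post-processing step before the XML annotations are written.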
Neurodegenerative disease pathologies have been reported in both grey matter (GM) and white matter (WM) with different density distributions; automating the separation of GM/WM would be extremely advantageous for neuropathologic deep phenotyping. Standard segmentation methods typically involve manual annotation, where a trained researcher traces the delineation of GM/WM in ultra-high-resolution Whole Slide Images (WSIs). This process can be time-consuming and subjective, preventing the analysis of large numbers of WSIs in a scalable way. In this paper, we propose an automated segmentation pipeline combining a Convolutional Neural Network (CNN) module for segmenting GM/WM regions with a post-processing module that removes tissue artifacts and residues and generates XML annotations that can be visualized in Aperio ImageScope. First, we investigate two baseline models for medical image segmentation: FCN and U-Net. We then propose a new patch-based approach, ResNet-Patch, to classify GM/WM/background regions. In addition, we integrate a Neural Conditional Random Field (NCRF) module, ResNet-NCRF, to model and incorporate the spatial correlations among neighboring patches. Although their mechanisms differ greatly, both U-Net and ResNet-Patch/ResNet-NCRF achieve an Intersection over Union (IoU) of more than 90% in GM and more than 80% in WM, while ResNet-Patch exceeds U-Net's IoU by 1% with lower variance across WSIs. ResNet-NCRF further improves the IoU by 3% for WM compared to ResNet-Patch before post-processing. We also apply gradient-weighted class activation mapping (Grad-CAM) to interpret the segmentation masks and provide clinical explanations and insights.
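The Intersection over Union metric used to compare the models above has a simple per-class definition: the number of pixels labeled with a class in both the prediction and the ground truth, divided by the number of pixels labeled with that class in either. A minimal sketch (the function name and label encoding are illustrative, not from the paper):

```python
import numpy as np

def iou(pred, target, cls):
    """Per-class Intersection over Union between two integer label masks."""
    p, t = (pred == cls), (target == cls)
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return inter / union if union else float("nan")
```

Reporting IoU per class (GM and WM separately, as in the abstract) avoids the background class, which usually dominates the slide area, inflating a single aggregate score.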
Fig. 1: Left (our pipeline): given a real scene (I), we fuse and segment the camera observations to obtain object level point clouds (II), which we use to construct a digital replica of the real scene (III). The replica is used to generate grasp labels (IV) to obtain trained grasping networks (V). The grasp poses predicted by the trained networks are evaluated in the real scene (VI). Right (the Real2Sim step): we can automatically place the reconstructed meshes in the digital replica without having to explicitly perform pose estimation. Given an object-level point cloud, we use a trained ConvONet to reconstruct the mesh (I), and then apply the inverse normalization operation (II) to obtain the mesh represented in the world frame (III).
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.