Existing studies on disease diagnostic models focus either on learning diagnostic models for performance improvement or on visually explaining an already-trained diagnostic model. We propose a novel learn-explain-reinforce (LEAR) framework that unifies diagnostic model learning, visual explanation generation (explanation unit), and trained diagnostic model reinforcement (reinforcement unit) guided by the visual explanation. For the visual explanation, we generate a counterfactual map that transforms an input sample so that it is classified as an intended target label. For example, a counterfactual map can localize hypothetical abnormalities within a normal brain image that may cause it to be diagnosed with Alzheimer's disease (AD). We believe that the generated counterfactual maps represent data-driven and model-induced knowledge about a target task, i.e., AD diagnosis using structural MRI, which can be a vital source of information to reinforce the generalization of the trained diagnostic model. To this end, we devise an attention-based feature refinement module guided by the counterfactual maps. The explanation and reinforcement units are reciprocal and can operate iteratively. Our proposed approach was validated via qualitative and quantitative analysis on the ADNI dataset. Its comprehensibility and fidelity were demonstrated through ablation studies and comparisons with existing methods.
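The core idea above is that a generated counterfactual map, added to an input, flips the diagnostic model's prediction to a chosen target label. The following is a minimal toy sketch of that idea, not the paper's actual generator: `toy_classifier` and `counterfactual_map` are hypothetical names, the "model" is a three-feature linear scorer rather than a CNN on structural MRI, and the map is found by nudging the input along the weight direction instead of training a generative network.

```python
import numpy as np

W = np.array([1.0, -2.0, 0.5])  # hypothetical weights of a toy "diagnostic model"

def toy_classifier(x):
    """Toy diagnostic model: score > 0 -> label 1 ("AD"), else 0 ("normal")."""
    return int(x @ W > 0)

def counterfactual_map(x, target, step=0.1, max_iter=100):
    """Sketch of a counterfactual map: the smallest additive change (here,
    found by nudging x along +/- W) that makes the model output `target`."""
    direction = W / np.linalg.norm(W)
    sign = 1.0 if target == 1 else -1.0
    delta = np.zeros_like(x)
    for _ in range(max_iter):
        if toy_classifier(x + delta) == target:
            break
        delta += sign * step * direction
    return delta  # nonzero entries localize "what must change" in x

x_normal = np.array([0.2, 0.5, 0.1])        # classified as 0 ("normal")
m = counterfactual_map(x_normal, target=1)  # map toward the "AD" label
assert toy_classifier(x_normal + m) == 1    # transformed sample flips label
```

In the actual framework the map is produced by a learned generator over 3D MRI volumes, and its spatial pattern (rather than a weight direction) is what the reinforcement unit's attention module consumes.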
Deep learning for Alzheimer's disease (AD) prediction has enabled timely intervention against disease progression, yet it still demands careful interpretability. Recently, counterfactual reasoning has increasingly been exploited in medical research to provide refined visual explanatory maps. However, such visual explanatory maps alone are not self-sufficient unless we can intuitively demonstrate their validity via quantitative features. To this end, we first synthesize a counterfactual-labeled structural MRI using our proposed framework. We then transform it into a gray matter density map to precisely measure its volumetric changes over parcellated regions of interest (ROIs). Furthermore, to boost the effectiveness of the selected ROIs while promoting interpretability and achieving predictive performance comparable to deep learning methods, we devise a novel lightweight counterfactual-guided attentive feature representation coupled with a linear classifier. Our framework thus provides an AD-relatedness index for each ROI, offering an intuitive understanding of brain status for an individual subject and across subjects with respect to AD.
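The pipeline above can be sketched at the ROI level: compare per-ROI gray-matter volumes before and after the counterfactual transformation, score attention-weighted ROI features with a linear classifier, and derive a per-ROI AD-relatedness index. Everything below is an illustrative assumption: the ROI names, volumes, attention values, and weights are made up, and the real framework computes volumes from parcellated gray matter density maps rather than hand-entered numbers.

```python
import numpy as np

roi_names = ["hippocampus", "entorhinal", "precuneus", "thalamus"]

# Hypothetical gray-matter volume per ROI (arbitrary units):
# original scan vs. its counterfactual ("AD-labeled") transformation.
vol_original       = np.array([3.9, 1.9, 9.8, 6.5])
vol_counterfactual = np.array([3.2, 1.5, 9.4, 6.4])

# Per-ROI volumetric change induced by the counterfactual map.
delta = vol_counterfactual - vol_original

# Lightweight attentive linear head: attention re-weights ROI features,
# then a linear classifier scores them (higher score = more AD-like).
attention = np.array([0.45, 0.30, 0.15, 0.10])  # hypothetical, sums to 1
w_linear  = np.array([-1.2, -1.0, -0.4, -0.1])  # hypothetical AD weights

def ad_score(roi_volumes):
    """Linear classifier on attention-weighted ROI volumes."""
    return float((attention * roi_volumes) @ w_linear)

# Per-ROI "AD-relatedness index": how much each ROI's counterfactual
# change contributes, combining attention and classifier weight.
ad_index = attention * np.abs(delta) * np.abs(w_linear)
ranked = [roi_names[i] for i in np.argsort(-ad_index)]
```

With these toy numbers the index ranks the hippocampus first, matching the intuition that the counterfactual map concentrates atrophy in AD-relevant regions; the attentive-plus-linear design keeps every contribution inspectable, which is the interpretability argument the abstract makes.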