Purpose
To develop a neural ordinary differential equation (ODE) model for visualizing deep neural network behavior during multi-parametric MRI-based glioma segmentation, as a method to enhance deep learning explainability.

Methods
Hypothesizing that deep feature extraction can be modeled as a spatiotemporally continuous process, we implemented a novel deep learning model, Neural ODE, in which deep feature extraction is governed by an ODE parameterized by a neural network. Solving the ODE makes it possible to visualize the dynamics of (1) the MR images as they interact with the deep neural network and (2) the formation of the segmentation. An accumulative contribution curve (ACC) was designed to quantitatively evaluate how much each MR image is utilized by the deep neural network toward the final segmentation result.

The proposed Neural ODE model was demonstrated on 369 glioma patients imaged with a four-modality multi-parametric MRI protocol: T1, contrast-enhanced T1 (T1-Ce), T2, and FLAIR. Three Neural ODE models were trained to segment the enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The key MRI modalities with significant utilization by the deep neural networks were identified by ACC analysis. Segmentation results obtained using only the key MRI modalities were compared to those using all four modalities in terms of Dice coefficient, accuracy, sensitivity, and specificity.

Results
All Neural ODE models successfully illustrated the image dynamics as expected. ACC analysis identified T1-Ce as the only key modality for ET and TC segmentation, while both FLAIR and T2 were key modalities for WT segmentation. Compared to the U-Net results using all four MRI modalities, the Dice coefficients of ET (0.784→0.775), TC (0.760→0.758), and WT (0.841→0.837) using only the key modalities showed minimal differences that were not statistically significant.
Accuracy, sensitivity, and specificity followed the same pattern.

Conclusion
The Neural ODE model offers a new tool for optimizing deep learning model inputs with enhanced explainability. The presented methodology can be generalized to other medical-image-related deep learning applications.
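As a minimal sketch of the core idea in Methods, feature extraction can be modeled as a continuous process dh/dt = f(h, t), with f parameterized by a neural network; integrating the ODE and recording the full trajectory is what makes intermediate feature states visualizable. The toy two-layer map f, the feature dimension, and the time grid below are illustrative assumptions, not the paper's segmentation architecture, and a fixed-step Euler solver stands in for whatever ODE solver the authors used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the neural network f(h, t) that parameterizes
# dh/dt; the paper's actual f is a segmentation network over image features.
W1 = rng.normal(scale=0.1, size=(8, 8))
W2 = rng.normal(scale=0.1, size=(8, 8))

def f(h, t):
    """Right-hand side of the neural ODE: dh/dt = f(h, t)."""
    return W2 @ np.tanh(W1 @ h)

def odeint_euler(f, h0, t_grid):
    """Fixed-step Euler integration that keeps every intermediate state,
    so the feature dynamics can be visualized at any time point."""
    traj = [h0]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h = traj[-1]
        traj.append(h + (t1 - t0) * f(h, t0))
    return np.stack(traj)

h0 = rng.normal(size=8)              # "input image" features at t = 0
t_grid = np.linspace(0.0, 1.0, 21)   # 21 time points in [0, 1]
traj = odeint_euler(f, h0, t_grid)
print(traj.shape)  # → (21, 8): one feature state per time point
```

Plotting the rows of `traj` against `t_grid` would give the kind of continuous "image dynamics" visualization the abstract describes, with t = 0 corresponding to the input and t = 1 to the final feature state.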
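The abstract does not give the exact ACC definition, so the sketch below assumes one plausible form: a nonnegative per-time-step contribution score for each input modality (random placeholders here; the paper derives utilization from the ODE dynamics), accumulated along the time axis and normalized so that each curve's endpoint is that modality's share of the total contribution. The modality with the largest endpoint would then be flagged as a "key" modality.

```python
import numpy as np

rng = np.random.default_rng(1)
modalities = ["T1", "T1-Ce", "T2", "FLAIR"]

# Placeholder per-time-step contribution scores, shape (modality, time step).
scores = rng.random(size=(4, 50))

acc = np.cumsum(scores, axis=1)   # accumulate contributions over time
acc = acc / acc[:, -1].sum()      # normalize: endpoints sum to 1 across modalities

# Each modality's total contribution share (the ACC endpoint).
total_share = dict(zip(modalities, acc[:, -1]))
ranked = sorted(total_share, key=total_share.get, reverse=True)
print(ranked)
```

With contribution scores taken from a trained model rather than random numbers, `ranked[0]` would identify the most-utilized modality (T1-Ce for ET/TC in the paper's results), and the shapes of the curves in `acc` would show when during feature extraction each modality is consumed.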