Group equivariant convolutional neural networks (G-CNNs) have been successfully applied in geometric deep learning. Typically, G-CNNs have the advantage over CNNs that they do not waste network capacity on training symmetries that should have been hard-coded in the network. The recently introduced framework of PDE-based G-CNNs (PDE-G-CNNs) generalizes G-CNNs. PDE-G-CNNs have the core advantages that they simultaneously (1) reduce network complexity, (2) increase classification performance, and (3) provide geometric interpretability. Their implementations primarily consist of linear and morphological convolutions with kernels. In this paper we show that the previously suggested approximative morphological kernels do not always approximate the exact kernels accurately. More specifically, depending on the spatial anisotropy of the Riemannian metric, we argue that one must resort to sub-Riemannian approximations. We solve this problem by providing a new approximative kernel that works regardless of the anisotropy. We provide new theorems with better error estimates of the approximative kernels, and prove that they all carry the same reflectional symmetries as the exact ones. We test the effectiveness of multiple approximative kernels within the PDE-G-CNN framework on two datasets, and observe an improvement with the new approximative kernels. We report that the PDE-G-CNNs again allow for a considerable reduction of network complexity while having comparable or better performance than G-CNNs and CNNs on the two datasets. Moreover, PDE-G-CNNs have the advantage of better geometric interpretability over G-CNNs, as the morphological kernels are related to association fields from neurogeometry.
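The abstract above mentions that PDE-G-CNN layers are implemented with linear and morphological convolutions. The morphological convolution (dilation) is the max-plus analogue of the ordinary convolution. As a minimal illustrative sketch only: the snippet below implements a 1D max-plus dilation with a user-supplied kernel; actual PDE-G-CNN implementations operate on the group SE(2) with trainable (approximative) kernels, which is not shown here.

```python
import numpy as np

def morphological_dilation(f, k):
    """Max-plus (morphological) convolution in 1D:
        (f \oplus k)(x) = max_y [ f(y) + k(x - y) ].
    Toy sketch; real PDE-G-CNN kernels live on SE(2) and are trainable."""
    n, m = len(f), len(k)
    r = m // 2  # treat the kernel as centered
    out = np.full(n, -np.inf)
    for i in range(n):
        for j in range(m):
            y = i - (j - r)  # input index such that k[j] ~ k(x - y)
            if 0 <= y < n:
                out[i] = max(out[i], f[y] + k[j])
    return out
```

With a flat (all-zero) kernel this reduces to a sliding maximum, which is the standard flat dilation from mathematical morphology.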
We introduce a data-driven version of the plus Cartan connection on the homogeneous space $\mathbb{M}_2$ of 2D positions and orientations. We formulate a theorem that describes all shortest and straight curves (parallel velocity and parallel momentum, respectively) with respect to this new data-driven connection and corresponding Riemannian manifold. Then we use these shortest curves for geodesic tracking of complex vasculature in multi-orientation image representations defined on $\mathbb{M}_{2}$. The data-driven Cartan connection characterizes the Hamiltonian flow of all geodesics. It also allows for improved adaptation to curvature and misalignment of the (lifted) vessel structure that we track via globally optimal geodesics. We compute these geodesics numerically via steepest descent on distance maps on $\mathbb{M}_2$ that we compute by a new modified anisotropic fast-marching method. Our experiments range from tracking single blood vessels with fixed endpoints to tracking complete vascular trees in retinal images. Single vessel tracking is performed in a single run in the multi-orientation image representation, where we project the resulting geodesics back onto the underlying image. The complete vascular tree tracking requires only two runs and avoids prior segmentation, placement of extra anchor points, and dynamic switching between geodesic models. Altogether we provide a geodesic tracking method using a single, flexible, transparent, data-driven geodesic model providing globally optimal curves which correctly follow highly complex vascular structures in retinal images. All experiments in this article can be reproduced via documented \emph{Mathematica} notebooks available at \cite{githubNicky}.
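The abstract above describes computing geodesics via steepest descent on distance maps obtained by fast marching. As a hedged sketch of that backtracking step only (not the paper's method): the code below traces a path by gradient descent on a plain 2D Euclidean distance map on a grid, whereas the paper uses anisotropic fast-marching distance maps on $\mathbb{M}_2 = \mathbb{R}^2 \times S^1$. The function name and parameters are illustrative assumptions.

```python
import numpy as np

def backtrack_geodesic(dist, start, step=0.5, tol=1.0, max_iter=1000):
    """Trace a curve by steepest descent on a precomputed distance map.
    Toy 2D sketch on a pixel grid; the paper's tracking runs on the
    orientation lift M_2 with a data-driven anisotropic metric."""
    gy, gx = np.gradient(dist)          # finite-difference gradient field
    p = np.array(start, dtype=float)
    path = [p.copy()]
    for _ in range(max_iter):
        i, j = int(round(p[0])), int(round(p[1]))
        if dist[i, j] < tol:            # reached the seed region
            break
        g = np.array([gy[i, j], gx[i, j]])
        norm = np.linalg.norm(g)
        if norm < 1e-12:                # flat spot: stop
            break
        p = p - step * g / norm         # descend along -grad(dist)
        p[0] = np.clip(p[0], 0, dist.shape[0] - 1)
        p[1] = np.clip(p[1], 0, dist.shape[1] - 1)
        path.append(p.copy())
    return np.array(path)
```

On a distance map with a single seed, the descent direction is everywhere (approximately) the unit vector pointing back to the seed, so the traced path is the straight minimizing curve of the toy Euclidean metric.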