2021
DOI: 10.48550/arxiv.2102.06942
Preprint

Rotation-Equivariant Deep Learning for Diffusion MRI

Philip Müller,
Vladimir Golkov,
Valentina Tomassini
et al.

Abstract: Convolutional networks are successful, but they have recently been outperformed by new neural networks that are equivariant under rotations and translations. These new networks work better because they do not struggle with learning each possible orientation of each image feature separately. So far, they have been proposed for 2D and 3D data. Here we generalize them to 6D diffusion MRI data, ensuring joint equivariance under 3D roto-translations in image space and the matching 3D rotations in q-space, as dictat…

Cited by 9 publications (14 citation statements)
References 28 publications
“…Data augmentation in the context of symmetric tasks was studied previously in [13], where a method to align input data is presented and compared to data augmentation. Closest to our work is a comparison between an equivariant model and a non-equivariant model for reduced training data sizes for an MRI application in [22]. In contrast, we systematically compare data augmentation with increased training data sizes for different tasks, datasets and several non-equivariant models with an equivariant architecture.…”
Section: Related Literature
confidence: 99%
“…for any nonlinear function α : R → R. Examples for α used in the literature include sigmoid [10], ReLU [43,27], shifted softplus [37], and swish [1]. In [35], norm nonlinearities are used for matrix-valued feature maps with ||·|| = Re(tr(·)) and α = ReLU.…”
Section: Equivariant Deep Network Architectures for Machine Learning
confidence: 99%
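The norm nonlinearity quoted above acts only on the rotation-invariant norm of each feature vector and rescales its direction, so equivariance is preserved by construction. Below is a minimal PyTorch sketch of this general pattern, v ↦ α(‖v‖) · v/‖v‖, assuming vector features stored along the last axis; the function name, shapes, and eps guard are illustrative, not taken from the cited papers:

```python
import torch

def norm_nonlinearity(v: torch.Tensor, alpha=torch.relu, eps: float = 1e-8) -> torch.Tensor:
    # Apply alpha to the rotation-invariant norm and rescale the direction:
    # v -> alpha(||v||) * v / ||v||. Only the invariant part is transformed,
    # so the layer commutes with rotations acting on the last axis.
    n = v.norm(dim=-1, keepdim=True)   # ||v|| per feature vector
    return alpha(n) * v / (n + eps)    # eps guards against division by zero

# Example: a batch of 3D vector features of shape (batch, channels, 3).
v = torch.randn(8, 16, 3)
out = norm_nonlinearity(v)             # same shape, directions kept, norms rescaled
```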
“…This is directly relevant for instance in the case of spherical signals where G is a rotation group. In practical applications, it was found that equivariance improves per-sample efficiency, reducing the need for data augmentation [1]. For linear models, this has been proven mathematically [2].…”
Section: Introduction
confidence: 97%
“…Novel convolution layers with different equivariances have been proposed so far: group-equivariant convolution networks [12,15], steerable convolution networks and harmonic networks for rotation equivariance, scale equivariance [23,47], and permutation equivariance [44]. Such convolution layers, equipped with various types of equivariance, have been shown to improve performance in tracking [41], classification, trajectory prediction [46], segmentation [30], and image generation [14]. However, these methods have never been applied to visual navigation.…”
Section: Equivariance and Invariance
confidence: 99%
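The layer families listed in this statement share one core idea: convolve with transformed copies of each filter so that transforming the input permutes the corresponding responses. As a concrete illustration, here is a minimal sketch of a lifting convolution for the cyclic rotation group C4 in PyTorch; this is a generic textbook construction under assumed shapes, not the architecture of any cited work:

```python
import torch
import torch.nn.functional as F

def c4_lifting_conv(x: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    # x: (B, C_in, H, W) image; weight: (C_out, C_in, k, k) with odd k.
    # Convolving with all four 90-degree rotations of the filter lifts the
    # signal to Z^2 x C4: rotating the input by 90 degrees rotates the
    # output spatially and cyclically shifts the group axis.
    k = weight.shape[-1]
    outs = [F.conv2d(x, torch.rot90(weight, r, dims=(-2, -1)), padding=k // 2)
            for r in range(4)]
    return torch.stack(outs, dim=2)    # (B, C_out, 4, H, W)

# Example usage with random weights:
x = torch.randn(1, 3, 32, 32)
w = torch.randn(8, 3, 3, 3)
y = c4_lifting_conv(x, w)              # shape (1, 8, 4, 32, 32)
```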