2021
DOI: 10.48550/arxiv.2106.06610
Preprint

Scalars are universal: Equivariant machine learning, structured like classical physics

Abstract: There has been enormous progress in the last few years in designing conceivable (though not always practical) neural networks that respect the gauge symmetries, or coordinate freedom, of physical law. Some of these frameworks make use of irreducible representations, some make use of higher-order tensor objects, and some apply symmetry-enforcing constraints. Different physical laws obey different combinations of fundamental symmetries, but a large fraction (possibly all) of classical physics is equivariant to tra…
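The paper's central construction is that equivariant functions of vector inputs can be parametrized entirely by invariant scalars (norms and inner products), with vector outputs written as scalar-weighted sums of the input vectors. The following is a minimal NumPy sketch of that idea, not code from the paper; `scalar_fn` is a hypothetical placeholder for any learned function of the invariants (e.g., an MLP).

```python
import numpy as np

def invariant_scalars(vectors):
    """All pairwise inner products <v_i, v_j>: O(d)-invariant scalars
    computed from a list of input vectors."""
    V = np.asarray(vectors)                # shape (n, d)
    gram = V @ V.T                         # Gram matrix of inner products
    return gram[np.triu_indices(len(V))]  # unique entries; diagonal holds squared norms

def equivariant_vector_output(vectors, scalar_fn):
    """O(d)-equivariant vector-valued function: the output is a
    scalar-weighted sum of the inputs, sum_i f_i(invariants) * v_i."""
    V = np.asarray(vectors)
    weights = scalar_fn(invariant_scalars(V))  # one scalar weight per input vector
    return weights @ V
```

Because the weights depend only on inner products, which any rotation or reflection leaves unchanged, rotating the inputs rotates the output identically; this is the equivariance property the abstract refers to.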

Cited by 6 publications (4 citation statements); references 53 publications (75 reference statements).
“…Symmetry under rotations can be built by using as features the distance and the scalar products of the directions of two vectors defining the edge (Villar et al 2021). Let us define the unit vectors…”
Section: Edge Features
confidence: 99%
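The excerpt above truncates before defining its unit vectors, so the following sketch is only a plausible reading of the construction it describes: rotation-invariant edge features built from the node distance and dot products among the edge direction and two node-attached unit vectors. The names `n_i` and `n_j` are assumptions, not taken from the cited work.

```python
import numpy as np

def edge_features(x_i, x_j, n_i, n_j):
    """Rotation-invariant features for an edge between nodes at x_i and x_j.
    n_i and n_j are unit vectors attached to the two nodes (assumed inputs);
    every feature is a distance or a dot product, hence invariant."""
    r = x_j - x_i
    dist = np.linalg.norm(r)
    u = r / dist                 # unit vector along the edge direction
    return np.array([
        dist,        # node separation
        u @ n_i,     # cosine of the angle between the edge and n_i
        u @ n_j,     # cosine of the angle between the edge and n_j
        n_i @ n_j,   # relative orientation of the two node vectors
    ])
```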
“…While finalizing this work, we were made aware of the concurrent papers [37,46], with an approach that is related to ours. In fact, Proposition 10 of [37] is similar to our Theorem 2 but for the group O(2) instead of SO(2) (in fact, they deal with a d-dimensional underlying space and the group O(d)). In particular, we make a more thorough description and analysis of neural network architectures.…”
Section: Related Work
confidence: 99%