2021
DOI: 10.1088/2632-2153/ac2c5d

Categorical representation learning: morphism is all you need

Abstract: We provide a construction for categorical representation learning and introduce the foundations of ‘categorifier’. The central theme in representation learning is the idea of everything to vector. Every object in a dataset S can be represented as a vector in …
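The truncated abstract points to the paper's central move: every object in a dataset is sent to a vector, and the morphisms (relations between objects) carry the structure that the representation must respect. The sketch below is a minimal illustration of that idea under stated assumptions, not the paper's actual construction: the toy objects, the single morphism type, the embedding dimension, and the closed-form least-squares fit are all hypothetical stand-ins for what the paper would learn jointly with a neural encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dataset: objects and pairs related by one morphism type,
# e.g. "is the young form of". None of these names come from the paper.
objects = ["cat", "kitten", "dog", "puppy"]
pairs = [("kitten", "cat"), ("puppy", "dog")]

dim = 8
# "Everything to vector": every object is represented as a vector.
vec = {o: rng.normal(size=dim) for o in objects}

# Represent the morphism type as a linear map M with M @ vec(source) ≈ vec(target).
A = np.stack([vec[a] for a, _ in pairs])   # source vectors, shape (n_pairs, dim)
B = np.stack([vec[b] for _, b in pairs])   # target vectors, shape (n_pairs, dim)
X, *_ = np.linalg.lstsq(A, B, rcond=None)  # least-squares fit of A @ X ≈ B
M = X.T                                    # so that M @ vec(a) ≈ vec(b)

# Sanity check: the fitted map should carry "kitten" close to "cat".
pred = M @ vec["kitten"]
cos = pred @ vec["cat"] / (np.linalg.norm(pred) * np.linalg.norm(vec["cat"]))
print(f"cosine(M @ kitten, cat) = {cos:.3f}")
```

Here the object vectors are fixed and only the morphism map is fitted, purely to show the shape of the representation; in the paper's setting both would be trained together from data.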

Cited by 5 publications (4 citation statements)
References 3 publications (3 reference statements)
“…In AWIE, the authors combined isometric embedding and attribute weighting, effectively mitigating dimensionality expansion and improving classification performance. Another novel approach to categorical representation learning was termed the 'categorifier' by [39]. The proposed solution addressed the challenge of improving representation learning beyond traditional set-theoretic methods.…”
Section: A. Embedding Techniques
mentioning
confidence: 99%
“…This would yield a diagram of the form [diagram omitted]. This approach can also be applied to more complex datasets than the ones we have studied in this paper, using the categorical language and adapting the dissimilarity metrics. One example of this can be found in [SY21]. One thing to note about [SY21] is that the authors assume we have a vector representation of a certain category C. The downside of this approach is that one does not have a way to check how biased the vector representation is.…”
Section: Future Work
mentioning
confidence: 99%
“…One example of this can be found in [SY21]. One thing to note about [SY21] is that the authors assume we have a vector representation of a certain category C. The downside of this approach is that one does not have a way to check how biased the vector representation is. With that in mind, one could apply this method to the semantic dataset and have metrics of how biased, or how much bias is tolerated by, the algorithms.…”
Section: Future Work
mentioning
confidence: 99%
“…Following the earlier formalism of categorical representation learning [25] by the first two authors, we discuss the construction of the "RG-flow based categorifier". Borrowing ideas from the theory of renormalization group (RG) flows in quantum field theory, holographic duality, and hyperbolic geometry, and mixing them with neural ODEs, we construct a new algorithmic natural language processing (NLP) architecture, called the RG-flow categorifier or, for short, the RG categorifier, which is capable of data classification and generation in all layers.…”
mentioning
confidence: 99%
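The last excerpt describes the follow-up "RG-flow categorifier", which mixes renormalization-group intuition with neural ODEs. As a rough illustration only, and not the authors' architecture, the sketch below pushes an embedding vector along a continuous flow integrated with forward Euler; the vector field, its weights, the flow time standing in for RG scale, and the token embedding are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

dim = 8
# Hypothetical parameters of a vector field f(x) = W2 @ tanh(W1 @ x);
# in a real model these would be trained, here they are random stand-ins.
W1 = rng.normal(size=(dim, dim)) / np.sqrt(dim)
W2 = rng.normal(size=(dim, dim)) / np.sqrt(dim)

def vector_field(x: np.ndarray, t: float) -> np.ndarray:
    """Velocity of the embedding at flow 'time' t (t unused in this simple field)."""
    return W2 @ np.tanh(W1 @ x)

def flow(x0: np.ndarray, t0: float = 0.0, t1: float = 1.0, steps: int = 100) -> np.ndarray:
    """Integrate dx/dt = f(x, t) with forward Euler, pushing an embedding along the flow."""
    x, dt = x0.copy(), (t1 - t0) / steps
    for k in range(steps):
        x = x + dt * vector_field(x, t0 + k * dt)
    return x

token_embedding = rng.normal(size=dim)   # stand-in for a token/object vector
coarse_grained = flow(token_embedding)   # representation after the continuous flow
print("norm before:", np.linalg.norm(token_embedding), "after:", np.linalg.norm(coarse_grained))
```

A neural-ODE solver would replace the fixed-step Euler loop above, and the flow time would be tied to the scale hierarchy the excerpt alludes to; this sketch only shows the mechanics of evolving an embedding along a learned flow.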