2022
DOI: 10.48550/arxiv.2201.12204
Preprint

From data to functa: Your data point is a function and you can treat it like one

Abstract: It is common practice in deep learning to represent a measurement of the world on a discrete grid, e.g. a 2D grid of pixels. However, the underlying signal represented by these measurements is often continuous, e.g. the scene depicted in an image. A powerful continuous alternative is then to represent these measurements using an implicit neural representation, a neural function trained to output the appropriate measurement value for any input spatial location. In this paper, we take this idea to its next level…
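As a rough illustration of the idea in the abstract (not the authors' code), the sketch below fits a small coordinate MLP to a single image so that the image is represented by a continuous function f(x, y) → (r, g, b). The ReLU network, layer sizes, and training loop are illustrative assumptions; work in this area typically uses sinusoidal (SIREN-style) activations instead.

```python
# Minimal sketch: represent one image as a continuous function f(x, y) -> RGB.
import torch
import torch.nn as nn

class ImplicitImage(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, coords):           # coords: (N, 2) in [-1, 1]
        return self.net(coords)          # returns (N, 3) RGB values

# Dummy target image and its pixel-centre coordinate grid.
H = W = 32
image = torch.rand(H, W, 3)
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
targets = image.reshape(-1, 3)

model = ImplicitImage()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):                  # per-signal optimisation loop
    loss = ((model(coords) - targets) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```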

Cited by 6 publications (17 citation statements) · References 39 publications
“…Of particular importance to the remainder of our discussion around compression is the recent observation that signals can be accurately learned using merely data-item-specific modulations to a shared base network (Perez et al, 2018; Mehta et al, 2021; Dupont et al, 2022a). Specifically, in the forward pass of a network, each layer l represents the transformation x → f(W^(l) x + b^(l) + m^(l)), where {W^(l), b^(l)} are weights and biases shared between signals, with only the modulations m^(l) being specific to each signal.…”
Section: Implicit Neural Representations
confidence: 99%
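To make the quoted modulation scheme concrete, here is a minimal sketch of a shift-modulated MLP in which the weights W^(l) and biases b^(l) are shared across all signals and only the modulations m^(l) are specific to each signal. The class name, layer count, and ReLU nonlinearity are assumptions for illustration, not the cited implementation.

```python
# Minimal sketch of a shift-modulated network: each layer computes
# f(W x + b + m), with W and b shared and m specific to one signal.
import torch
import torch.nn as nn

class ModulatedMLP(nn.Module):
    def __init__(self, in_dim=2, hidden=256, out_dim=3, layers=3):
        super().__init__()
        self.shared = nn.ModuleList(
            [nn.Linear(in_dim if i == 0 else hidden, hidden) for i in range(layers)])
        self.out = nn.Linear(hidden, out_dim)
        self.hidden = hidden
        self.layers = layers

    def forward(self, x, modulations):
        # modulations: one per-signal shift vector of shape (hidden,) per layer
        for layer, m in zip(self.shared, modulations):
            x = torch.relu(layer(x) + m)   # f(W x + b + m)
        return self.out(x)

net = ModulatedMLP()
coords = torch.rand(1024, 2)
# A single signal is now represented only by its modulation vectors.
mods = [torch.zeros(net.hidden, requires_grad=True) for _ in range(net.layers)]
rgb = net(coords, mods)                    # (1024, 3)
```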
“…It is worth pointing out that the minimisation of Equation (1) is extraordinarily expensive: Learning a single NeRF scene can take up to an entire day on a single GPU (Dupont et al, 2022a); even the compression of a single low-dimensional image requires thousands of iterative optimisation steps. Fortunately, we need not resort to tabula-rasa optimisation for each data item in turn: In recent years, developments in Meta-Learning (e.g.…
Section: Meta-learning
confidence: 99%
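The meta-learning point in the quoted statement can be sketched as a small inner/outer loop: shared weights are meta-trained so that a few gradient steps on a per-signal modulation suffice to fit a new signal, instead of thousands of from-scratch iterations. Everything below (a single modulated hidden layer, the 3-step inner loop, the MSE loss) is an illustrative assumption, not the cited method.

```python
# Minimal sketch of meta-learning modulations: the inner loop adapts only the
# per-signal modulation for a few steps; the outer loop updates shared weights.
import torch
import torch.nn as nn

hidden = 64
layer1 = nn.Linear(2, hidden)        # shared across all signals
layer2 = nn.Linear(hidden, 3)        # shared across all signals
shared_params = list(layer1.parameters()) + list(layer2.parameters())

def forward(coords, m):
    # One modulated hidden layer: f(W x + b + m), m is the per-signal shift.
    return layer2(torch.relu(layer1(coords) + m))

def fit_modulation(coords, targets, inner_steps=3, inner_lr=1e-2):
    """Adapt only the per-signal modulation; shared weights stay fixed."""
    m = torch.zeros(hidden, requires_grad=True)
    for _ in range(inner_steps):
        loss = ((forward(coords, m) - targets) ** 2).mean()
        (grad,) = torch.autograd.grad(loss, m, create_graph=True)
        m = m - inner_lr * grad
    return m

meta_opt = torch.optim.Adam(shared_params, lr=1e-4)
for _ in range(10):                      # outer loop over a batch of signals
    coords = torch.rand(256, 2)
    targets = torch.rand(256, 3)         # dummy signal; real data goes here
    m = fit_modulation(coords, targets)
    meta_loss = ((forward(coords, m) - targets) ** 2).mean()
    meta_opt.zero_grad()
    meta_loss.backward()                 # differentiates through the inner loop
    meta_opt.step()
```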