2019
DOI: 10.1561/9781680836233
An Introduction to Variational Autoencoders

Cited by 1,419 publications (2,008 citation statements). References: 0 publications.
“…where q_i is the model of the mean number of coincidences in the i-th line of response (LOR) (or sinogram bin). Next, it is necessary to define an objective function which indicates how well the parameters x of the model for (1) correspond to the actual measured data, modeled by (2) and (3). The goal of image reconstruction is then to find the parameter vector x for (1) which, when forward modeled with (2), best agrees with the acquired noisy measured data (3), according to a chosen objective (or cost) function as follows:…”
Section: A. Basic Principles (mentioning; confidence: 99%)
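
The excerpt's equation is cut off, so it is left elided above. For PET data, a common choice of objective is the negative Poisson log-likelihood of the measured counts under a linear forward model q(x) = Ax. A minimal sketch, assuming that standard choice (A, x, and y below are toy stand-ins, not the cited paper's data):

import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(size=(50, 20))   # system matrix mapping image to LORs / sinogram bins
x = rng.uniform(size=20)         # parameter vector (the image)
y = rng.poisson(A @ x)           # noisy measured coincidence counts

def neg_poisson_loglik(x, A, y, eps=1e-12):
    """Negative Poisson log-likelihood of y given q(x) = A @ x (constants dropped)."""
    q = A @ x                    # q_i: mean counts in the i-th LOR
    return np.sum(q - y * np.log(q + eps))

print(neg_poisson_loglik(x, A, y))

Reconstruction then amounts to minimizing this objective over x, e.g. with MLEM-style updates or a gradient-based solver.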
“…In the training datasets, for the case of supervised learning, example inputs are paired with their corresponding desired outputs. For unsupervised learning, the training data may consist of example inputs only (for learning latent representations of the data [3]), or of unpaired example inputs and example outputs [4]. A further category, that of self-supervised learning [5], [6], needs only input data examples and instructions on how to create labels (rather than the labels themselves), thus reducing the need for human interaction with the learning process.…”
Section: Introduction (mentioning; confidence: 99%)
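
The three dataset structures this excerpt describes are easy to make concrete. A minimal sketch (the data and the pretext task are hypothetical, chosen only for illustration):

import numpy as np

rng = np.random.default_rng(0)
inputs = rng.normal(size=(100, 16))                 # example inputs

# Supervised: inputs paired with their corresponding desired outputs.
outputs = rng.integers(0, 2, size=100)
supervised_set = list(zip(inputs, outputs))

# Unsupervised: example inputs only; a model such as a VAE learns a
# latent representation of the data without any targets.
unsupervised_set = inputs

# Self-supervised: labels are created from the inputs by a rule
# (a toy pretext task here: is the input's mean positive?).
pretext_labels = (inputs.mean(axis=1) > 0).astype(int)
self_supervised_set = list(zip(inputs, pretext_labels))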
“…They were then suggested by Dimmick et al. [9] as tools for training super-resolution networks, by using the features extracted by passing Hi-C data through a trained autoencoder as a loss function. In this manuscript we expand upon this strategy, but replace their network with a different flavor of network called the variational autoencoder [11].…”
Section: (Eq. 1) (mentioning; confidence: 99%)
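
The loss this excerpt describes, comparing maps in the feature space of a trained (variational) autoencoder rather than in pixel space, can be sketched as follows. This is a hedged illustration: the toy encoder and the name feature_loss are mine, not the cited paper's code.

import torch
import torch.nn as nn

# Toy stand-in for the encoder of a trained (variational) autoencoder.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 32))
for p in encoder.parameters():
    p.requires_grad_(False)      # the trained encoder stays frozen

def feature_loss(prediction, target):
    """MSE between encoder features of the predicted and target maps."""
    return torch.mean((encoder(prediction) - encoder(target)) ** 2)

# Usage: score a super-resolved contact map against its high-res target.
pred = torch.rand(1, 1, 64, 64, requires_grad=True)
hi_res = torch.rand(1, 1, 64, 64)
feature_loss(pred, hi_res).backward()   # gradients flow to the prediction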
“…While previous work often split chromosomes into training, validation and testing sets in a sequential manner [8], [9], we were concerned that differences in the 3D conformation of large vs. small chromosomes [16] may contain implicit bias in contact map features that could confound training. Consequently, we assembled training, validation and test sets in a non-sequential manner, using chromosomes 1, 3, 5, 6, 7, 9, 11, 12, 13, 15, 17, 18, 19 and 21 as our training set, chromosomes 2, 8, 10 and 22 as our validation set, and chromosomes 4, 14, 16 and 20 as our test set.…”
Section: Dataset Assembly (mentioning; confidence: 99%)
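
The chromosome-level split above is effectively a configuration; written out with a quick sanity check (the set names are mine):

train_chroms = {1, 3, 5, 6, 7, 9, 11, 12, 13, 15, 17, 18, 19, 21}
val_chroms = {2, 8, 10, 22}
test_chroms = {4, 14, 16, 20}

# The three sets must be disjoint and together cover chromosomes 1-22.
assert train_chroms.isdisjoint(val_chroms | test_chroms)
assert val_chroms.isdisjoint(test_chroms)
assert train_chroms | val_chroms | test_chroms == set(range(1, 23))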
“…A specific type of black-box model that is targeted at capturing the physics of a specific problem is the generative model. More specifically, Generative Adversarial Networks (GANs) [7] and Variational Autoencoders (VAEs) [8] are neural networks that learn how to generate data that look like reality. It is believed that, by studying these types of neural networks and the way they produce data, further understanding of the underlying problems may be achieved.…”
Section: How Much Useful (Safe) Life Remains (Prognosis)? (mentioning; confidence: 99%)
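
For context, the objective that makes a VAE [8] a generative model is the evidence lower bound (ELBO) on the data log-likelihood, maximized jointly over the encoder q_φ(z|x) and the decoder p_θ(x|z); in the standard notation of the cited monograph:

\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p(z)\big)

Generating new data then amounts to drawing z from the prior p(z) and decoding it with p_θ(x|z).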