2019
DOI: 10.1093/imaiai/iaz022

On the S-instability and degeneracy of discrete deep learning models

Abstract: A probability model exhibits instability if small changes in a data outcome result in large, and often unanticipated, changes in probability. This instability is a property of the probability model itself, specified by a distributional form and a particular configuration of parameters. For correlated data structures found in several application areas, there is increasing interest in identifying such sensitivity in model probability structure. We consider the problem of quantifying instability for general probability models de…

Cited by 5 publications (6 citation statements)
References 12 publications (30 reference statements)
“…In other words, S‐unstable RBM model sequences are guaranteed to stack up all probability on a specific set of outcomes for visibles, which potentially could be arbitrarily narrow. Proofs of Propositions 1 and 2 follow from more general results in Kaplan et al. (2017). These findings also have counterparts in results in Schweinberger (2011) but, unlike the results there, are not limited to exponential family forms with a fixed number of parameters.…”
Section: Introduction (supporting)
confidence: 55%
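The concentration behavior described in this excerpt can be seen numerically in a toy model. A minimal sketch, assuming a small Bernoulli RBM whose visible marginal is computed by brute-force enumeration; the parameter sizes and the scaling sequence are illustrative assumptions, not the paper's construction:

```python
import itertools
import numpy as np

def rbm_visible_marginal(W, b, c):
    """Enumerate p(v) for a tiny Bernoulli RBM with weight matrix W (n_v x n_h),
    visible bias b, and hidden bias c, by brute-force summation over hiddens h."""
    n_v, n_h = W.shape
    visibles = list(itertools.product([0, 1], repeat=n_v))
    hiddens = list(itertools.product([0, 1], repeat=n_h))
    # Unnormalized marginal: sum_h exp(v' W h + b' v + c' h)
    probs = np.array([
        sum(np.exp(np.asarray(v) @ W @ np.asarray(h)
                   + b @ np.asarray(v) + c @ np.asarray(h))
            for h in hiddens)
        for v in visibles
    ])
    return visibles, probs / probs.sum()

rng = np.random.default_rng(0)
W0 = rng.normal(size=(4, 3))   # fixed base parameter configuration
b0 = rng.normal(size=4)
c0 = rng.normal(size=3)

# Scale the parameters up along a model "sequence" and watch the
# visible marginal concentrate.
for scale in [1, 2, 5, 10]:
    v, p = rbm_visible_marginal(scale * W0, scale * b0, scale * c0)
    print(f"scale={scale:3d}  max p(v) = {p.max():.4f}")
```

For a generic draw of parameters, max p(v) climbs toward 1 as the scale grows: the sequence of models piles essentially all probability on a single visible outcome, mirroring the narrow-support behavior the excerpt describes.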
“…That is, we consider 𝝁 = z_{j*}, where j* = arg max{n_1, n_2, …, n_{2^p}} (form (10)). To develop an estimator for the variability parameter 𝛼, temporarily fix the central pattern 𝝁 in form (9). Let m_i be the number of pixel flips that x_i is away from 𝝁.…”
Section: Likelihood-based Inference in a One-sample Problem (mentioning)
confidence: 99%
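The central-pattern estimator in this excerpt (take 𝝁 to be the most frequently observed pattern, then count pixel flips m_i for each observation) is straightforward to sketch. A minimal version, where the toy data and variable names are illustrative assumptions rather than anything from the citing paper:

```python
import numpy as np

# Toy sample of n binary images, each with p pixels (rows are observations x_i).
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(50, 6))

# Estimate the central pattern as the most frequent observed pattern:
# mu = z_{j*} with j* = argmax over the pattern counts n_1, ..., n_{2^p}.
# (Restricting the argmax to observed patterns is equivalent, since any
# unobserved pattern has count zero.)
patterns, counts = np.unique(X, axis=0, return_counts=True)
mu_hat = patterns[np.argmax(counts)]

# m_i = number of pixel flips (Hamming distance) between x_i and the estimate.
m = (X != mu_hat).sum(axis=1)
print("estimated central pattern:", mu_hat)
print("first few Hamming distances m_i:", m[:5])
```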
“…Note that in the first fraction in display (13) the denominator includes a single term and the numerator is a sum over at most p nonzero terms for a choice of 𝝁. Then, in light of forms (10) and (13), one (crude) estimator for 𝛼 is α̂(𝝁̂).…”
Section: Likelihood-based Inference in a One-sample Problem (mentioning)
confidence: 99%
“…In other words, S-unstable RBM model sequences are guaranteed to stack up all probability on a specific set of outcomes for visibles, which could potentially be arbitrarily narrow. Proofs of Propositions 1 and 2 appear in Kaplan, Nordman, and Vardeman (2017). These findings also have counterparts in results in Schweinberger (2011), but unlike results there, we do not limit consideration to exponential family forms with a fixed number of parameters.…”
Section: Instability (mentioning)
confidence: 51%