2019
DOI: 10.48550/arxiv.1910.12263
Preprint

Prior specification via prior predictive matching: Poisson matrix factorization and beyond

Cited by 2 publications (2 citation statements)
References 0 publications
“…The optimisation problem of minimising Equation (1) is often underspecified. Specifically, there are many optimal values λ* that yield values of D(λ* | X) that are practically indistinguishable (noted by da Silva et al., 2019), and yet the prior distributions P(θ | λ*, X) and the corresponding marginals for components of θ can differ immensely. In terms of our desiderata, there are many equally faithful priors (which immediately implies a lack of uniqueness), thus we have an optimisation problem with solutions that are difficult to replicate due to nonuniqueness.…”
Section: Regularising Estimates Of λ* By Promoting the Marginal Stan...
Citation type: mentioning, confidence: 99%
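
The non-uniqueness described in this statement can be illustrated with a small numerical sketch. This is only an illustration under assumed settings, not the procedure from either paper: it assumes a Gamma-Poisson matrix factorization with shape/rate-parameterized Gamma priors and matches only the prior predictive mean and variance, showing two hyperparameter settings that yield the same prior predictive moments while implying very different marginal priors on θ.

```python
import numpy as np
from scipy import stats

K = 10  # number of latent factors (assumed for illustration)

def prior_predictive_moments(a, b, c, d, K=K):
    """Prior predictive mean and variance of one count y_ij under a Gamma-Poisson
    matrix factorization: theta_ik ~ Gamma(a, b), beta_jk ~ Gamma(c, d) (shape/rate),
    y_ij | theta, beta ~ Poisson(sum_k theta_ik * beta_jk)."""
    m = (a / b) * (c / d)                                     # E[theta * beta] for one factor
    v = (a * (a + 1) / b**2) * (c * (c + 1) / d**2) - m**2    # Var[theta * beta]
    mean = K * m
    var = K * m + K * v                                       # law of total variance
    return mean, var

# Two hyperparameter settings (a, b, c, d) that match the same prior predictive moments ...
lam1 = (0.2, 2.4, 1.2, 1.0)
lam2 = (1.0, 20.0 / 9.0, 1.0 / 4.5, 1.0)
print(prior_predictive_moments(*lam1))  # (1.0, 2.0)
print(prior_predictive_moments(*lam2))  # (1.0, 2.0)

# ... yet the implied marginal priors on theta differ substantially (scipy uses scale = 1/rate):
print(stats.gamma(a=0.2, scale=1 / 2.4).mean(), stats.gamma(a=0.2, scale=1 / 2.4).std())
print(stats.gamma(a=1.0, scale=9.0 / 20.0).mean(), stats.gamma(a=1.0, scale=9.0 / 20.0).std())
```

With only two prior predictive moments constraining four hyperparameters, many such settings are equally "faithful", which is exactly the replication difficulty the statement points to.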
“…Conventional probabilistic factor models [26], [27], [28], [29] construct the likelihood function over the rating matrix. However, in our model, the likelihood function is given over only observed user-item interactions, and the benefit is that we can incorporate additional information into the model, such as biases and attributes of a user.…”
Section: Introduction
Citation type: mentioning, confidence: 99%
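
For contrast, here is a minimal sketch of the distinction drawn in this statement, with hypothetical names and a Gaussian observation model chosen purely for illustration (the citing work does not necessarily use one): the log-likelihood is accumulated over observed (user, item, rating) triples only, rather than over every cell of a dense rating matrix, which is what lets bias terms and other side information enter the predictor.

```python
import numpy as np

def log_likelihood_observed(U, V, b_user, b_item, mu, interactions, sigma=1.0):
    """Gaussian log-likelihood summed over observed (user, item, rating) triples only.
    Bias terms b_user, b_item and the global offset mu illustrate how additional
    information can be folded into the predictor."""
    ll = 0.0
    for u, i, r in interactions:
        pred = mu + b_user[u] + b_item[i] + U[u] @ V[i]
        ll += -0.5 * ((r - pred) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))
    return ll

# Toy usage: 3 users, 4 items, 2 latent dimensions, 3 observed interactions.
rng = np.random.default_rng(0)
U, V = rng.normal(size=(3, 2)), rng.normal(size=(4, 2))
b_user, b_item, mu = np.zeros(3), np.zeros(4), 3.0
interactions = [(0, 1, 4.0), (2, 3, 2.5), (1, 0, 5.0)]
print(log_likelihood_observed(U, V, b_user, b_item, mu, interactions))
```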