2021
DOI: 10.1214/20-ba1197

Centered Partition Processes: Informative Priors for Clustering (with Discussion)

Abstract: There is a rich literature proposing Bayesian approaches to clustering that start from a prior probability distribution on partitions. Most approaches assume exchangeability, leading to simple representations in terms of Exchangeable Partition Probability Functions (EPPF). Gibbs-type priors encompass a broad class of such cases, including Dirichlet and Pitman-Yor processes. Even though there have been some proposals to relax the exchangeability assumption, allowing covariate-dependence and partial exchangea…
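To make the EPPF representation concrete, the Dirichlet process special case corresponds to the Chinese restaurant process predictive scheme: item i+1 joins an existing cluster of size n_k with probability n_k/(α + i) and opens a new cluster with probability α/(α + i). The following is a minimal illustrative sketch, not code from the paper; the function name and defaults are ours.

```python
import numpy as np

def crp_partition(n, alpha, rng=None):
    """Sample a partition of n items from the Chinese restaurant process,
    the predictive scheme induced by a Dirichlet process prior.

    Item i+1 joins an existing cluster of size n_k with probability
    n_k / (alpha + i) and opens a new cluster with probability
    alpha / (alpha + i).
    """
    rng = np.random.default_rng(rng)
    labels = [0]        # the first item starts the first cluster
    sizes = [1]         # current cluster sizes n_k
    for i in range(1, n):
        probs = np.array(sizes + [alpha], dtype=float) / (alpha + i)
        k = rng.choice(len(probs), p=probs)
        if k == len(sizes):     # open a new cluster
            sizes.append(1)
        else:                   # join existing cluster k
            sizes[k] += 1
        labels.append(k)
    return labels

# Larger alpha favours more clusters; the "rich get richer" effect makes
# large clusters more likely to attract the next item.
print(crp_partition(20, alpha=1.0, rng=0))
```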


Cited by 9 publications (3 citation statements). References 52 publications (48 reference statements).
“…Other proposals [112–114] aim to mitigate the rich-get-richer property of the DP, which favours highly imbalanced clusters; however, exchangeability often no longer holds. Subjective priors can also be specified which further enrich the parameter space by centring around prior information on the clustering structure [115,116]. In general, a BNP prior can be placed directly on the mixing measure H, which induces a prior on both the sequence of weights and the random partition.…”
Section: Bayesian Cluster Analysis (mentioning; confidence: 99%)
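The centering idea referenced in [115,116] can be sketched as exponential tilting of a baseline EPPF toward a prior guess ρ_0, i.e. p(ρ) ∝ p_0(ρ) exp(−ψ d(ρ, ρ_0)). This form, the Binder-style pairwise distance, and the brute-force enumeration below (feasible only for very small n) are our assumptions for illustration, not code from the paper.

```python
import numpy as np
from itertools import combinations
from math import factorial

def set_partitions(items):
    """Enumerate all partitions of a small list of items."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for smaller in set_partitions(rest):
        for i, block in enumerate(smaller):   # put `first` in an existing block
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        yield [[first]] + smaller             # or in its own block

def dp_eppf(part, alpha):
    """Dirichlet process EPPF: alpha^K * prod_k (n_k - 1)! / (alpha)_n."""
    n = sum(len(b) for b in part)
    rising = np.prod([alpha + j for j in range(n)])  # rising factorial (alpha)_n
    return alpha ** len(part) * np.prod([factorial(len(b) - 1) for b in part]) / rising

def binder_distance(p, q, n):
    """Pairwise-disagreement (Binder-style) distance between two partitions."""
    def coclustered(part):
        label = {i: k for k, block in enumerate(part) for i in block}
        return {(i, j): label[i] == label[j] for i, j in combinations(range(n), 2)}
    a, b = coclustered(p), coclustered(q)
    return sum(a[pair] != b[pair] for pair in a)

# Tilt the base EPPF toward a prior guess rho0 and renormalize.
n, alpha, psi = 5, 1.0, 2.0
rho0 = [[0, 1, 2], [3, 4]]                       # hypothetical prior guess
parts = list(set_partitions(list(range(n))))
weights = np.array([dp_eppf(p, alpha) * np.exp(-psi * binder_distance(p, rho0, n))
                    for p in parts])
weights /= weights.sum()                         # centered-prior probabilities
print(parts[np.argmax(weights)])                 # mode is pulled toward rho0
```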
“…As a second step, to elicit a value for η_0, we would reason by calibration, in the spirit of Paganin et al. (2021). Consider the prior expectation f(η_0, ρ_0) := E_{π(ρ)}…”
Section: Elicitation of the Hyper-parameters (mentioning; confidence: 99%)
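The quoted expectation is truncated, so the exact functional f is not recoverable from this snippet. The sketch below illustrates calibration generically: simulate the prior over a grid of candidate η_0 values and pick the one whose prior expectation matches a target. The CRP(η) stand-in prior, the expected-number-of-clusters functional, the grid, and the target are all illustrative assumptions, not from the cited paper.

```python
import numpy as np

def expected_num_clusters(eta, n, draws=2000, rng=None):
    """Monte Carlo estimate of the prior expected number of clusters for
    n items under a CRP(eta) prior (a stand-in for the truncated f)."""
    rng = np.random.default_rng(rng)
    total = 0
    for _ in range(draws):
        sizes = [1]
        for i in range(1, n):
            probs = np.array(sizes + [eta]) / (eta + i)
            k = rng.choice(len(probs), p=probs)
            if k == len(sizes):
                sizes.append(1)
            else:
                sizes[k] += 1
        total += len(sizes)
    return total / draws

# Calibration: pick the eta_0 whose prior expectation matches a target.
target, n = 4.0, 50
grid = np.linspace(0.1, 5.0, 25)
values = np.array([expected_num_clusters(eta, n, rng=0) for eta in grid])
eta_0 = grid[np.argmin(np.abs(values - target))]
print(f"calibrated eta_0 ~ {eta_0:.2f}")
```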
“…, ξ_G) and each column of Λ_j. We remark that if certain prior knowledge about the variable groups is available for the data, then it is also possible to employ informative priors such as those in Paganin et al. (2021) for the s_j's. For the Dirichlet parameters α, defining α_0 = Σ_{k=1}^K α_k and η = (α_1/α_0, …”
Section: Bayesian Inference (mentioning; confidence: 99%)
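The reparameterization at the end of the quote splits the Dirichlet parameter vector into a total concentration α_0 and a simplex-valued mean η; a tiny sketch with illustrative values:

```python
import numpy as np

# Split a Dirichlet parameter vector alpha = (alpha_1, ..., alpha_K) into a
# total concentration alpha_0 = sum_k alpha_k and a simplex-valued mean
# eta = (alpha_1 / alpha_0, ..., alpha_K / alpha_0).
alpha = np.array([2.0, 1.0, 0.5])     # illustrative values, not from the paper
alpha_0 = alpha.sum()
eta = alpha / alpha_0
assert np.isclose(eta.sum(), 1.0)     # eta lies on the probability simplex
print(alpha_0, eta)
```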