2022
DOI: 10.1214/21-ba1279
The Attraction Indian Buffet Distribution

Abstract: We propose the attraction Indian buffet distribution (AIBD), a distribution for binary feature matrices influenced by pairwise similarity information. Binary feature matrices are used in Bayesian models to uncover latent variables (i.e., features) that explain observed data. The Indian buffet process (IBP) is a popular exchangeable prior distribution for latent feature matrices. In the presence of additional information, however, the exchangeability assumption is not reasonable or desirable. The AIBD can incor…
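For readers unfamiliar with the baseline model in the abstract, below is a minimal sketch of sampling a binary feature matrix from the standard (exchangeable) Indian buffet process, using the usual culinary-metaphor construction. The function name `sample_ibp` and the mass parameter `alpha` are illustrative; this is not the authors' AIBD sampler, which additionally weights dish choices by pairwise similarity information.

```python
# Sketch: draw one binary feature matrix Z (customers x features) from the
# standard IBP. This is the exchangeable baseline the AIBD generalizes.
import numpy as np

def sample_ibp(n_customers, alpha, rng=None):
    rng = np.random.default_rng(rng)
    features = []  # one list per feature: indices of customers holding it
    for i in range(1, n_customers + 1):
        # Take each existing dish k with probability m_k / i,
        # where m_k is the number of previous customers who took it.
        for dish in features:
            if rng.random() < len(dish) / i:
                dish.append(i - 1)
        # Take Poisson(alpha / i) brand-new dishes.
        for _ in range(rng.poisson(alpha / i)):
            features.append([i - 1])
    # Assemble the binary matrix.
    Z = np.zeros((n_customers, len(features)), dtype=int)
    for k, dish in enumerate(features):
        Z[dish, k] = 1
    return Z

Z = sample_ibp(n_customers=10, alpha=2.0, rng=42)
print(Z.shape, Z.sum(axis=0))  # matrix size and per-feature popularity
```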

Cited by 4 publications (5 citation statements)
References 12 publications
“…Gershman et al (2015) proposed using the maximum a posteriori (MAP) feature allocation in the context of the distance dependent Indian Buffet Process (ddIBP). Warr et al (2021) successfully implemented this method in a classification study of Alzheimer's disease neuroimaging. Xu et al (2015) devised an objective function, similar to a penalized likelihood function, to find the MAP.…”
Section: Zero-one Loss
confidence: 99%
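A minimal sketch of the sample-based MAP idea referenced in the statement above, assuming posterior draws of feature matrices and their (unnormalized) log posterior values are already available from an MCMC run. The names `map_estimate`, `samples`, and `log_posts` are hypothetical, and this is not the ddIBP-specific procedure of Gershman et al. (2015); under zero-one loss the Bayes estimate is the posterior mode, which this approximates by the highest-scoring draw.

```python
# Sketch: approximate the MAP feature allocation from posterior samples.
import numpy as np

def map_estimate(samples, log_posts):
    """Return the sampled feature matrix with the largest log posterior."""
    best = int(np.argmax(log_posts))
    return samples[best]

# Toy usage: three 4x2 binary matrices and their (made-up) log posteriors.
rng = np.random.default_rng(0)
samples = [rng.integers(0, 2, size=(4, 2)) for _ in range(3)]
log_posts = [-12.3, -10.1, -11.7]
print(map_estimate(samples, log_posts))
```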
“…The studies discussed in the previous section included the simulations from the AIBD paper (Warr et al, 2021), the cytometry data model (Lui et al, 2021), and the DFA model for patient-disease relationships (Ni et al, 2020b). The loss functions in these studies were different; the first used zero-one loss, the second used a sum of squares function on the PSM, and the last used Hamming distance.…”
Section: Existing Search Algorithms For Feature Allocations
confidence: 99%
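Hedged sketches of two of the loss functions named in the statement above, under simplifying assumptions: Hamming distance is taken element-wise between equal-shape binary matrices (ignoring column relabeling), and the pairwise similarity matrix (PSM) is taken to be Z Zᵀ, the count of features shared by each pair of items, averaged over posterior samples. The cited papers may define these quantities differently; `hamming_loss` and `psm_sse` are illustrative names.

```python
# Sketch: simple losses for comparing binary feature allocation matrices.
import numpy as np

def hamming_loss(Z_a, Z_b):
    """Number of disagreeing entries between two equal-shape binary matrices."""
    return int(np.sum(Z_a != Z_b))

def psm_sse(Z_candidate, psm_hat):
    """Sum of squared differences between a candidate's pairwise-sharing
    matrix and a posterior-mean PSM estimated from samples."""
    psm_cand = Z_candidate @ Z_candidate.T
    return float(np.sum((psm_cand - psm_hat) ** 2))

# Toy usage: estimate the posterior-mean PSM from sampled matrices.
rng = np.random.default_rng(1)
samples = [rng.integers(0, 2, size=(5, 3)) for _ in range(100)]
psm_hat = np.mean([Z @ Z.T for Z in samples], axis=0)
print(psm_sse(samples[0], psm_hat), hamming_loss(samples[0], samples[1]))
```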