2020
DOI: 10.1093/bioinformatics/btaa478

Privacy-preserving construction of generalized linear mixed model for biomedical computation

Abstract: Motivation: The generalized linear mixed model (GLMM) is an extension of the generalized linear model (GLM) in which the linear predictor takes random effects into account. Given its power to precisely model mixed effects from multiple sources of random variation, the method has been widely used in biomedical computation, for instance in genome-wide association studies (GWASs) that aim to detect genetic variants significantly associated with phenotypes such as human diseases. Colla…
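For reference, the standard GLMM form the abstract alludes to can be written as follows (generic textbook notation, not taken from the paper itself):

```latex
g\!\left(\mathbb{E}[\,y \mid b\,]\right) = X\beta + Zb, \qquad b \sim \mathcal{N}(0,\, G)
```

where g is the link function, Xβ collects the fixed effects, and Zb the random effects with covariance G; setting Z = 0 recovers an ordinary GLM.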


Cited by 24 publications (30 citation statements)
References 32 publications
“…The generalized linear mixed model (GLMM), which takes heterogeneous factors into consideration, is more amenable to accommodating the heterogeneity across healthcare systems. There have been very few studies in this area; one relevant work is a privacy-preserving Bayesian GLMM [8], which proposed an Expectation-Maximization (EM) algorithm to fit the model collaboratively on horizontally partitioned data. The convergence process is relatively slow (due to the Metropolis-Hastings sampling in the E-step) and it is also not very stable (likely to be trapped in local optima [9] in high-dimensional data).…”
Section: Related Work
confidence: 99%
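To make the fitting scheme these excerpts describe concrete, here is a minimal single-site sketch of a Monte Carlo EM loop with a Metropolis-Hastings E-step for a logistic GLMM with one random intercept per group. The toy data, proposal scale, step size, and simplified M-step are all illustrative assumptions, not the federated algorithm of [8]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: logistic GLMM with one random intercept per group.
n_groups, n_per = 10, 50
X = rng.normal(size=(n_groups * n_per, 2))
groups = np.repeat(np.arange(n_groups), n_per)
true_b = rng.normal(size=n_groups)
eta_true = X @ np.array([1.0, -0.5]) + true_b[groups]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta_true)))

def log_joint(beta, b, sigma2):
    """log p(y, b | beta, sigma2) up to an additive constant."""
    eta = X @ beta + b[groups]
    ll = np.sum(y * eta - np.log1p(np.exp(eta)))  # Bernoulli log-likelihood
    lp = -0.5 * np.sum(b ** 2) / sigma2           # Gaussian random-effect term
    return ll + lp

beta, b, sigma2 = np.zeros(2), np.zeros(n_groups), 1.0
for it in range(200):
    # E-step: Metropolis-Hastings sampling of the random effects b.
    for _ in range(20):
        prop = b + rng.normal(scale=0.3, size=n_groups)
        if np.log(rng.uniform()) < log_joint(beta, prop, sigma2) - log_joint(beta, b, sigma2):
            b = prop
    # M-step (simplified here to a single gradient-ascent step on beta
    # and a moment update of the random-effect variance).
    mu = 1.0 / (1.0 + np.exp(-(X @ beta + b[groups])))
    beta = beta + 0.01 * (X.T @ (y - mu))
    sigma2 = float(np.mean(b ** 2))
```

The noisy Metropolis-Hastings E-step is what makes the objective fluctuate between iterations, which is why a tight convergence threshold is hard to satisfy and why the excerpts above flag slow, unstable convergence.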
“…The convergence process is relatively slow (due to the Metropolis-Hastings sampling in the E-step) and it is also not very stable (likely to be trapped in local optima [9] in high-dimensional data). In the experiment, a loose threshold (i.e., 0.08) was used as the convergence condition [8], while typical federated learning algorithms [10] in healthcare use a much more stringent convergence threshold (e.g., 10−6).…”
Section: Related Work
confidence: 99%
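To illustrate the gap between the two thresholds mentioned above (with made-up objective values; the exact stopping rule used in [8] may differ in form), a relative-change criterion with tolerance 0.08 fires long before a 10−6 one would:

```python
def converged(prev_obj: float, cur_obj: float, tol: float) -> bool:
    # Relative-change stopping rule; the criterion used in [8] may differ.
    return abs(cur_obj - prev_obj) <= tol * abs(prev_obj)

trace = [100.0, 94.0, 93.9]  # hypothetical objective values across iterations
for prev_obj, cur_obj in zip(trace, trace[1:]):
    print(converged(prev_obj, cur_obj, 0.08), converged(prev_obj, cur_obj, 1e-6))
# tol=0.08 accepts the very first 6% move as "converged"; tol=1e-6 never fires here.
```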
“…More recently, a few methods have been developed that consider using site-level random effects to account for heterogeneity across sites. For example, Luo et al. proposed a lossless algorithm for the linear mixed model [30], and a few methods have been proposed for GLMM [31][32][33]. However, these approaches only consider site-level random effects, which cannot handle repeated and correlated measures within each site.…”
Section: Introduction
confidence: 99%
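The limitation described here can be written out explicitly (generic notation assumed for illustration): a site-level random effect gives every observation from site i a single shared intercept, whereas repeated measures on subject j within site i call for an additional nested term:

```latex
\eta_{ij} = x_{ij}^{\top}\beta + u_i
\qquad\text{vs.}\qquad
\eta_{ijk} = x_{ijk}^{\top}\beta + u_i + v_{ij},
\quad u_i \sim \mathcal{N}(0, \sigma_u^2),\; v_{ij} \sim \mathcal{N}(0, \sigma_v^2)
```

Under the first model, within-site correlation between two measurements of the same subject is indistinguishable from correlation between different subjects at that site.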
“…There are some existing efforts on developing distributed algorithms for fitting GLMMs. For example, Zhu et al. (2020) proposed a distributed algorithm based on the Expectation-Maximization (EM) algorithm [8]. However, it is well known that the EM algorithm usually takes many iterations to converge.…”
Section: Introduction
confidence: 99%