2013
DOI: 10.1093/bioinformatics/btt075
Accounting for non-genetic factors by low-rank representation and sparse regression for eQTL mapping

Abstract: Supplementary data are available at Bioinformatics online.

Cited by 46 publications (107 citation statements)
References 32 publications

“…We add noise to the prior networks by randomly shuffling the elements in them. Furthermore, we use the signal-to-noise ratio, defined as SNR = ‖WX‖_F / ‖Ξ + E‖_F (Yang et al., 2013), to measure the noise level in the eQTL datasets. Here, we fix C = 10 and τ = 0.1, and use different σ’s to control the SNR.…”
Section: Methods
confidence: 99%
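
To make the quoted SNR definition concrete, here is a minimal NumPy sketch that simulates data of the form Z = WX + Ξ + E and computes the ratio. The Frobenius norms, the matrix shapes, and the rank-one hidden factor Ξ are assumptions made for illustration, not the setup of the citing paper.

```python
import numpy as np

def snr(W, X, Xi, E):
    """Signal-to-noise ratio: magnitude of the genetic signal ||WX||_F
    relative to the confounding-plus-noise magnitude ||Xi + E||_F."""
    return np.linalg.norm(W @ X) / np.linalg.norm(Xi + E)

# Toy eQTL simulation: D traits, K SNPs, N samples (shapes are assumptions).
rng = np.random.default_rng(0)
D, K, N = 100, 50, 200
W = rng.binomial(1, 0.05, (D, K)) * rng.normal(size=(D, K))  # sparse eQTL effects
X = rng.binomial(2, 0.3, (K, N)).astype(float)               # genotypes in {0, 1, 2}
Xi = np.outer(rng.normal(size=D), rng.normal(size=N))        # rank-one hidden factor
sigma = 0.5                                                  # noise scale
E = sigma * rng.normal(size=(D, N))                          # i.i.d. Gaussian noise
Z = W @ X + Xi + E
print(f"SNR = {snr(W, X, Xi, E):.2f}")
```

Raising σ inflates ‖Ξ + E‖_F and so lowers the SNR, matching the quote’s use of different σ’s to control the noise level.
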
“…LORS (Yang et al., 2013) uses a low-rank matrix L ∈ ℝ^{N×D} to account for the variations caused by hidden factors. The objective function of LORS is

min_{W,μ,L} (1/2)‖Z − WX − μ1 − L‖_F² + η‖W‖₁ + λ‖L‖_*,

where ‖·‖_* is the nuclear norm, η is the parameter of the ℓ₁ penalty that controls the sparsity of W, and λ is the regularization parameter that controls the rank of L.…”
Section: Background: Linear Regression With Graph Regularizer
confidence: 99%
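
Because the quoted objective couples a smooth least-squares term with two separable penalties, it can be minimized by alternating proximal updates: soft-thresholding handles the ℓ₁ penalty on W, and singular value thresholding handles the nuclear norm on L. The sketch below is not the authors’ released LORS code; the shapes (Z: D×N, X: K×N, W: D×K, L: D×N), the fixed iteration count, and the update order are illustrative assumptions.

```python
import numpy as np

def soft(A, t):
    """Elementwise soft-thresholding: the prox operator of t*||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def svt(A, t):
    """Singular value thresholding: the prox operator of t*||.||_*."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def lors_sketch(Z, X, eta=0.1, lam=1.0, iters=200):
    """Alternating proximal sketch of
    min_{W,mu,L} 0.5*||Z - WX - mu 1^T - L||_F^2 + eta*||W||_1 + lam*||L||_*."""
    D, N = Z.shape
    W = np.zeros((D, X.shape[0]))
    L = np.zeros((D, N))
    step = 1.0 / np.linalg.norm(X @ X.T, 2)   # 1/Lipschitz constant of the W-gradient
    for _ in range(iters):
        mu = (Z - W @ X - L).mean(axis=1, keepdims=True)  # closed-form mean update
        G = (W @ X + mu + L - Z) @ X.T                    # gradient of smooth part in W
        W = soft(W - step * G, step * eta)                # proximal gradient step on W
        L = svt(Z - W @ X - mu, lam)                      # exact minimization over L
    return W, mu, L
```

Each L-update requires a full SVD of a D×N matrix, which is exactly the per-iteration cost that the faster SVT strategies quoted below aim to avoid.
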
“…By enlarging the forward searching scope and reducing the backward searching radius, the problem can be solved. An acceleration factor range, as defined in (24), was introduced to reduce the computation time.…”
Section: Algorithm Description
confidence: 99%
“…The most direct approach to SVT is to apply a full SVD via svd and then soft-threshold the singular values. Judging from the distributed code, this approach is used in practice in many matrix-learning problems, e.g., Kalofolias, Bresson, Bronstein, and Vandergheynst (2014); Chi et al. (2013); Parikh and Boyd (2013); Yang, Wang, Zhang, and Zhao (2013); Zhou, Liu, Wan, and Yu (2014); Zhou and Li (2014); Zhang et al. (2017); Otazo, Candès, and Sodickson (2015); Goldstein, Studer, and Baraniuk (2015), to name a few. However, the built-in function svd computes the full SVD of a dense matrix and is therefore very time-consuming and computationally expensive for large-scale problems.…”
Section: Introduction
confidence: 99%
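
A standard remedy for the cost the passage describes is to replace the dense full SVD with a partial SVD that computes only the leading singular triplets, growing the rank guess until every singular value above the threshold has been captured. This is a minimal sketch using scipy.sparse.linalg.svds; the name svt_partial, the initial guess k, and the growth step are illustrative choices, not code from any of the cited papers.

```python
import numpy as np
from scipy.sparse.linalg import svds

def svt_partial(A, tau, k=10, step=5):
    """Singular value thresholding via partial SVD: compute only the top
    singular triplets and enlarge the rank guess k until the smallest
    computed singular value falls below the threshold tau."""
    k_max = min(A.shape) - 1          # svds requires k < min(A.shape)
    while True:
        k_eff = min(k, k_max)
        U, s, Vt = svds(A, k=k_eff)   # leading k_eff singular triplets
        if s.min() <= tau or k_eff == k_max:
            break                     # all singular values > tau are in hand
        k += step                     # guess was too small: search forward
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```

When the thresholded result is effectively low-rank, each svds call touches only a few singular triplets, so the loop is far cheaper than a dense full svd on large matrices.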