2012
DOI: 10.1186/1471-2105-13-s13-s1

Sources of variation in false discovery rate estimation include sample size, correlation, and inherent differences between groups

Abstract: Background: High-throughput technologies enable the testing of tens of thousands of measurements simultaneously. Identification of genes that are differentially expressed or associated with clinical outcomes invokes the multiple testing problem. False Discovery Rate (FDR) control is a statistical method used to correct for multiple comparisons for independent or weakly dependent test statistics. Although FDR control is frequently applied to microarray data analysis, gene expression is usually correlated, which …
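The correction the abstract refers to is conventionally the Benjamini-Hochberg step-up procedure. As a minimal sketch (not code from the paper; the p-values are simulated purely for illustration), the adjustment is a one-liner in base R:

```r
## Hypothetical example of Benjamini-Hochberg FDR adjustment in base R.
set.seed(42)
p <- c(runif(950), rbeta(50, 1, 50))  # 950 null + 50 "signal" p-values (simulated)
q <- p.adjust(p, method = "BH")       # BH-adjusted p-values
sum(q < 0.05)                         # number of discoveries at a 5% FDR threshold
```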

Cited by 40 publications (17 citation statements)
References 22 publications
“…The lack of differences in the endometrium between fertility-classified heifers could be due to the use of algorithms designed for fewer replicates and homogenous differences between experimental groups [47]. Therefore, the data were reanalyzed with Bioconductor Limma with no false discovery rate (FDR).…”
Section: Results (mentioning)
confidence: 99%
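A reanalysis "with Bioconductor Limma with no false discovery rate" corresponds to the standard limma pipeline reported with unadjusted p-values. A hedged sketch follows; the objects `expr` (a genes-by-samples log-expression matrix) and `group` (a two-level factor, e.g. fertility classification) are assumptions for illustration, not taken from the cited study:

```r
## Hypothetical limma workflow returning raw (unadjusted) p-values.
library(limma)
design <- model.matrix(~ group)            # two-group design (assumed)
fit <- eBayes(lmFit(expr, design))         # linear model + empirical Bayes moderation
tab <- topTable(fit, coef = 2, number = Inf,
                adjust.method = "none",    # no FDR correction, as in the quote
                sort.by = "p")
head(tab)                                  # genes ranked by unadjusted p-value
```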
“…Its high level of technical reproducibility has also been demonstrated [4][5][6]. Despite these advantages, recent analyses have revealed larger technical variability for the quantification of genes expressed at lower levels [7] and biases introduced by RNA-Seq technologies and the normalization methods [8][9][10]. However, most of these studies lack the absolute "truth" to objectively assess the performance and biases of RNA-Seq technology and the impact of data analysis approaches.…”
mentioning
confidence: 99%
“…To evaluate the performance of the different metrics, we generated artificial gene expression datasets representing scenarios that differed in the number of latent subgroups present and in the within (σ_g) and between (σ_p) tumor sample variance using the R package Umpire [20]. In each scenario, the log expression level for each gene was generated by a hierarchical model, in which σ_g controls the within-tumor variance and σ_p controls the variance across patients in the cohort [21].…”
Section: Results (mentioning)
confidence: 99%
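The two-level model in that excerpt can be sketched without the Umpire API itself (whose interface is not shown here). Below is a minimal base-R simulation under the stated variance structure; all sizes and parameter values are assumptions chosen for illustration:

```r
## Toy hierarchical model: per-gene baselines, patient-level means with
## between-patient sd sigma_p, then observed values with within-tumor sd sigma_g.
set.seed(1)
n_genes <- 1000; n_patients <- 40          # illustrative dimensions
sigma_p <- 1.0                             # between-patient variance component (assumed)
sigma_g <- 0.5                             # within-tumor variance component (assumed)
base <- rnorm(n_genes, mean = 7, sd = 1)   # baseline log expression per gene
mu_gp <- base + matrix(rnorm(n_genes * n_patients, 0, sigma_p),
                       n_genes, n_patients)       # patient-level gene means
logexpr <- mu_gp + matrix(rnorm(n_genes * n_patients, 0, sigma_g),
                          n_genes, n_patients)    # observed log expression
```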